-----------------------------------------------------------------------------------
Post ID:11887
Sender:"António Mota" <amsmota@...>
Post Date/Time:2009-01-02 17:37:40
Subject:Using "hypertext as the engine of application state" in "data-centric" services
Message:
On 02.01.2009, at 18:37, António Mota wrote:

> So, based on [2] I was wondering about changing my data a bit (actually a few bits), like this
>
> GET http://localhost:8080/rest/data/person/101
>
> <person>
>   <firstName>TONINHO</firstName>
>   <lastName>METRALHA</lastName>
>   <account href="http://localhost:8080/rest/data/bank/accounts/010123101">010123101</account>
> </person>
>
> This way I keep the "data-centric" aspect of the service while getting the benefits of HATEOAS when/where I need it.
>
> Is this a good approach in the light of HATEOAS?

Yes. You might also want to check whether at any point you rely on the specific structure of your URIs in your clients, e.g. the first part of the URI, or the "person" path element. If so, you might want to change things so that clients get to know about these from links. Ideally, your clients start with one single URI and only follow links from there.

> Also, referring to [2], is all that use of "extended" media types, like
>
> application/vnd.bank.org.account+xml
>
> a correct use of the concept of "hypertext as the engine of application state"? Or, to put it another way, is the use of media types to describe the *structure* (as opposed to the *nature*) of a data-centric response correct from a RESTful point of view?

I think this is perfectly RESTful, but I'm sure some people will disagree. It would be extremely interesting to get Roy's feedback on that article.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/

> Thanks all.
>
> [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
> [2] http://www.infoq.com/articles/subbu-allamaraju-rest
>
> P.S. I'm sorry to the discuss@restlet.tigris.org subscribers for the duplicate post, but I realized a little too late that this list is a more general place to ask this kind of question.
> _______________________________________________
>
> Melhores cumprimentos / Beir beannacht / Best regards
>
> António Manuel dos Santos Mota
>
> mobile PT: +351919623568 (deprecated)
> mobile IE: +353(0)877718363
> mail: amsmota@...
> skype: amsmota
> msn: antoniomsmota@hotmail.com
> linkedin: www.linkedin.com/in/amsmota
> _______________________________________________
Yes, that's a great step in the right direction. Most so-called RESTful services fail even at that level. What you're doing is augmenting your plain old XML with hypermedia information (i.e. creating a new XML-based hypermedia MIME type). However, you'll still have the problem that you'll need to describe to everyone what it means. To be even more WOA-compliant--since you're really after REST+WWW--you should consider using an existing hypermedia MIME type, such as XHTML, Atom, or RDF.

For example, in XHTML using semantic microformat annotations, your response may look like this:

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Metralha, Toninho</title>
  </head>
  <body>
    <div class="person">
      Name: <span class="lastName">Metralha</span>, <span class="firstName">Toninho</span><br/>
      <a class="account" href="http://localhost:8080/rest/data/bank/accounts/010123101" title="010123101">Account #010123101</a>
    </div>
  </body>
</html>

The downside is that this format is not compatible with your existing user agents (though content negotiation can address this issue), but the upside is that it's compatible with the most popular user agent around: the browser.

With microformats, information is extracted by inspecting the XHTML document using XPath expressions. For example, first locate the "person" element using "//*[@class='person']", then locate inside the person node the first and last names using ".//*[@class='firstName']" and ".//*[@class='lastName']" respectively. The inner text of these nodes holds the names. For linked information, the convention is a bit different in that the data is captured in the "@title" attribute instead of the plain text.

The end result is that you have pretty much complete freedom in how you present the information, while still capturing all the necessary details. In short, you've made your web-services layer completely hypermedia driven. More about microformats at [1].
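To make that extraction concrete, here is a minimal sketch in Java using the standard javax.xml.xpath API. The class name and the embedded sample document are illustrative only, mirroring the XHTML example above; this is not any published client library:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class MicroformatClient {

    // Illustrative response body, mirroring the XHTML sample above.
    static final String SAMPLE =
        "<html xmlns='http://www.w3.org/1999/xhtml'><body>"
        + "<div class='person'>Name: <span class='lastName'>Metralha</span>, "
        + "<span class='firstName'>Toninho</span> "
        + "<a class='account' href='http://localhost:8080/rest/data/bank/accounts/010123101'"
        + " title='010123101'>Account #010123101</a></div></body></html>";

    // Extracts [firstName, lastName, account, accountUri] by inspecting
    // the microformat annotations rather than a custom XML schema.
    public static String[] extractPerson(String xhtml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xhtml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // First locate the "person" node, then query relative to it.
        Node person = (Node) xpath.evaluate("//*[@class='person']", doc, XPathConstants.NODE);
        String firstName = xpath.evaluate(".//*[@class='firstName']", person);
        String lastName = xpath.evaluate(".//*[@class='lastName']", person);
        // For linked data, the value lives in @title; @href is the hypermedia control.
        String account = xpath.evaluate(".//*[@class='account']/@title", person);
        String accountUri = xpath.evaluate(".//*[@class='account']/@href", person);
        return new String[] { firstName, lastName, account, accountUri };
    }

    public static void main(String[] args) throws Exception {
        for (String field : extractPerson(SAMPLE)) {
            System.out.println(field);
        }
    }
}
```

Note how the client never looks at the URI structure or the media type parameters here: it only follows the microformat conventions, so the server remains free to restructure the page around the annotated elements.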
- Steve

[1] http://microformats.org/

---------------------------------
Steve G. Bjorg
MindTouch
San Diego, CA
619.795.8459 office
425.891.5913 mobile
http://twitter.com/bjorg

On Jan 2, 2009, at 9:37 AM, António Mota wrote:

> Hello again.
>
> After reading some interesting posts about REST, especially
>
> [1] REST APIs must be hypertext-driven
> [2] Describing RESTful Applications
>
> I started to think that my own RESTish implementation of a "service middleware" wasn't very RESTish at all. Basically because it didn't have much of that HATEOAS constraint.
>
> Now that was not a big problem because all the services implemented on top of that "service layer" were a kind of basic "request-response only", where there were very few situations where I had to change from a "state" returned by the service to another, subsequent state. It was more like "data-centric" access, like
>
> GET http://localhost:8080/rest/data/person/101
>
> <person>
>   <firstName>TONINHO</firstName>
>   <lastName>METRALHA</lastName>
>   <account>010123101</account>
> </person>
>
> and that is enough because that's all the info I need for now and it's convenient to serialize/deserialize in Java using XStream. But of course if it were just a little more complex example I'd start to lose expandability and extensibility and RESTability...
>
> So, based on [2] I was wondering about changing my data a bit (actually a few bits), like this
>
> GET http://localhost:8080/rest/data/person/101
>
> <person>
>   <firstName>TONINHO</firstName>
>   <lastName>METRALHA</lastName>
>   <account href="http://localhost:8080/rest/data/bank/accounts/010123101">010123101</account>
> </person>
>
> This way I keep the "data-centric" aspect of the service while getting the benefits of HATEOAS when/where I need it.
>
> Is this a good approach in the light of HATEOAS?
> Also, referring to [2], is all that use of "extended" media types, like
>
> application/vnd.bank.org.account+xml
>
> a correct use of the concept of "hypertext as the engine of application state"? Or, to put it another way, is the use of media types to describe the *structure* (as opposed to the *nature*) of a data-centric response correct from a RESTful point of view?
>
> Thanks all.
>
> [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
> [2] http://www.infoq.com/articles/subbu-allamaraju-rest
>
> P.S. I'm sorry to the discuss@... subscribers for the duplicate post, but I realized a little too late that this list is a more general place to ask this kind of question.
> _______________________________________________
>
> Melhores cumprimentos / Beir beannacht / Best regards
>
> António Manuel dos Santos Mota
>
> mobile PT: +351919623568 (deprecated)
> mobile IE: +353(0)877718363
> mail: amsmota@...
> skype: amsmota
> msn: antoniomsmota@...
> linkedin: www.linkedin.com/in/amsmota
> _______________________________________________
On Jan 2, 2009, at 10:06 AM, Stefan Tilkov wrote:

> On 02.01.2009, at 18:37, António Mota wrote:
>> Also, referring to [2], is all that use of "extended" media types, like
>>
>> application/vnd.bank.org.account+xml
>>
>> a correct use of the concept of "hypertext as the engine of application state"? Or, to put it another way, is the use of media types to describe the *structure* (as opposed to the *nature*) of a data-centric response correct from a RESTful point of view?
>
> I think this is perfectly RESTful, but I'm sure some people will disagree. It would be extremely interesting to get Roy's feedback on that article.

Roy's blog post entitled "REST APIs must be hypertext-driven" [1] seems to indicate that documents are just that: documents. Their purpose is discovered by inspection. Of course, there are expectations when following a link, and breaking these may cause confusion in the user agent, even for humans (for example, following a "pay now" link and arriving at a recipe for chocolate cake). Furthermore, encoding the nature of the document in the content type means that this information will be out-of-band, which may cause other complications.

- Steve

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
On Jan 2, 2009, at 10:25 AM, Steve Bjorg wrote:

> Roy's blog post entitled "REST APIs must be hypertext-driven" [1] seems to indicate that documents are just that: documents. Their purpose is discovered by inspection. Of course, there are expectations when following a link, and breaking these may cause confusion in the user agent, even for humans (for example, following a "pay now" link and arriving at a recipe for chocolate cake). Furthermore, encoding the nature of the document in the content type means that this information will be out-of-band, which may cause other complications.

That is a misinterpretation. The idea behind using concrete media types is to keep the interactions visible, i.e., we should be able to apply some amount of processing without parsing the representation. That processing could be in the form of routing at a proxy, prioritizing/deprioritizing some messages, or applying different caching rules. The moment you start overloading media types, that visibility would be gone. Also note that this is a problem with any payload format, including Atom.

If visibility is of no importance for your applications, fine, use HTML/XHTML, POX, Atom or whatever. If visibility is important, look for or design concrete media types.

Subbu
--
http://subbu.org
On Jan 2, 2009, at 10:37 AM, Subbu Allamaraju wrote:

> On Jan 2, 2009, at 10:25 AM, Steve Bjorg wrote:
>
>> Roy's blog post entitled "REST APIs must be hypertext-driven" [1] seems to indicate that documents are just that: documents. Their purpose is discovered by inspection. Of course, there are expectations when following a link, and breaking these may cause confusion in the user agent, even for humans (for example, following a "pay now" link and arriving at a recipe for chocolate cake). Furthermore, encoding the nature of the document in the content type means that this information will be out-of-band, which may cause other complications.
>
> That is a misinterpretation.

Sorry, but can you clarify if you simply don't agree with my interpretation or if you *know* that my interpretation is wrong?

> The idea behind using concrete media types is to keep the interactions visible, i.e., we should be able to apply some amount of processing without parsing the representation. That processing could be in the form of routing at a proxy, prioritizing/deprioritizing some messages, or applying different caching rules.

That can also be achieved by adding custom HTTP headers or known endpoints. Since we're already talking about custom MIME types, I don't see these alternatives as fundamentally different. Either way, the proxy/gateway knows about the application's intent, which means they are coupled. Though headers would be more generic (or neutral in Nick-speak).

> The moment you start overloading media types, that visibility would be gone. Also note that this is a problem with any payload format, including Atom.

Would the web really have scaled as well as it did if we had "text/login+html", "text/checkout+html" instead of only "text/html"? What if a page allows both operations, does it become "text/login+checkout+html"? I'm a bit wary about manifesting the intent at that level.
> If visibility is of no importance for your applications, fine, use HTML/XHTML, POX, Atom or whatever. If visibility is important, look for or design concrete media types.

Some concrete examples of when to use one over the other could be helpful. What kind of documents/applications require specialized MIME types? For instance, my "text/login+html" types above are probably not what you had in mind.

- Steve

--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
On Jan 2, 2009, at 11:04 AM, Steve Bjorg wrote:

> On Jan 2, 2009, at 10:37 AM, Subbu Allamaraju wrote:
>
>> On Jan 2, 2009, at 10:25 AM, Steve Bjorg wrote:
>>
>>> Roy's blog post entitled "REST APIs must be hypertext-driven" [1] seems to indicate that documents are just that: documents. Their purpose is discovered by inspection. Of course, there are expectations when following a link, and breaking these may cause confusion in the user agent, even for humans (for example, following a "pay now" link and arriving at a recipe for chocolate cake). Furthermore, encoding the nature of the document in the content type means that this information will be out-of-band, which may cause other complications.
>>
>> That is a misinterpretation.
>
> Sorry, but can you clarify if you simply don't agree with my interpretation or if you *know* that my interpretation is wrong?

See RFC 2616, sections 7.1 and 14.17. Entity headers like Content-Type provide metadata about the entity. What a given media type means is something clients and servers learn out of band. This is true for existing IANA media types as well as new media types.

>> The idea behind using concrete media types is to keep the interactions visible, i.e., we should be able to apply some amount of processing without parsing the representation. That processing could be in the form of routing at a proxy, prioritizing/deprioritizing some messages, or applying different caching rules.
>
> That can also be achieved by adding custom HTTP headers or known endpoints. Since we're already talking about custom MIME types, I don't see these alternatives as fundamentally different. Either way, the proxy/gateway knows about the application's intent, which means they are coupled. Though headers would be more generic (or neutral in Nick-speak).

Same as above.

>> The moment you start overloading media types, that visibility would be gone. Also note that this is a problem with any payload format, including Atom.
>
> Would the web really have scaled as well as it did if we had "text/login+html", "text/checkout+html" instead of only "text/html"? What if a page allows both operations, does it become "text/login+checkout+html"? I'm a bit wary about manifesting the intent at that level.

The answer is simple. Are clients required to apply special processing rules to any of these?

>> If visibility is of no importance for your applications, fine, use HTML/XHTML, POX, Atom or whatever. If visibility is important, look for or design concrete media types.
>
> Some concrete examples of when to use one over the other could be helpful. What kind of documents/applications require specialized MIME types? For instance, my "text/login+html" types above are probably not what you had in mind.

IMO, there is no binary answer to the question of whether you should always use some well-known media type or not. Either approach could bite back if not well thought out. There have been instances where some format designers "forgot" to assign media types and created a mess. Apart from visibility, consider interoperability and compatibility.

Subbu
--
http://subbu.org
The REST Wiki [1] seems to be down again. In fact, I believe the number of times I've tried to access it and found it down or unbearably slow far exceeds the number of times I managed to actually use it.

I know somebody provides it for free, and I'm thankful, but that doesn't make its unavailability any more tolerable. Is there some way to increase its QoS? Can we find a non-profit organization with some CPU cycles and bandwidth to spare? (My company can offer to host it, but it's probably best moved to some "neutral" place.)

[1] http://rest.blueoxen.net/

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Any reason you wouldn't want to move the contents to http://restpatterns.org? It's hosted on a scaling EC2 cluster with 24/7 monitoring [1] and the wiki is FOSS [2].

- Steve

[1] http://is.gd/bq1E
[2] http://www.mindtouch.com/Community

--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch

On Jan 3, 2009, at 12:00 PM, Stefan Tilkov wrote:

> The REST Wiki [1] seems to be down again. In fact, I believe the number of times I've tried to access it and found it down or unbearably slow far exceeds the number of times I managed to actually use it.
>
> I know somebody provides it for free, and I'm thankful, but that doesn't make its unavailability any more tolerable. Is there some way to increase its QoS? Can we find a non-profit organization with some CPU cycles and bandwidth to spare? (My company can offer to host it, but it's probably best moved to some "neutral" place.)
>
> [1] http://rest.blueoxen.net/
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
On 03.01.2009, at 21:13, Steve Bjorg wrote:

> Any reason you wouldn't want to move the contents to http://restpatterns.org?

I didn't author or edit a single page there, so it would not be for me to decide.

> It's hosted on a scaling EC2 cluster with 24/7 monitoring [1]

That sounds great.

> and the wiki is FOSS [2].

I admit that I would feel a little more comfortable with wiki software not associated with a company. But I don't really care as long as the data can be exported in some standard wiki markup.

Stefan

> - Steve
>
> [1] http://is.gd/bq1E
> [2] http://www.mindtouch.com/Community
>
> --------------
> Steve G. Bjorg
> http://mindtouch.com
> http://twitter.com/bjorg
> irc.freenode.net #mindtouch
>
> On Jan 3, 2009, at 12:00 PM, Stefan Tilkov wrote:
>
>> The REST Wiki [1] seems to be down again. In fact, I believe the number of times I've tried to access it and found it down or unbearably slow far exceeds the number of times I managed to actually use it.
>>
>> I know somebody provides it for free, and I'm thankful, but that doesn't make its unavailability any more tolerable. Is there some way to increase its QoS? Can we find a non-profit organization with some CPU cycles and bandwidth to spare? (My company can offer to host it, but it's probably best moved to some "neutral" place.)
>>
>> [1] http://rest.blueoxen.net/
>>
>> Stefan
>> --
>> Stefan Tilkov, http://www.innoq.com/blog/st/
> IMO, there is no binary answer to the question of whether you should always use some well-known media type or not. Either approach could bite back if not well thought out. There have been instances where some format designers "forgot" to assign media types and created a mess. Apart from visibility, consider interoperability and compatibility.

Yeah, that would be my opinion too. And in Roy's words:

"A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types)."

Personally, I would prefer to err on the side of extending existing standard media types, because I hate too much ad-hoc splintering of specifications for "special" cases that probably are not as special as their creators think. ;)

-L
On Jan 2, 2009, at 11:47 AM, Subbu Allamaraju wrote:

> On Jan 2, 2009, at 11:04 AM, Steve Bjorg wrote:
>
>> On Jan 2, 2009, at 10:37 AM, Subbu Allamaraju wrote:
>>
>>> On Jan 2, 2009, at 10:25 AM, Steve Bjorg wrote:
>>>
>>>> Roy's blog post entitled "REST APIs must be hypertext-driven" [1] seems to indicate that documents are just that: documents. Their purpose is discovered by inspection. Of course, there are expectations when following a link, and breaking these may cause confusion in the user agent, even for humans (for example, following a "pay now" link and arriving at a recipe for chocolate cake). Furthermore, encoding the nature of the document in the content type means that this information will be out-of-band, which may cause other complications.
>>>
>>> That is a misinterpretation.
>>
>> Sorry, but can you clarify if you simply don't agree with my interpretation or if you *know* that my interpretation is wrong?
>
> See RFC 2616, sections 7.1 and 14.17. Entity headers like Content-Type provide metadata about the entity. What a given media type means is something clients and servers learn out of band. This is true for existing IANA media types as well as new media types.

Yes, and that's not a good thing. It means that user agents, intermediaries, and servers must be revved simultaneously. The cost of "educating" endpoints is so enormous that it should only be done as a collective effort (I did a post on this some weeks ago showing the information-theoretical justification for this). It also means that there is no graceful degradation, since either a MIME type is known or it's not. And it doesn't set the stage for serendipitous engineering, imo.

>>> The moment you start overloading media types, that visibility would be gone. Also note that this is a problem with any payload format, including Atom.
>>
>> Would the web really have scaled as well as it did if we had "text/login+html", "text/checkout+html" instead of only "text/html"? What if a page allows both operations, does it become "text/login+checkout+html"? I'm a bit wary about manifesting the intent at that level.
>
> The answer is simple. Are clients required to apply special processing rules to any of these?

Maybe yes, maybe no... that's the point. A server cannot and should not try to anticipate everything the client wants to do. Instead, it should package up the information into a "document" that is self-descriptive and not application-specific (i.e. application/xhtml+xml vs. application/vnd.xyz+xml).

> IMO, there is no binary answer to the question of whether you should always use some well-known media type or not. Either approach could bite back if not well thought out. There have been instances where some format designers "forgot" to assign media types and created a mess. Apart from visibility, consider interoperability and compatibility.

First off, let me make clear that I'm discussing this to learn. I'm a proponent, not a zealot. So I appreciate your point of view, even if I probe its reasons. :)

To provide context: the API for our wiki application relies heavily today on custom XML types. It has its benefits (time to market), but also costs (time to educate). However, ever since I started digging my teeth into WOA, I've been putting on a "what-if" hat. For instance, what if one were to rely on semantic annotations of HTML instead of custom XML types? What if a user agent had to interpret a submit form instead of relying solely on built-in knowledge?

Preliminary experimentation with using XHTML as the standard MIME type for web services has been quite positive, but I haven't applied it to a product yet. And I know from my write-up about the cost of custom MIME types that using this approach is awfully expensive in the grand scheme of things. Hence, using established hypermedia formats should be preferred. However, since the cost is bound by the total population of user agents which need to know about a type, it's negligible for private, in-house, custom-made applications.

So, in conclusion, I agree with you that the answer varies, but I believe the outcome is not driven by the application but by its reach (which you may have alluded to already by stating interoperability and compatibility). The more user agents there are interacting with your API, the more driven you should be to fit it into an established MIME type.

- Steve

--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
>> See RFC 2616#7.1 and 14.17. Entity headers like Content-Type
>> provide metadata about the entity. What a given media type means is
>> something clients and servers learn out of band. This is true for
>> existing IANA media types as well as new media types.
>
> Yes, and that's not a good thing. It means that user agents,
> intermediaries, and servers must be revved simultaneously. The cost
> of "educating" end-points is so enormous that it should only be done
> as a collective effort (I did a post on this some weeks ago showing
> the information theoretical justification for this). It also means
> that there is no graceful degradation since either a MIME type is
> known or it's not. And it doesn't set the stage for serendipitous
> engineering, imo.
Steve - that is what I was referring to as the misinterpretation.
Contrary to what you are saying, media types encourage shared formats.
In the absence of that, we have fragmented apps today.
Just imagine Flickr, Picasa Web, Panoramio and others using a shared
media type for managing photos. Since there is no such thing, what do
we have today? Point-to-point integrations that are not portable. That
is the opposite of "serendipitous engineering".
See Dare Obasanjo's post [1] where he uses contacts/address book APIs
as an example. The outcome is the same - fragmentation. I would also
include [2] in that bucket. OpenSocial has an opportunity to get it
right, but I am not aware of any move in that direction.
Here is what is happening today. Most server applications are
expecting client developers to do one of the following kinds of
programming:
if (someFunc(uri1)) { // i.e. determine the kind of the URI used
    doBlah();
} else if (someFunc(uri2)) {
    doBlahBlah();
}
The second category of code is the following for textual document
types such as XML, Atom and JSON.
obj = parseResponse(...);
if ("foo".equals(obj.rootElemName)) { // or some nested element or a JSON property
    doBlah();
} else if ("bar".equals(obj.rootElemName)) {
    doBlahBlah();
}
The first approach introduces coupling between URIs and clients while
the second expects the clients to guess the type of the representation
by parsing it. As the late Alan Flavell wrote in [3], content-type
guessing was a source of a number of bugs in IE. This is not to say
that these two styles of code must be avoided at all costs.
Nonetheless, if applied at the root level, they are tantamount to media
type tunneling, and tunneling, in general, is sub-optimal.
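By contrast, a client keyed to concrete media types needs neither URI inspection nor payload sniffing: it can dispatch on the Content-Type header alone. A minimal sketch, reusing the vnd.bank.org types from this thread; the second type name and the handler methods are illustrative, not any real API:

```java
public class MediaTypeDispatcher {

    // Dispatch on the declared media type -- no URI inspection and
    // no parsing of the body to guess what kind of document arrived.
    public static String dispatch(String contentType, String body) {
        // Strip any parameters such as "; charset=utf-8".
        String mediaType = contentType.split(";")[0].trim();
        switch (mediaType) {
            case "application/vnd.bank.org.account+xml":
                return handleAccount(body);
            case "application/vnd.bank.org.transaction+xml":
                return handleTransaction(body);
            default:
                throw new IllegalArgumentException("Unknown media type: " + mediaType);
        }
    }

    // Illustrative handlers; a real client would parse the body here.
    static String handleAccount(String body) { return "account"; }
    static String handleTransaction(String body) { return "transaction"; }
}
```

The same dispatch is available to intermediaries (proxies, caches) without touching the body, which is the visibility argument above.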
Consequently, most developers are trying to look for schemas, WADL and
the like to simplify this mess, which undermines the uniform interface
- please see some recent threads on why.
By the way, using the right HTTP method for the right operation is
just part of applying the uniform interface.
>>>> The moment you start overloading media types, that visibility
>>>> would be gone. Also note that this is a problem with any payload
>>>> format, including Atom.
>>>
>>> Would the web really have scaled as well as it did if we had "text/
>>> login+html", "text/checkout+html" instead of only "text/html"?
>>> What if a page allows both operations, does it become "text/login
>>> +checkout+html"? I'm a bit wary about manifesting the intent at
>>> that level.
>>
>> The answer is simple. Are clients required to apply special
>> processing rules to any of these?
>
> Maybe yes, maybe no... that's the point. A server cannot and should
> not try to anticipate everything the client wants to do. Instead, it
> should package up the information into a "document" that is self-
> descriptive and not application specific (i.e. application/xhtml+xml
> vs. application/vnd.xyz+xml).
How about using SOAP then? I am not asking sarcastically. A lot of
people in the industry consider SOAP a self-describing format. By
looking at the SOAP headers and the namespace of the body's content,
one should be able to figure out how to process a message. So, why not?
> Preliminary experimentation with using XHTML as standard MIME type
> for web-services has been quite positive, but I haven't applied it to a
> product yet. And I know from my write up about the cost of custom
> MIME types, that using this approach is awfully expensive in the
> grand scheme of things. Hence, using established hypermedia formats
> should be preferred. However, since cost is bound by the total
> population of user agents which need to know about it, it's
> negligible for private, in-house, custom made applications.
See the examples above, which show that not introducing media types
can become awfully expensive. Once you have a media type, there is at
least a chance for someone else to reuse it or collaborate with you on
improving it.
> So, in conclusion, I agree with you that the answer varies, but I
> believe the outcome is not driven by the application, but its reach
> (which you may have alluded to already by stating interoperability
> and compatibility). The more user agents there are interacting with
> your API, the more driven you should be to fit it into an
> established MIME type.
Compatibility considerations do sometimes require introducing new
media types.
Subbu
[1] http://www.25hoursaday.com/weblog/2008/10/24/RESTAPIDesignInventMediaTypesNotProtocolsAndUnderstandTheImportanceOfHyperlinks.aspx
[2] http://www.opensocial.org/Technical-Resources/opensocial-spec-v081/restful-protocol
[3] http://www.alanflavell.org.uk/www/content-type.html
On Jan 4, 2009, at 5:00 PM, Subbu Allamaraju wrote:
> Steve - that is what I was referring to as the misinterpretation.
> Contrary to what you are saying, media types encourage shared formats.
> In the absence of that, we have fragmented apps today.
Subbu, I'm sorry, but I must be missing something. Your answer seems
to advocate using shared types, such as "application/xhtml+xml"
instead of specialized MIME types such as
"application/vnd.bank.org.account+xml". Yet that's the same thing I've
been saying?!?
António's original question was whether the MIME type should capture the
internals of the returned document (e.g. transaction vs. account). It
should not. Instead, the document itself must drive the state
transitions of the user agent. For RESTful applications, the content
type should only convey what hypermedia representation was used (XHTML
vs. Atom vs. RDF etc.).
> Just imagine Flickr, Picasa Web, Panoramio and others using a shared
> media type for managing photos. Since there is no such thing, what do
> we have today? Point-to-point integrations that are not portable. That
> is the opposite of "serendipitous engineering".
Nod.
> Here is what is happening today. Most server applications are
> expecting client developers to do one of the following kinds of
> programming:
>
> if (someFunc(uri1)) { // i.e. determine the kind of the URI used
>     doBlah();
> } else if (someFunc(uri2)) {
>     doBlahBlah();
> }
I don't understand this code. Why would the client inspect the URI to
determine what to do? URIs are opaque and should be used as is (unless
the context prescribes how URIs can be constructed, such as URI
templates or form elements). Maybe it's just because I've never seen
such code before, but do people really do this?
> The second category of code is the following for textual document
> types such as XML, Atom and JSON.
>
> obj = parseResponse(...);
> if ("foo".equals(obj.rootElemName)) { // or some nested element or a JSON property
>     doBlah();
> } else if ("bar".equals(obj.rootElemName)) {
>     doBlahBlah();
> }
Well, the root element should always be <html>. ;) Joking aside, the
user agent should only attempt to find the expected information in the
response document. The MIME type should be irrelevant at this stage
since it merely conveys the nature of the hypermedia engine used.
Different hypermedia engines have different expressive powers. For
instance, HTML is the most versatile (arguably too versatile) and
includes ways to express resource creation and code-on-demand, whereas
AtomPub only defines resource deletion and editing (both define rules
for resource retrieval).
Introducing a specialized MIME type here would confuse the user agent,
since it wouldn't know in what "language" you're expressing your
hypermedia engine.
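The behavior Steve describes — the user agent locating the expected information in the document instead of inspecting URIs or root elements — can be sketched minimally. This is an illustrative sketch only, reusing the `<person>`/account example from earlier in the thread; the `<link rel=...>` convention here is an assumption, not a format anyone in the thread prescribed:

```python
import xml.etree.ElementTree as ET

# Illustrative representation, after the <person> example earlier in
# the thread; the <link rel=...> element is a hypothetical convention.
DOC = """
<person>
  <firstName>TONINHO</firstName>
  <lastName>METRALHA</lastName>
  <link rel="account" href="http://localhost:8080/rest/data/bank/accounts/010123101"/>
</person>
"""

def follow(doc_xml, rel):
    """Return the href of the link carrying the given relation name.
    The URI itself stays opaque: the client never parses or matches it."""
    root = ET.fromstring(doc_xml)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None  # relation not offered: that transition is unavailable

account_uri = follow(DOC, "account")
```

The client's only coupling is to the relation name "account" and the hypermedia format, never to the shape of the URI.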
> The first approach introduces coupling between URIs and clients, while
> the second expects the clients to guess the type of the representation
> by parsing it. As the late Alan Flavell wrote in [3], content-type
> guessing was a source of a number of bugs in IE. This is not to say
> that these two styles of code must be avoided at all cost.
> Nonetheless, if applied at the root level, they are tantamount to media
> type tunneling, and tunneling, in general, is sub-optimal.
See above. There is no guessing of content types.
> Consequently, most developers are trying to look for schemas, WADL and
> the like to simplify this mess, which undermines the uniform interface
> - please see some recent threads on why.
I'm not one of them. Contracts are the antithesis of HATEOAS.
> By the way, using the right HTTP method for the right operation is
> just part of applying the uniform interface.
And that is governed by the MIME type definition (i.e. use GET for <a
href="...">), not by the server. The server merely uses the
expressive power of the hypermedia type to convey possible operations.
> How about using SOAP then? I am not asking sarcastically. A lot in the
> industry consider SOAP as a self-describing format. By looking at the
> SOAP headers, and the namespace of the body's content, one should be
> able to figure out how to process a message. So, why not?
SOAP is not self-descriptive the way HTML or Atom are: it's statically
self-descriptive, not dynamically. That is, it does not define how the
response describes further state transitions. I'm going to leave it at
that, as it's a tangent.
>> Preliminary experimentation with using XHTML as a standard MIME type
>> for web services has been quite positive, but I haven't applied it to
>> a product yet. And I know from my write-up about the cost of custom
>> MIME types that using this approach is awfully expensive in the
>> grand scheme of things. Hence, using established hypermedia formats
>> should be preferred. However, since the cost is bound by the total
>> population of user agents which need to know about it, it's
>> negligible for private, in-house, custom-made applications.
>
> See the examples above, which show that not introducing media types
> can become awfully expensive. Once you have a media type, there is at
> least a chance for someone else to reuse it or collaborate with you on
> improving it.
Unless you're going to push the media type definitions through a
standards body, you'll be wasting your time. Your time would be
better invested in extending or constraining an existing hypermedia
type. For instance, you could limit HTML to a subset for your web
services, or augment Atom to provide the means to describe how
resources can be created. I think Google did a pretty nice job with
their GData API and how they've applied it to their various
applications, while at the same time exposing the limitations of
AtomPub as the hypermedia engine.
>> So, in conclusion, I agree with you that the answer varies, but I
>> believe the outcome is not driven by the application, but its reach
>> (which you may have alluded to already by stating interoperability
>> and compatibility). The more user agents there are interacting with
>> your API, the more driven you should be to fit it into an
>> established MIME type.
>
> Compatibility considerations do sometimes require introducing new
> media types.
This is a contradiction in terms. How can you justify introducing a
new media type for the purpose of compatibility? By definition
nothing in the world will be compatible with it!
I stand by my initial position that custom types are bad. New RESTful
APIs should either build on AtomPub or a form of semantic HTML. The
choice will be driven by how much expressive power the application
expects the user agent to be able to cope with.
- Steve
--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
On Jan 4, 2009, at 9:38 PM, Steve Bjorg wrote:
> On Jan 4, 2009, at 5:00 PM, Subbu Allamaraju wrote:
>> Steve - that is what I was referring to as the misinterpretation.
>> Contrary to what you are saying, media types encourage shared
>> formats.
>> In the absence of that, we have fragmented apps today.
>
> Subbu, I'm sorry, but I must be missing something. Your answer
> seems to advocate using shared types, such as "application/xhtml
> +xml" instead of specialized MIME types such as "application/
> vnd.bank.org.account+xml". Yet that was the same as I've been
> saying?!?
Nope. If you read my previous response on tunneling and Dare's post,
it is the opposite of what you are saying.
> For RESTful applications, the content type should only convey what
> hypermedia representation was used (XHTML vs. Atom vs. RDF etc.).
Can you explain how you came to that conclusion?
>> if(someFunc(uri1)) // i.e. determine the kind of the URI used
>> {
>> doBlah();
>> }
>> else if(someFunc(uri2))
>> {
>> doBlahBlah();
>> }
>
> I don't understand this code. Why would the client inspect the URI
> to determine what to do? URIs are opaque and should be used as-is
> (unless the context prescribes how URIs can be constructed, such as
> URI templates or form elements). Maybe it's just because I've never
> seen such code before, but do people really do this?
Flickr style client code may look like the above. Or, take a client
app that is trying to work with the three photo sharing sites I
mentioned. Please don't read too much into the structure of the code.
It is the dependency I am pointing out.
>>> So, in conclusion, I agree with you that the answer varies, but I
>>> believe the outcome is not driven by the application, but its reach
>>> (which you may have alluded to already by stating interoperability
>>> and compatibility). The more user agents there are interacting with
>>> your API, the more driven you should be to fit it into an
>>> established MIME type.
>>
>> Compatibility considerations do sometimes require introducing new
>> media types.
>
> This is a contradiction in terms. How can you justify introducing a
> new media type for the purpose of compatibility? By definition
> nothing in the world will be compatible with it!
Nope. For the same reason XHTML has a different media type than HTML.
> I stand by my initial position that custom types are bad. New
> RESTful APIs should either build on AtomPub or a form of semantic
> HTML. The choice will be
Fair enough. I can't object :)
Subbu
On Jan 4, 2009, at 10:08 PM, Subbu Allamaraju wrote:
> On Jan 4, 2009, at 9:38 PM, Steve Bjorg wrote:
>
>> For RESTful applications, the content type should only convey what
>> hypermedia representation was used (XHTML vs. Atom vs. RDF etc.).
>
> Can you explain how you came to that conclusion?
Conclusion is a strong word. That's more the way I'm leaning
currently. Regardless, this exchange has motivated me enough to
finally commit some of my thoughts to a wiki page entitled "The
Hypermedia Scale".
http://restpatterns.org/Articles/The_Hypermedia_Scale
The driving question behind it is: if HATEOAS is the style to follow,
how does one translate the HATEOAS principles that have worked so well
for human-to-machine interactions to machine-to-machine interactions?
Surprisingly, while there are multiple, established
hypermedia types, none are either complete or constrained enough for
this use case. Atom lacks the crucial ability to describe how to
create new entries in the presence of extensions, and HTML has so much
expressive power that it's causing headaches. It would be interesting
to have a discussion on how to improve on this (or, just as
importantly, correct the article where it's wrong).
>>> if(someFunc(uri1)) // i.e. determine the kind of the URI used
>>> {
>>> doBlah();
>>> }
>>> else if(someFunc(uri2))
>>> {
>>> doBlahBlah();
>>> }
>>
>> I don't understand this code. Why would the client inspect the URI
>> to determine what to do? URIs are opaque and should be used as-is
>> (unless the context prescribes how URIs can be constructed, such as
>> URI templates or form elements). Maybe it's just because I've never
>> seen such code before, but do people really do this?
>
> Flickr style client code may look like the above. Or, take a client
> app that is trying to work with the three photo sharing sites I
> mentioned. Please don't read too much into the structure of the
> code. It is the dependency I am pointing out.
Ok, now I get it. The check was to determine what host the user agent
interacts with, not the structure of the path.
Yes, sadly that will remain the reality for a while as there doesn't
appear to be a hypermedia type that has just the right expressive
power for all our RESTful services needs. However, that doesn't mean
"carte blanche" to run off and invent completely new types. :)
>>>> So, in conclusion, I agree with you that the answer varies, but I
>>>> believe the outcome is not driven by the application, but its reach
>>>> (which you may have alluded to already by stating interoperability
>>>> and compatibility). The more user agents there are interacting
>>>> with
>>>> your API, the more driven you should be to fit it into an
>>>> established MIME type.
>>>
>>> Compatibility considerations do sometimes require introducing new
>>> media types.
>>
>> This is a contradiction in terms. How can you justify introducing
>> a new media type for the purpose of compatibility? By definition
>> nothing in the world will be compatible with it!
>
> Nope. For the same reason XHTML has a different media type than HTML.
I've read your response at least half a dozen times. The only
conclusion I could draw was that you meant to say that a new media
type is introduced to *break* compatibility and not preserve it, am I
correct? Yet you begin your response by negating mine, which stated
that it would not be compatible. (confused)
- Steve
--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
On 04.01.2009, at 23:59, Steve Bjorg wrote:
> So, in conclusion, I agree with you that the answer varies, but I
> believe the outcome is not driven by the application, but its reach
> (which you may have alluded to already by stating interoperability
> and compatibility). The more user agents there are interacting with
> your API, the more driven you should be to fit it into an
> established MIME type.
This is the same discussion as DSLs vs. general-purpose programming languages, or internal DSLs vs. external DSLs, or UML vs. MOF, or a specific XML format vs. using an existing one, or a generic WADL-type thing vs. an application-specific description, or using POST vs. introducing a new verb.
Using an existing solution and "tunneling" your specifics through it is good because you can rely on existing tools to support it, at least to some degree. It's bad because you're not explicit about what you do, and you inherit lots of stuff that you potentially don't need. Using a specific solution means you can do exactly what you need, but you can't rely on available tools and might end up re-inventing the wheel.
I don't think there's a "right" or "wrong" here: both options are valid; it's really a design choice in every specific situation.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Sorry if this topic has already been discussed. I searched the list but couldn't find a concrete answer, so I'm shooting my question here.
For performance reasons, we think that an aggregate request is a good idea. For example, let's say a server keeps users and their presence status. In a typical operation, the client will request a list of users and make a separate request for their presence status. Since user and presence status are separate resources, the client makes separate requests like
GET /users/<id> and GET /presence/<id>
If I want to express an aggregate operation for retrieving the user info AND the presence info, how would I express that in the URI?
Creating a composite resource is probably not a good idea, as there can be many combinations of resources. Any ideas?
thanks,
-rama
Hello again:
First let me thank you guys for all the answers. I think the first part of my post is well answered, regarding the use of hyperlinks as attributes on my custom XML entities. Even if the use of such hyperlinks is not confined to HTTP: at the moment I have implemented all my services over HTTP, IMAP and JMS, and probably others will follow. And this is a very important constraint for us, to have the same services accessible from different connectors.
Now for my second question, relating to the use of "extended" media types (like application/vnd.bank.org.account+xml): it seems it's a question prone to intense debate. So I think I'll expand a little more on this.
My initial position on this (for me, not being an expert in anything, it's a position derived from "intuition" rather than rational thinking) is that "extended" media types used to describe "structure" are bad, because
a) they are not standard, and I don't see the point of using media types that are not universally recognised
b) they are not self-sufficient because they are not self-describing, in the sense that, as in Subbu's examples, they are bound to a schema
c) MIME types should be used to specify different "representations" of the same resource, as in the same report being represented by a text/html response or an application/pdf response, and not different "entities" like application/vnd.bank.org.account+xml or application/vnd.bank.org.customer+xml. The "entity" should be bound to a resource, not to its media type.
However, on the other hand, I think they will probably be
a) a very powerful way of meta description/definition
b) a probable solution to my use-case problem (which I'll describe in a moment)
Now let me say that I often find Mr. Fielding's writings somewhat "dense", to say the least (my fault, for sure, even more considering that I learned English at about the same time I learned Cobol), but reading his entry:
"A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types."
I simply can't understand what he is trying to say about media types. For instance, does "defining the media type" refer to existing, universally known media types or to the definition of (extended) new ones? Since he later refers to "existing standard media types", I guess the first part refers to "new" media types, which would point to a solution like the one Subbu used in his article. And what exactly are "extended relation names"? Or for that matter, what are "relation names"?
So what about my concrete motivation for my changes and this post?
1) to effectively implement HATEOAS
2) to get rid of the use of any kind of contract, namely WADL
This can be seen in the following example:

GET /rest/reports

<reports>
  <report>Report A</report>
  <report>Report B</report>
  <report>Report C</report>
</reports>

Point 1) is solved by using

<reports>
  <report href="/rest/reports/reporta">Report A</report>
  <report href="/rest/reports/reportb">Report B</report>
  <report href="/rest/reports/reportc">Report C</report>
</reports>

Now to navigate to Report A, for instance, I have to peek at a WADL file to know what params Report A has to be given, for instance
Report A -> employee number; Report B -> start date, end date
So for a WADL-free solution, what can I have?
<reports>
  <report href="/rest/reports/reporta" param="employeeNumber">Report A</report>
  <report href="/rest/reports/reportb" param="startDate" param="endDate">Report B</report>
  <report href="/rest/reports/reportc">Report C</report>
</reports>

but that is clearly insufficient (the duplicated param attribute isn't even well-formed XML), so I have to change the "structure":

<reports>
  <report href="/rest/reports/reporta">
    <name>Report A</name>
    <param type="xs:integer">employeeNumber</param>
  </report>
  (...)
</reports>

OR ELSE

<reports>
  <report href="/rest/reports/reporta" type="application/reports+xml">Report A</report>
  (...)
</reports>

which seems simpler, somehow more flexible and "adaptable", and less "breaking" to what I have now.
So, given this real-life example, what seem to be the pros and cons? Given, of course, that there is no single optimal solution...
Once again, thanks for all the insightful help.
On Jan 5, 2009 8:57am, Stefan Tilkov <stefan.tilkov@...> wrote:
> I don't think there's a "right" or "wrong" here: both options are
> valid; it's really a design choice in every specific situation.
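A sketch of how a client might consume the inlined-parameter variant above without any WADL: it reads the href and the `<param>` descriptions straight from the representation and builds the request from them. The element names follow the example in the post; the helper function itself is hypothetical:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# The WADL-free <reports> variant proposed above, with parameter
# descriptions inlined in the representation itself.
REPORTS = """
<reports>
  <report href="/rest/reports/reporta">
    <name>Report A</name>
    <param type="xs:integer">employeeNumber</param>
  </report>
</reports>
"""

def build_request(doc_xml, report_name, values):
    """Build the GET URI for a named report purely from the link and
    <param> descriptions found in the document."""
    root = ET.fromstring(doc_xml)
    for report in root.iter("report"):
        if report.findtext("name") == report_name:
            names = [p.text for p in report.findall("param")]
            query = urlencode({n: values[n] for n in names})
            return report.get("href") + ("?" + query if query else "")
    raise LookupError(report_name)

uri = build_request(REPORTS, "Report A", {"employeeNumber": 101})
# -> /rest/reports/reporta?employeeNumber=101
```

With this shape, adding a parameter to a report changes only the representation the server hands out, not any out-of-band contract.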
Hello Rama, > GET /users/<id> and GET /presence/<id> Why not just do these two GETs, and cache the representations if performance is an issue? It seems to me that conflating two logically different entities isn't necessarily a good thing to do. Jim
On 05.01.2009, at 06:26, ramsub4 wrote: > Creating a composite resource is probably not a good idea as there can > many combinations of resources. Any ideas? I don't see why this is a bad idea. What would be the downside of creating new resources? I think resources can be created cheaply enough so that a generic solution becomes unnecessary in the majority of actual use cases. If you start going down the generic route, you end up inventing SQL over HTTP GET which I consider a bad idea indeed. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On 05.01.2009, at 12:21, amsmota@... wrote:
> Now let me say that I often find Mr. Fielding's writings somewhat "dense", to say the least (my fault, for sure, even more considering that I learned English at about the same time I learned Cobol), but reading his entry:
>
> "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types."
>
> I simply can't understand what he is trying to say about media types. For instance, does "defining the media type" refer to existing, universally known media types or to the definition of (extended) new ones?
In my understanding, it refers to defining new media types such as "application/atom+xml".
> Since he later refers to "existing standard media types", I guess the first part refers to "new" media types, which would point to a solution like the one Subbu used in his article.
Yes, although I found it easier to interpret Roy's remark in the context of Web-scale applications instead of those that might be internal to an enterprise.
> And what exactly are "extended relation names"? Or for that matter, what are "relation names"?
In <link rel="XYZ" href="http://example.org/1234" />, "XYZ" is the relation name.
So my interpretation is: whenever you design a RESTful application, you should primarily be thinking about the content types. Whether they're completely proprietary, re-use some existing media type, or are something in between is orthogonal.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 05.01.2009, at 12:21, amsmota@... wrote:
> c) MIME types should be used to specify different "representations" of the same resource, as in the same report being represented by a text/html response or an application/pdf response, and not different "entities" like application/vnd.bank.org.account+xml or application/vnd.bank.org.customer+xml. Because the "entity" should be bound to a resource, not to its media type.
I think this is a matter of decoupling vs. cohesion, i.e. if a meaningful application would need to understand both the "customer" and "account" entities, you could use something like application/vnd.bank.org.core-banking+xml (possibly with a type="account|customer" parameter).
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Yes, that's what I was thinking just now, but then why not just use
Content-Type: application/xml;type=vnd.bank.org.account
instead of "creating" a new media type that is not actually a "media type" at all?
On Jan 5, 2009 1:35pm, Stefan Tilkov <stefan.tilkov@...> wrote:
> I think this is a matter of decoupling vs. cohesion, i.e. if a
> meaningful application would need to understand the "customer" and
> "account" entities, you could use something like
> application/vnd.bank.org.core-banking+xml (possibly with a
> type="account|customer" parameter).
Why not have a "sub-resource":
GET /users/{id}/presence
(in URI template notation)?
> On Jan 5, 2009 5:26am, ramsub4 <ramsub4@...> wrote:
> > Sorry, if this topic has already been discussed. Searched the list
> > but couldn't find a concrete answer, so shooting my question here.
> > For performance reasons, we think that an aggregate request is a good
> > idea. For example, let's say a server keeps users and their presence
> > status. In a typical operation, the client will request a list of
> > users and a separate request for their presence status. Since user
> > and presence status are separate resources, the client makes separate
> > requests like
> > GET /users/<id> and GET /presence/<id>
> > If I want to express an aggregate operation for retrieving user info
> > AND the presence info, how would I express that in the URI?
> > Creating a composite resource is probably not a good idea as there
> > can be many combinations of resources. Any ideas?
> > thanks,
> > -rama
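The sub-resource suggestion (GET /users/{id}/presence, in URI-template notation) can be made concrete with a minimal level-1 template substitution. This is only a sketch — no percent-encoding or list expansion, nowhere near a complete URI-template implementation:

```python
def expand(template, **values):
    """Minimal substitution for simple {name} URI templates.
    Sketch only: no percent-encoding, no operators."""
    for name, value in values.items():
        template = template.replace("{" + name + "}", str(value))
    return template

presence_uri = expand("/users/{id}/presence", id=42)
# -> /users/42/presence
```

The template itself would ideally be advertised in a representation (e.g. a form or link element), so the client still learns it from hypermedia rather than hard-coding it.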
Or even more straightforward:
Content-Type: application/xml;type=http://www.mycompany.com/schemas/bankaccounts.xsd
On Jan 5, 2009 1:48pm, amsmota@... wrote:
> Yes, that's what I was thinking just now, but then why not just use
> Content-Type: application/xml;type=vnd.bank.org.account
> instead of "creating" a new media type that is not actually a "media
> type" at all?
> >> See RFC 2616 #7.1 and 14.17. Entity headers like Content-Type
> >> provide metadata about the entity. What a given media type means is
> >> something clients and servers learn out of band. This is true for
> >> existing IANA media types as well as new media types.
> >
> > Yes, and that's not a good thing. It means that user agents,
> > intermediaries, and servers must be revved simultaneously. The cost
> > of "educating" end-points is so enormous that it should only be done
> > as a collective effort (I did a post on this some weeks ago showing
> > the information-theoretical justification for this). It also means
> > that there is no graceful degradation, since either a MIME type is
> > known or it's not. And it doesn't set the stage for serendipitous
> > engineering, imo.
I'm with Steve here. I mean, if we're trying to stick to the specs, how about http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7 : "Use of non-registered media types is discouraged."?
I think the Obasanjo article supports the idea that OpenSocial is a good approach - no coupling to specific URI schemes, and no client "guessing" either. And it's based on using the well-standardized (though not IANA-registered?) 'application/xrds+xml' media type, rather than inventing a new media type. Same with some other well-designed RESTful APIs that have been mentioned.
So, similarly, Flickr/Picasa/Panoramio could use a microformat or some other semantically-enabled format to make their services more portable. No need for a new media type.
Like I said, I could see a theoretical case for both, but as a matter of practicality, I would err on the side of using an existing media type that supports semantic extensions for specific purposes.
On 05.01.2009, at 15:00, amsmota@... wrote:
> Or even more straightforward:
> Content-Type: application/xml;type=http://www.mycompany.com/schemas/bankaccounts.xsd
This has been beaten to death recently: http://tech.groups.yahoo.com/group/rest-discuss/message/11734
Stefan
Hello,
Bill Burke wrote:
> I was reading the exchange between Gunnar Peterson and Pete Lacey:
> http://72.249.21.88/nonintersecting/?year=2006&monthnum=12&day=01&name=restful-security&page=
> Another thing I worry about is some of the things Gunnar talks about
> in his series of blogs: protecting the message from message routers.
> I've also been pushing hard at JBoss to get REST over HTTP as a
> unifying protocol and architectural design for our ESB, and issues
> like this will start to crop up as I re-educate (brainwash?) our people.
It's already mentioned in one of the comments on the blog entry you refer to, but in case you've missed it, HTTPsec sounds interesting: http://httpsec.org/
The specification is under the GNU Free Documentation License 1.2, but the implementation is under a proprietary licence.
> Finally, I'm looking for new security options to implement and promote.
Perhaps Henry's blog on authentication using FOAF+SSL might be of interest:
http://blogs.sun.com/bblfish/entry/foaf_ssl_a_first_implementation
http://blogs.sun.com/bblfish/entry/foaf_ssl_adding_security_to
There are longer discussions on this on the foaf-protocols list:
http://lists.foaf-project.org/pipermail/foaf-protocols/
Best wishes,
Bruno.
--
http://blog.distributedmatter.net/
On Jan 4, 2009, at 10:08 PM, Steve Bjorg wrote:
> Conclusion is a strong word. That's more the way I'm leaning
> currently. Regardless, this exchange has motivated me enough to
> finally commit some of my thoughts to a wiki page entitled "The
> Hypermedia Scale".
> http://restpatterns.org/Articles/The_Hypermedia_Scale
> It would be interesting to have a discussion on how to improve on
> this (or, just as importantly, correct the article where it's wrong).
Steve - allow me to refer back to my previous comment that there is no yes/no answer to this question. You seem to be alluding that it is "incorrect" to create new media types, which is not the case. There are two ways to let clients learn about the contents of a representation, and neither is wrong. One is just less optimal than the other.
Subbu
---
http://subbu.org
> I'm with Steve here. I mean, if we're trying to stick to the specs,
> how about http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7 :
> "Use of non-registered media types is discouraged."?
You may be reading too much into that. Since that RFC was written, a number of new types have been introduced. Note that standardization usually happens after a discovered need for interop.
> I think the Obasanjo article supports the idea that OpenSocial is a
> good approach - no coupling to specific URI schemes, and no client
> "guessing" either. And it's based on using the well-standardized
> (though not IANA-registered?) 'application/xrds+xml' media type,
> rather than inventing a new media type. Same with some other
> well-designed RESTful APIs that have been mentioned.
Please look at the JSON/XML examples - not the XRDS part.
Subbu
---
http://subbu.org
On Jan 5, 2009 3:53pm, Subbu Allamaraju <subbu@...> wrote:
> There are two ways to let clients learn about the contents of a
> representation, and neither is wrong. One is less optimal than the other.
From what I read in this thread and the other one ("MIME properties instead +"), it seems to me that BOTH are less than optimal... :) I mean, I know there is no "the" solution, but it's a bit frustrating for me to have to do things that are "less than optimal", or at least "less than good". Nevertheless, since this is not an urgent matter for us, I'll keep looking and reading, and maybe discussing. Cheers.
And another slightly OT post (feel free to point me to a more appropriate list): What's people's opinion about using text/uri-list, defined in [1], as a generic format for lists of URIs? The RFC says it's intended for a specific purpose, namely identification of replicated resources. So would it be better to a) invent a new media type in the vnd tree, b) use text/uri-list beyond its original scope or c) draft a new RFC for this? Thanks, Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
I am a big fan of text/uri-list and use it all the time. I would vote for 'b' (although I was unaware it had a well-defined specific purpose). I typically use it for two purposes: 1. batch deletes: I have a url that retrieves all items marked for deletion as text/uri-list. I can then iterate those, sending an http DELETE to each. 2. I use them for an "ingest" function. I have an "ingester" end-point I can POST to (it accepts 'text/uri-list') which will do a GET on the body of the post (generally just a single uri, i.e. a list w/ one member). That way I can do the equivalent of an AtomPub POST-to-media-collection but the media item lives at a given uri, not on my desktop. Would be curious to hear if these uses sound virtuous, evil or somewhere in the middle. --peter keane On Mon, Jan 5, 2009 at 12:30 PM, Stefan Tilkov <stefan.tilkov@...> wrote: > And another slightly OT post (feel free to point me to a more > appropriate list): > > What's people's opinion about using text/uri-list, defined in [1], as > a generic format for lists of URIs? The RFC says it's intended for a > specific purpose, namely identification of replicated resources. So > would it be better to a) invent a new media type in the vnd tree, b) > use text/uri-list beyond its original scope or c) draft a new RFC for > this? > > Thanks, > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > >
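Peter's batch-delete pattern depends on first parsing the text/uri-list body. A minimal sketch in Java (the class name is mine, not from the thread): per RFC 2483, the format is one URI per line, with lines beginning with '#' treated as comments.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal text/uri-list parser per RFC 2483: one URI per line,
// lines starting with '#' are comments. A sketch, not a full client;
// a batch-delete loop would iterate the result and send an HTTP
// DELETE to each URI.
public class UriListParser {
    public static List<String> parse(String body) {
        List<String> uris = new ArrayList<>();
        for (String line : body.split("\r?\n")) {
            String trimmed = line.trim();
            // skip blank lines and comment lines
            if (trimmed.isEmpty() || trimmed.startsWith("#")) continue;
            uris.add(trimmed);
        }
        return uris;
    }
}
```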
Here's the missing link: [1] http://www.ietf.org/rfc/rfc2483.txt Stefan On 05.01.2009, at 19:30, Stefan Tilkov wrote: > And another slightly OT post (feel free to point me to a more > appropriate list): > > What's people's opinion about using text/uri-list, defined in [1], as > a generic format for lists of URIs? The RFC says it's intended for a > specific purpose, namely identification of replicated resources. So > would it be better to a) invent a new media type in the vnd tree, b) > use text/uri-list beyond its original scope or c) draft a new RFC for > this? > > Thanks, > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > >
Jim Webber wrote: > > > Hello Rama, > > > GET /users/<id> and GET /presence/<id> > > Why not just do these two GETs, and cache the representations if > performance is an issue? It seems to me that conflating two logically > different entities isn't necessarily a good thing to do. I sometimes think it's difficult to design good symmetric data apis, especially if you are designing them from already existing relational data. Here are some reasons I hear about: it'll be slower first time round for the client; it'll cost the client more money*; it's more hits on the backing data store (ie fine grained access over the network); it'll be more engineering effort and DUF (to put the caching layer in). I can make counter-arguments to each one of these, but that doesn't make them wrong. Granted, presence isn't the best example, but there are other extended data, such as permissions, settings, preferences that might justify a single GET. The conflation point you mention I sometimes see come up in the real world as a desire for fine grained updates after having munged all the data. Stefan mentioned that inventing SQL over GET was a bad thing, but it is hard to predict what amount of the data graph clients will want. By hard I mean this - it took the Atom WG many man-years over a period of years to standardize a Feed and an Entry. Most people who hit this problem for their domain don't have those kinds of resources or timelines - and most people who design network data formats appear to be programmers, not data specialists. I'm happy that using REST can produce good technical designs up to a layer, but designing good formats that can be used well on top of that layer is challenging. It's hard to know what's logically separate anyway - why is it ok to put Categories in an Entry in Atom, but not make Categories resources? Bill * aggregate data comes up a lot in my part of the mobile space.
ramsub4 wrote: > GET /users/<id> and GET /presence/<id> > > If I want to express an aggregate operation for retrieving user info > AND the presence info, how would I express that in the URI? > > Creating a composite resource is probably not a good idea as there can > be many combinations of resources. Any ideas? <person> ... <presence src="" current="" /> </person> IOW, do both. Bill
UriTemplate, I think it belongs to the JSR-311 specification, or at least
belongs to Jersey.
1. /userdetails/rama/presence
2. /userdetails/*/presence
Working with a UriTemplate like "/userdetails/{name}/presence", something
like this, using a Jersey-annotated Java interface:
@GET
@Path("/userdetails/{name}/presence")
@Produces("text/xml")
String detectPresence( @PathParam("name") String name );
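For what it's worth, the matching that such a template performs can be sketched without Jersey at all. This toy class (name and approach are mine, not Jersey's actual implementation) compiles a template like "/userdetails/{name}/presence" into a regex and extracts the path parameter:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration of URI template matching: each {param} becomes a
// capturing group that matches a single path segment. Assumes the
// template contains at least one {param}; real frameworks handle
// multiple variables, encoding, and more.
public class SimpleUriTemplate {
    private final Pattern pattern;

    public SimpleUriTemplate(String template) {
        // Replace every {param} with a group matching one path segment.
        this.pattern = Pattern.compile(template.replaceAll("\\{[^/}]+\\}", "([^/]+)"));
    }

    // Returns the value bound to the first template variable, or null
    // if the path does not match the template.
    public String match(String path) {
        Matcher m = pattern.matcher(path);
        return m.matches() ? m.group(1) : null;
    }
}
```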
On Jan 5, 2009 6:21pm, Ramamoorthy Subramanian <ramsub4@...> wrote:
> Am not clear on what you mean by url template notation. Could you
describe with an example for each of the following case?
>
> 1. user details/presence info of user 'rama'
>
> 2. user details/presence info of all users. How would you specify the URL
here?
>
> thanks,
>
> -rama
>
> From: "amsmota@..." <amsmota@...>
> To: ramsub4 <ramsub4@...>
> Sent: Monday, January 5, 2009 5:51:35 AM
> Subject: Re: [rest-discuss] Aggregate URI
>
>
> Why don't you have a "sub-resource"
>
> GET /users/{id}/presence
>
> (in url template notation)
>
>
> On Jan 5, 2009 5:26am, ramsub4 <ramsub4@...> wrote:
> >
> > Sorry, if this topic has already been discussed. Searched the list but
> > couldn't find a concrete answer, so shooting my question here.
> >
> > For performance reasons, we think that an aggregate request is a good
> > idea. For example, let's say a server keeps users and their presence
> > status. In a typical operation, the client will request list of users
> > and a separate request for their presence status. Since user/presence
> > status are separate resources, the client makes separate requests like
> >
> > GET /users/ and GET /presence/
> >
> > If I want to express an aggregate operation for retrieving user info
> > AND the presence info, how would I express that in the URI?
> >
> > Creating a composite resource is probably not a good idea as there can
> > be many combinations of resources. Any ideas?
> >
> > thanks,
> >
> > -rama
On 05.01.2009, at 19:55, amsmota@... wrote: > UriTemplate, I think it belongs to the JSR-311 specification, or at > least belongs to Jersey. Not really; JSR 311 relied on this (although Marc Hadley, the spec co-lead, co-authored it): http://bitworking.org/projects/URI-Templates/spec/draft-gregorio-uritemplate-03.html (Expired, but I couldn't find a newer version.) Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Agree with Stefan. Creating a composite resource is the safest option. Tunneling SQL over a GET is not necessarily bad from the (HTTP) protocol point of view, but opening up SQL over HTTP is an abstraction leak. Subbu On Jan 5, 2009, at 5:21 AM, Stefan Tilkov wrote: > On 05.01.2009, at 06:26, ramsub4 wrote: > >> Creating a composite resource is probably not a good idea as there >> can >> many combinations of resources. Any ideas? > > I don't see why this is a bad idea. What would be the downside of > creating new resources? I think resources can be created cheaply > enough so that a generic solution becomes unnecessary in the > majority of actual use cases. If you start going down the generic > route, you end up inventing SQL over HTTP GET which I consider a bad > idea indeed. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ --- http://subbu.org
I was talking about the Jersey implementation (which is the reference implementation) of JSR-311, but it seems UriTemplate is Jersey specific, not JSR-311. https://jersey.dev.java.net/source/browse/*checkout*/jersey/tags/jersey-1.0.1/api/jersey/index.html _______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota mobile PT: +351919623568 (deprecated) mobile IE: +353(0)877718363 mail: amsmota@... skype: amsmota msn: antoniomsmota@hotmail.com linkedin: www.linkedin.com/in/amsmota _______________________________________________ 2009/1/5 Stefan Tilkov <stefan.tilkov@...>: > On 05.01.2009, at 19:55, amsmota@... wrote: > >> UriTemplate, I think it belongs to the JSR-311 specification, or at >> least belongs to Jersey. > > Not really; JSR 311 relied on this (although Marc Hadley, the spec co- > lead, co-authored it): > > http://bitworking.org/projects/URI-Templates/spec/draft-gregorio-uritemplate-03.html > > (Expired, but I couldn't find a newer version.) > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > >
On 05.01.2009, at 21:48, António Mota wrote: > I was talking about the Jersey implementation (which is the reference > implementation) of JSR-311, but it seems UriTemplate is Jersey > specific, not JSR-311. > > I know. I just wanted to point out that URI Templates are used in the JSR 311/JAX-RS spec, not only in Jersey. Stefan
On Mon, Jan 5, 2009 at 12:48 PM, António Mota <amsmota@...> wrote: > I was talking about the Jersey implementation (wich is the reference > implementation) of JSR-311, but it seems UriTemplate is Jersey > specific, not JSR-311. > > https://jersey.dev.java.net/source/browse/*checkout*/jersey/tags/jersey-1.0.1/api/jersey/index.html > Although the source code you cite is about the Jersey specific implementation of URI templates, the JAX RS specification [1] does indeed require all JAX-RS implementations to support template processing on URIs. See Section 3.7 in particular. Craig McClanahan [1] http://jcp.org/aboutjava/communityprocess/final/jsr311/index.html
Well I'm not much for following specs to the letter in the first place. :) And I don't think I'm reading into the spec any more so than if we extrapolate sections 7.1 and 14.17 - which describe *what* a content-type is - to answer questions about *why* you should or should not express metadata as a new content-type. ;) So let's forget about the spec ... you nailed it a long time ago - there's no single answer and the decision should be based on a number of design factors. I'm still trying to learn about all this stuff, but one that stands out to me could be whether the metadata is semantic or technical. IMO, metadata to express whether content is visual, audible, or textual seems clearly technical, right? Hence the obvious choice of using different content-types like image/*, audio/*, text/*. Other content-type metadata seems to be technical in nature as well - file formats, character sets, etc. But metadata to express *semantics* seems like a very different issue? Semantics cut across the technical differences in some areas, but are highly specialized in others. For example, the semantics of having alternative R-, PG-13-, PG-, and G-rated resources could apply to images, audio, or text. On the other hand, a semantic meaning of "synonyms" is particular to language-, i.e. text-, based data. So I personally apply this to hypertext as an engine for application state by preferring to put any *semantic* metadata that will drive state transitions (hyperlinks!) into standardized content formats - microformats and semantically-aware formats; and to use as many pre-existing content-types as possible. If I do manage to come up with a genuinely new *technical* type of data, I'll register a new content-type value. Though I'm not sure that would have anything to do with state transitions. ;) -L --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote: > > >>> > > I'm with Steve here. 
I mean, if we're trying to stick to the specs, > > how about http://www.w3.org/Protocols/rfc2616/rfc2616- > > sec3.html#sec3.7 : > > "Use of non-registered media types is discouraged." ? > > You may be reading too much into that. Since that RFC was written, a > number of new types were introduced. Note that standardization happens > usually after a discovered need for interop. > > > I think the Obasanjo article supports the idea that OpenSocial is a > > good approach - no coupling to specific URI schemes, and no client > > "guessing" either. And it's based on using the well-standardized > > (though not IANA-registered?) 'application/xrds+xml' media type, > > rather than inventing a new media type. Same with some other > > well-designed RESTful API's that have been mentioned. > > Please look at the JSON/XML examples - not the XRDS part. > > Subbu > --- > http://subbu.org >
Stefan Tilkov wrote: > The RFC says it's intended for a >> specific purpose, namely identification of replicated resources. The RFC uses it for that purpose, but I see nothing in it to indicate that it cannot be used for other purposes. Indeed the part focusing on text/uri-list doesn't mention this purpose. Good separation of concerns. Of itself, it is a format that encodes a list of zero or more URIs along with optional comments. If that solves your purposes then I don't see a problem. >> So >> would it be better to a) invent a new media type in the vnd tree, b) >> use text/uri-list beyond its original scope or c) draft a new RFC for >> this? d) Use text/uri-list in what I'm not sure is beyond its original scope at all. Now, my further thought is wondering whether it is suitable for the hypertext document that SHOULD accompany a 301, 302, 303 or 307. The gist of such a note would be a single URI, which it could certainly provide, though there is no semantics for clearly indicating that this URI should now be followed beyond the use of comments.
On 06.01.2009, at 15:43, Jon Hanna wrote: > Stefan Tilkov wrote: > > The RFC says it's intended for a > >> specific purpose, namely identification of replicated resources. > > The RFC uses it for that purpose, but I see nothing in it to indicate > that it need not be used for other purposes. Indeed the part > focusing on > text/uri-list doesn't mention this purpose. Good separation of > concerns. > > I'm happy if that's the way one can interpret this: "Intended usage : Limited Use The text/uri-list media type is intended for use in applications which utilize URIs for replicated resources." That's what it says in the RFC as part of the application to register an official IANA type. > Of itself, it is a format that encodes a list of zero or more URIs > along > with optional comments. If that solves your purposes then I don't > see a > problem. > > >> So > >> would it be better to a) invent a new media type in the vnd tree, > b) > >> use text/uri-list beyond its original scope or c) draft a new RFC > for > >> this? > > d) Use text/uri-list in what I'm not sure is beyond its original scope > at all. > > Now, my further thought is wondering whether it is suitable for the > hypertext document that SHOULD accompany a 301, 302, 303 or 307. The > gist of such a note would be a single URI, which it could certainly > provide, though there is no semantics for clearly indicating that this > URI should now be followed beyond the use of comments. > I think it would be ideal for a 300. 300, 301 and 302 all require the URI the UA is being redirected to inside a Location header. But RFC 2616 also says "Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).", I wonder what the use case for URI(s) instead of URI is, but if it's to let the client know about alternative URIs, text/uri-list would be a perfect match even in its constrained usage. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Somewhat OT, but I'm trying to find some information on the usage of RDDL [1] in the real world. Is anybody on this list using it? Anybody care to share an opinion on its usefulness and degree of actual deployment? [1] http://www.rddl.org/ Thanks, Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Stefan Tilkov wrote: > "Intended usage : Limited Use > The text/uri-list media type is intended for use in applications which > utilize URIs for replicated resources." > > That's what it says in the RFC as part of the application to register an > official IANA type. Ah, so that was there. I was looking for that as it was one of my reasons for having similar misgivings in the past and I must have gone past it when I re-scanned the RFC after your post. Back to my old opinion then. Whatever their intentions, they created a format that gives a list of URIs and if that's all the semantics needed it can be used. > I think it would be ideal for a 300. I don't. 300 entails there being possible reasons for the client preferring one of the options over the others (or else the server should have just sent the "best" one or even a random one). text/uri-list doesn't have enough semantics to offer a way to make such a choice. > 300, 301 and 302 all require the URI the UA is being redirected to > inside a Location header. Actually, it's a SHOULD in these regards, but I can't see much value in not doing so. > But RFC 2616 also says "Unless the request > method was HEAD, the entity of the response SHOULD contain a short > hypertext note with a hyperlink to the new URI(s).", I wonder what the > use case for URI(s) instead of URI is, but if it's to let the client > know about alternative URIs, text/uri-list would be a perfect match even > in its constrained usage. I can't see much value in multiple URIs either. Note that such an entity SHOULD be sent even if there is a single URI, though with most (all?) modern implementations it's of no value as they will just follow the URI in the location header. text/uri-list with a single URI could perhaps be a way to follow the letter of the RFC in this regard, though I'm not sure if it follows the spirit.
On 06.01.2009, at 17:14, Jon Hanna wrote: > Stefan Tilkov wrote: > > I think it would be ideal for a 300. > > I don't. 300 entails there being possible reasons for the client > preferring one of the options over the others (or else the server > should > have just sent the "best" one or even a random one). On re-reading the appropriate section, I agree. Stefan
Hi Fabio,
* Fabio Mancinelli <fm@...> [2008-12-19 12:40]:
> Imagine to have a model where you have a document identified by
> an id that can have different translations (one of them being
> the default) and revisions (each translation has its own
> independent revision history).
>
> I could model these resources in the following way:
>
> /* Default language */
> /{docId}
> /{docId}/versions
> /{docId}/versions/{version}
> /* Additional translations */
> /{docId}/translations
> /{docId}/translations/{lang}
> /{docId}/translations/{lang}/versions
> /{docId}/translations/{lang}/versions/{version}
>
> Or, with the same expressive power, I might do:
>
> /{docId}[?translation=lang&version=v]
> /{docId}/translations
> /{docId}/versions[?translation=lang]
My first inclination would be to make the language segment
mandatory and prepend it to all URIs:
/{lang}/{docId}
/{lang}/{docId}/versions
/{lang}/{docId}/versions/{version}
Just make / or the equivalent entry point a redirect to the
default language. That makes your URIs much simpler, and they
also look cleaner and are more hackable.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi, I have been experimenting with page partial updates, where http://server/breadcrumb doesn't have an html representation visible to a browser, but contains an html fragment. Is anyone aware of a media type already defined for xml fragments? My search has only returned http://www.w3.org/TR/xml-fragment.html which never made it to recommendation. Any comments are welcome. -- Sebastien
I think this is a case where you're probably looking for a microformat and not a media type? -L --- In rest-discuss@yahoogroups.com, Sebastien Lambla <seb@...> wrote: > > > Hi, > > I have been experimenting with page partial updates, where http://server/breadcrumb doesn't have an html representation visible to a browser, but contains an html fragment. > > Is anyone aware of a media type already defined for xml fragments? My search has only returned http://www.w3.org/TR/xml-fragment.html which never made it to recommendation. > > Any comments are welcome. > > -- > Sebastien
Some HTML fragments will be valid HTML5, so that's a possibility, and you'll still use text/html. You might also consider wrapping the fragment in Atom and using @type="html", as that's specifically designed for encapsulating HTML fragments. Mark On Wed, Jan 7, 2009 at 9:58 AM, Sebastien Lambla <seb@...> wrote: > Hi, > > I have been experimenting with page partial updates, where > http://server/breadcrumb doesn't have an html representation visible to a > browser, but contains an html fragment. > > Is anyone aware of a media type already defined for xml fragments? My search > has only returned http://www.w3.org/TR/xml-fragment.html which never made it > to recommendation. > > Any comments are welcome. > > -- > Sebastien
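Mark's Atom suggestion might look roughly like this. The entry below is purely illustrative (the id, title, and breadcrumb markup are made up); the key point is that with @type="html" the content element carries the HTML fragment as escaped text:

```xml
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>http://server/breadcrumb</id>
  <title>breadcrumb fragment</title>
  <updated>2009-01-07T00:00:00Z</updated>
  <content type="html">
    &lt;ul class="breadcrumb"&gt;&lt;li&gt;&lt;a href="/"&gt;Home&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
  </content>
</entry>
```

A client that understands Atom can then unescape the content and inject it into the page without needing a dedicated media type for the fragment itself.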
This is a similar question to Sebastien's question on html fragments:
Should you build in fragmentation to your XML models?
i.e. Make most elements/attributes optional so that you only PUT to the
server the thing you're interested in updating? For example, if you
just wanted to update the address of a customer and not his/her name:
PUT /customer/123
<customer>
<street>555 Tech Square</street>
</customer>
Or is most common practice to just PUT the whole document?
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
It depends on what a corresponding GET returns. If it is the same/ similar to the PUT request below, the one below seems fine. But I presume that a GET to that URI in your example would return a full customer representation, and not just a part of it. In that case, the PUT below is doing a partial update (a la PATCH), which is not how PUT is defined. To simplify this, how about changing the URI below to /customer/123/ street? <...> Subbu On Jan 8, 2009, at 2:14 PM, Bill Burke wrote: > This is a similar question to sebatien's question on html fragments: > > Should you build in fragmentation to your XML models? > > i.e. Make most elements/attributes optional so that you only PUT to > the > server the thing you're interesting in updating? For example if you > just wanted to update the address of a customer and not his/her name: > > PUT /customer/123 > <customer> > <street>555 Tech Square</street> > </customer> > > Or is most common practice to just PUT the whole document? > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com --- http://subbu.org
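Subbu's rule of thumb (PUT should replace exactly what a GET of the same URI returns) can be stated as a tiny invariant. This sketch (class and method names are mine) models resources as an in-memory map keyed by URI, which is why exposing /customer/123/street as its own URI avoids partial-update semantics entirely:

```java
import java.util.HashMap;
import java.util.Map;

// Toy resource store illustrating PUT/GET symmetry: whatever is PUT
// to a URI is exactly what a subsequent GET of that URI returns.
// Giving the street its own URI keeps the invariant without needing
// PATCH-like partial updates on the parent /customer/123 resource.
public class ResourceStore {
    private final Map<String, String> representations = new HashMap<>();

    public void put(String uri, String representation) {
        representations.put(uri, representation); // full replacement
    }

    public String get(String uri) {
        return representations.get(uri); // null if never PUT
    }
}
```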
For me, the decision to support PATCH/MERGE-type updates depends on my caching support. If adding 'partial updates' will invalidate lots of other existing cache items that need to be 'very fresh', then I opt out of adding partial update support in order to cut down on 'out-of-phase' resources spread all over the Internets. mca http://amundsen.com/blog/ On Thu, Jan 8, 2009 at 17:27, Subbu Allamaraju <subbu@...> wrote: > It depends on what a corresponding GET returns. If it is the same/ > similar to the PUT request below, the one below seems fine. > > But I presume that a GET to that URI in your example would return a > full customer representation, and not just a part of it. In that case, > the PUT below is doing a partial update (a la PATCH), which is not how > PUT is defined. > > To simplify this, how about changing the URI below to /customer/123/ > street? > > <...> > > Subbu > > On Jan 8, 2009, at 2:14 PM, Bill Burke wrote: > >> This is a similar question to sebatien's question on html fragments: >> >> Should you build in fragmentation to your XML models? >> >> i.e. Make most elements/attributes optional so that you only PUT to >> the >> server the thing you're interesting in updating? For example if you >> just wanted to update the address of a customer and not his/her name: >> >> PUT /customer/123 >> <customer> >> <street>555 Tech Square</street> >> </customer> >> >> Or is most common practice to just PUT the whole document? >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com > > --- > http://subbu.org > > > ------------------------------------ > > Yahoo! Groups Links > > > >
> To simplify this, how about changing the URI below to /customer/123/
> street?
I'd propose one of these three.
So
PATCH /customer/123
Content-Type: application/vnd.org.diff+xml
<diff>
<replace sel="customer/address">
<address>
<street>bla</street>
</address>
</replace>
</diff>
Or
POST /customer/123
Content-Type: application/vnd.org.address+xml
<address>
<street>bla</street>
</address>
Or
PUT /customer/address
Content-Type:application/vnd.org.address+xml
<address>
...
> POST /customer/123 > Content-Type: application/vnd.org.address+xml > > <address> > <street>bla</street> > </address> > What do you return? 201? 200? > PUT /customer/address > Content-Type:application/vnd.org.address+xml > > <address> > ... I imagine you meant PUT /customer/123/address What if you want to change the address and the phone number? PUT /customer/123/address;phone ? v.
Sebastien Lambla wrote: > PATCH /customer/123 Didn't know about PATCH. Is it an approved RFC? Is it going to be rolled into HTTP? Thanks, Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
> Didn't know about PATCH. Is it an approved RFC? Is it going to be > rolled into HTTP? No it's not an approved rfc. The issue you'll run into is the diff format. See: http://lists.w3.org/Archives/Public/ietf-http-wg/2008JanMar/0316.html v.
vincent.lari wrote: > > > > > Didn't know about PATCH. Is it an approved RFC? Is it going to be > > rolled into HTTP? > No it's not an approved rfc. > The issue you'll run into is the diff format. See: > http://lists.w3.org/Archives/Public/ietf-http-wg/2008JanMar/0316.html > <http://lists.w3.org/Archives/Public/ietf-http-wg/2008JanMar/0316.html> > A diff format seems like overengineering at its finest. KISS. Just change your XML schemas to have optional elements/attributes. I guarantee that once REST becomes mainstream everybody will be defining their own XML data formats anyways. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
> A diff format seems like overengineering at its finest. Not if you care about the uniform interface, and this being rest-discuss... v.
vincent.lari wrote: > > > > A diff format seems like overengineering at its finest. > > Not if you care about the uniform interface, and this being > rest-discuss... > It is both overengineering and, I'll add, over-complicated. Take a look at this format: http://www.snellspace.com/wp/?p=895 Pretty damn cool. But think of the implications for general applications, specifically database-centric ones. With a general diff model you have to transform from the database to your language, then transform from your language to the data format, then apply the diff transformation, then bring it back into your language for any business logic processing, then finally back to the database. Think of how much complication something like this causes for your client code as well. If you start requiring the every-day developer to generate and deal with more artifacts than what you have with WS-* you're not going to make much headway. Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
The upside, of course, is the separation of concerns: Once you've defined a diff format for e.g. JSON, you can throw in two (or three) documents and generate the diff document, or you can apply a diff as a PATCH - and it will work for any content as long as it's serialized as JSON. And as media types and verbs are two different domains, PATCH doesn't even have to know about any particular data format. BTW, this is James Snell's latest PATCH draft, I believe: http://tools.ietf.org/id/draft-dusseault-http-patch-11.txt Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On 09.01.2009, at 03:59, Bill Burke wrote: > > > vincent.lari wrote: > > > > > > > A diff format seems like overengineering at its finest. > > > > Not if you care about the uniform interface, and this being > > rest-discuss... > > > > It is both overengineering and I'll add, over complicated. Take a look > at this format: > > http://www.snellspace.com/wp/?p=895 > > Pretty damn cool. But think of the implications of general > applications, specifically database centric ones. With a general diff > model you have to transform from the database to your language, then > transform from your language to the data format, then apply the diff > transformation, then bring it back into your language for any business > logic processing, then finally back to the database. > > Think of how much complication something like this causes for your > client code as well? If you start requiring the every-day developer to > generate and deal with more artifacts than what you have with WS-* > you're not going to make much headway. > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
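Stefan's separation-of-concerns argument can be illustrated with a deliberately trivial diff applier. The class name and the key-to-new-value diff format below are mine and far simpler than Snell's draft; the point is only that the PATCH plumbing stays the same no matter which diff media type plugs in:

```java
import java.util.HashMap;
import java.util.Map;

// Toy PATCH processor: the verb is generic, the diff semantics live
// entirely in the (here: trivial) diff format. The diff is a map of
// field name -> new value; only the mentioned fields are replaced.
public class PatchProcessor {
    public static Map<String, String> apply(Map<String, String> resource,
                                            Map<String, String> diff) {
        Map<String, String> result = new HashMap<>(resource);
        result.putAll(diff); // replace only the keys the diff mentions
        return result;
    }
}
```

A richer diff format (add/remove/replace operations, path selectors) would change only the applier, not the HTTP interaction.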
On 09.01.2009, at 07:20, Stefan Tilkov wrote: > The upside, of course, is the separation of concerns: Once you've > defined a diff format for e.g. JSON, you can throw in two (or three) > documents and generate the diff document, or you can apply a diff as a > PATCH - and it will work for any content as long as it's serialized as > JSON. And as media types and verbs are two different domains, PATCH > doesn't even have to know about any particular data format. > That wasn't phrased clearly: I meant you can do this as soon as you have an implementation of a processor that does this. Stefan > BTW, this is James Snell's latest PATCH draft, I believe: > http://tools.ietf.org/id/draft-dusseault-http-patch-11.txt > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > On 09.01.2009, at 03:59, Bill Burke wrote: > >> >> >> vincent.lari wrote: >>> >>> >>>> A diff format seems like overengineering at its finest. >>> >>> Not if you care about the uniform interface, and this being >>> rest-discuss... >>> >> >> It is both overengineering and I'll add, over complicated. Take a >> look >> at this format: >> >> http://www.snellspace.com/wp/?p=895 >> >> Pretty damn cool. But think of the implications of general >> applications, specifically database centric ones. With a general diff >> model you have to transform from the database to your language, then >> transform from your language to the data format, then apply the diff >> transformation, then bring it back into your language for any >> business >> logic processing, then finally back to the database. >> >> Think of how much complication something like this causes for your >> client code as well? If you start requiring the every-day developer >> to >> generate and deal with more artifacts than what you have with WS-* >> you're not going to make much headway. >> >> Bill >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com >> >>
I agree with the concerns that Bill points out. Translating those diffs to the database layer can get expensive/complex, unless such translation is implicit in the programming framework of choice. Before casting the problem as that of "partially updating *a* resource", it may be cheaper to either adjust the granularity of resources or identify special-purpose (i.e. application-specific) resources that can make such updates to resources. The same goes for batch use cases as well. Subbu On Jan 8, 2009, at 10:20 PM, Stefan Tilkov wrote: > The upside, of course, is the separation of concerns: Once you've > defined a diff format for e.g. JSON, you can throw in two (or three) > documents and generate the diff document, or you can apply a diff as a > PATCH - and it will work for any content as long as it's serialized as > JSON. And as media types and verbs are two different domains, PATCH > doesn't even have to know about any particular data format. > > BTW, this is James Snell's latest PATCH draft, I believe: > http://tools.ietf.org/id/draft-dusseault-http-patch-11.txt > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ --- http://subbu.org
Ditto. I think if we find ourselves wanting to update "part of a resource" we need to actually re-think the resource as a compositional resource and expose an interface for updating each resource separately. --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote: > > I agree with the concerns that Bill points out. Translating those > diffs to the database layer can get expensive/complex, unless such > translation is implicit in the programming framework of choice. Before > casting the problem as that of "partially updating *a* resource", it > may be cheaper to either adjust the granularity of resources or > identify special-purpose (i.e. application specific) resources that > can make such updates to resources. The same goes for batch use cases > as well. > > Subbu
> I think if we find ourselves wanting to update "part of a > resource" we need to actually re-think the resource as a compositional > resource and expose an interface for updating each resource separately. Especially if your resource has an 'expensive' attribute (like a photo) that you do not want to upload every time. But what about the media type? We'd need to use a separate media type for each 'sub-resource'. And what if we want to update two sub-resources at the same time (e.g. address and salary)? We'd need a URI like /user/123/?part=address;salary. Does it make sense? We'd need another media type too. Actually we'd need a media type for any combination of sub-resources (assuming we're not using application+xml). I personally sometimes use PUT for partial updates because I don't have good answers to these questions (I too agree with Bill's comment on over-engineering). I just don't call my API RESTful... -v
Vincent: Many times when I do partial updates, I use "application/x-www-form-urlencoded" as the Media Type. Sometimes "application/atom+xml", "text/xml", "text/plain", etc. You don't need a custom media type for each interaction between user-agent and server. mca http://amundsen.com/blog/ On Fri, Jan 9, 2009 at 14:23, vincent.lari <vincent.lari@...> wrote: > >> I think if we find ourselves wanting to update "part of a >> resource" we need to actually re-think the resource as a compositional >> resource and expose an interface for updating each resource separately. > > Especially if your resource has an 'expensive' attribute (like a > photo) that you do not want to upload every time. > But what about the media type? We'd need to use a separate media type > for each 'sub-resource'. > And what if we want to update two sub-resources at the same time (e.g. > address and salary)? we'd need a uri like > /user/123/?part=address;salary. Does it make sense? We'd need another > media-type too. Actually we'd need a media type for any combination of > subresources (assuming we're not using application+xml). > > I personally sometimes use PUT for partial updates because I don't > have good answers to these questions (I too agree with Bill's comment > on over-engineering). I just don't call my API RESTful... > > -v
> But what about the media type? We'd need to use a separate media type > for each 'sub-resource'.

No, not necessarily a separate media type if you use 1 extensible/composable type. It could all be XML, or JSON, or some other extensible/composable type.

> And what if we want to update two sub-resources at the same time (e.g. > address and salary)? we'd need a uri like > /user/123/?part=address;salary. Does it make sense?

Not really, no. If we find ourselves updating multiple "sub-resources" together, it's likely that they're actually a compositional resource. e.g.,

PUT /user/123/street
PUT /user/123/city
PUT /user/123/state
PUT /user/123/postal

could compose into a single request:

PUT /user/123/address

Or in your example,

PUT /user/123/address
PUT /user/123/salary

could compose into:

PUT /user/123/localized-salary

> We'd need another > media-type too. Actually we'd need a media type for any combination of > subresources (assuming we're not using application+xml).

No, we don't *need* a new type per resource. We should just use 1 extensible/composable media type.

PUT /user/123/address

<address>
<street>Fulton</street>
<city>Tulsa</city>
<state>OK</state>
<postal>74137</postal>
</address>

PUT /user/123/salary

<salary>
<value>100000</value>
<currency>USD</currency>
</salary>

PUT /user/123/localized-salary

<localized-salary>
<address>
<street>Fulton</street>
<city>Tulsa</city>
<state>OK</state>
<postal>74137</postal>
</address>
<salary>
<value>100000</value>
<currency>USD</currency>
</salary>
</localized-salary>
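The "one extensible/composable media type" idea above can be mechanised: the same vocabulary serves the sub-resources and any composition of them, so no new media type is needed per combination. A sketch using Python's standard ElementTree, with element names taken from the examples in the post (the helper functions themselves are made up):

```python
# Build sub-resource representations and compose them into a single
# document, all in one extensible XML vocabulary.
import xml.etree.ElementTree as ET

def to_element(name, fields):
    """Render a flat dict of fields as an XML element."""
    elem = ET.Element(name)
    for key, value in fields.items():
        child = ET.SubElement(elem, key)
        child.text = str(value)
    return elem

def compose(name, *parts):
    """Compose sub-resource representations into one document."""
    root = ET.Element(name)
    root.extend(parts)
    return root

address = to_element("address", {"street": "Fulton", "city": "Tulsa",
                                 "state": "OK", "postal": "74137"})
salary = to_element("salary", {"value": "100000", "currency": "USD"})

# The body for PUT /user/123/localized-salary:
doc = compose("localized-salary", address, salary)
print(ET.tostring(doc, encoding="unicode"))
```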
* Bill Burke <bburke@...> [2009-01-09 02:00]: > Just change your XML schemas to have optional elements/ > attributes. I guarantee that once REST becomes mainstream > everybody will be defining their own XML data formats anyways. If you do that, then PUTting a document with omitted elements and attributes means they should be deleted from the existing record, not persisted from the previous state of the resource. PUT is an assertion by the client that after request processing, the full state of the resource should correspond to the provided entity and nothing but the provided entity. Now the server is free to implement this any way it wants, which it has to be, in order to be able to normalise the data, insert timestamps, add metadata, provide thumbnails for pictures, or suchlike. It must therefore be free to make assumptions about the entity that the client did not specify explicitly. So people squint and say that inserting data into the new state of the resource that comes from its previous state falls under this. But the PUT request should be self-contained. If the result of the server’s pre-processing is a resource state which from the client’s perspective contradicts in semantically significant ways the client-asserted resource state (and that is clearly the case when you are using PUT as a make-believe PATCH), then you are breaking the PUT contract and your interface is no longer HTTP’s uniform interface. If you don’t like the fully general diff formats for PATCH, invent an app-specific one. That’s what media types are for. This approach is not a good solution in the big picture (see Bill’s recent post about Snowflake APIs[1]), but it is still a *much* better idea than breaking the uniform interface. [1]: http://www.dehora.net/journal/2009/01/09/snowflake-apis/ Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
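Aristotle's reading of the PUT contract can be illustrated with a minimal in-memory store (hypothetical, not any real framework): after a PUT, the state of the resource is the provided entity and nothing else, so an attribute omitted from the request body is gone afterwards rather than carried over from the previous state.

```python
# Minimal sketch of PUT-as-full-replacement semantics.
store = {}

def put(uri, entity):
    """Replace the full state of the resource; no merging with old state."""
    store[uri] = dict(entity)

def get(uri):
    return store.get(uri)

put("/person/101", {"firstName": "TONINHO", "lastName": "METRALHA"})

# A later PUT that omits lastName is an assertion that lastName is gone.
# A server that quietly preserves it is doing PATCH under PUT's name.
put("/person/101", {"firstName": "TONINHO"})
assert "lastName" not in get("/person/101")
```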
* Stefan Tilkov <stefan.tilkov@...> [2009-01-05 10:00]: > I don't think there's a "right" or "wrong" here: both options > are valid, it's really a design choice in every specific > situation. Exactly. I just caught up with the whole Steve Bjorg vs Subbu Allamaraju thread, watching their polar positions play out, and I don’t understand why either of them is taking such a dogmatic stance. Sticking to known media types while you figure out what kinds of things you want to provide to clients and what kinds of things clients will need from you is good. Consolidating and formalising that knowledge once it exists is also good. I would say that most of the time you should err on the side of using well-established media types until you have a feel for the issue. “Innovating” in a vacuum is bad. It doesn’t help anyone. You make your mistakes while flying blind because there are few implementations at all ends and they all have to upgrade in lock step. (How often do we have to learn the lesson that this is a recipe for failure?) But people with similar apps should occasionally sit down at a table together and find out how they can standardise their approaches into a separate format. I didn’t read Dare’s post about OpenSocial but from what I get from this thread, this is what happened there. This is good. There is no correct dogma to answer the question of how specific one’s media type should be. All options are valid, each with pros and cons, and you need to decide on a case-by-case basis which side to pick. This sort of tradeoff is what engineering is about (and REST is the closest we have to it in software development). Sorry to the cookie cutter brigade. :-) In passing, though, I have to note that it would be nice if we could do a better job of what media types tried to do with their type/subtype separation, ie. have a standardised way to specify a layering of specificity of formats, including multiple formats, so that it would be possible to say that a document is text, and specifically HTML, and specifically a combination of hCard+hTag+hEXIF+image-link, and specifically a Flickr photo, so as to allow clients to know what the representation means without having to parse it, at whatever their level of understanding of the specified format. I don’t know if this would work in practice, after all the type/subtype thing in media types is mostly a failure. Maybe that was just because it tried to constrain types to just two layers. It would also be necessary to do a better job of what media types tried to accommodate with the `+xml` suffix contortion, ie. make sure that types reliant on possibly multiple lower-level formats are expressible in a sensible fashion. If it did work, it would resolve the tradeoff issue nicely. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
imho, it doesn't matter at all; both of the approaches are fine. I would go with the second cos it just looks simpler to me. Although, a quick question: is the translation based on human languages? If so, have you thought about using HTTP headers to decide on the language? Or do you want each translation to be addressable as a separate resource? Cheers Devdatta
* Devdatta <dev.akhawe@...> [2009-01-10 16:15]: > it doesn't matter at all Not in REST terms, no. In URI design terms, it does. > is the translation based on human languages ? if so, have you > thought about using HTTP headers to decide on the language ? Or > do you want each translation to be addressable as a separate > resource Wherever I have encountered server-driven language selection in the wild it has only ever annoyed me. My preference is always to read content in the language it was originally authored in, unless that is a language I don’t speak of course. So I’ll take an English translation of a French page, but if it is available in both English and German I want either English or German depending on which one is the original version. At other times the different language versions differ significantly, even though they inform about roughly the same things, in which case I might want to look at several of them. Also, if I hit the conference WLAN in Denmark I don’t want to have to fight to get Google in something other than Danish. Etc etc etc. Conneg for language versions is a neat-sounding idea, but in practice there are so many contradictory requirements, edge cases and exceptional circumstances as to make it worthless. And where it is employed there is frequently no painless way to get the actually desired language version. Just Say No. Make the different language versions easily, cleanly addressable and stick a bunch of flag icons on your page, then call it a day. Your users will thank you and you will have less code to write and debug. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Aristotle, Is there a requirement that the Media Type of the PUTted entity be the same as the Media Type of the resource that is being replaced? For example, say I have a resource at some URI that is xhtml containing a set of key/value pairs or something like that. Would it be appropriate/allowed/whatever to PUT an application/x-www-form-urlencoded entity that replaces the key/value pairs of the resource with that URI? Stan Dyck Aristotle Pagaltzis wrote: > PUT is an assertion by the client that after request processing, > the full state of the resource should correspond to the provided > entity and nothing but the provided entity. Now the server is > free to implement this any way it wants, which it has to be, in > order to be able to normalise the data, insert timestamps, add > metadata, provide thumbnails for pictures, or suchlike. It must > therefore be free to make assumptions about the entity that the > client did not specify explicitly.
Hi Stan, * Stan Dyck <stan.dyck@...> [2009-01-10 19:30]: > Is there a requirement that the Media Type of the PUTted entity > be the same as the Media Type of the resource that is being > replaced? as per the message you quoted: > * Aristotle Pagaltzis <pagaltzis@...> [2009-01-10 12:30]: > > Now the server is free to implement this any way it wants, > > which it has to be, in order to be able to normalise the > > data, insert timestamps, add metadata, provide thumbnails for > > pictures, or suchlike. It must therefore be free to make > > assumptions about the entity that the client did not specify > > explicitly. Translating from one media type to another is one of the ways in which the server is free to process the entity for storage, so the answer to your question is yes. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On Sat, Jan 10, 2009 at 4:04 PM, Aristotle Pagaltzis <pagaltzis@gmx.de> wrote: > Make the different language versions easily, cleanly addressable > and stick a bunch of flag icons on your page, then call it a day. Never, ever use flags when you mean languages. Flags symbolize countries, not languages. The relationship between languages and countries is far from 1:1. http://www.w3.org/TR/i18n-html-tech-lang/#ri20040808.173208643 http://www.jankoatwarpspeed.com/post/2008/10/27/You-should-never-use-flags-for-language-choice.aspx My personal preference is language names written in the target language, as in Wikipedia. Also notice links for languages don't conflict with conneg at all: one can just use conneg for the default case (e.g. the base URL in Apache MultiViews). > Your users will thank you They will probably flame you 'cause you're an imperialist pig who used the wrong flag. -- Leonardo Boiko http://namakajiri.net
Aristotle Pagaltzis wrote: > My preference is always to read content in the language it was > originally authored in, unless that is a language I don’t speak > of course. So I’ll take an English translation of a French page, > but if it is available in both English and German I want either > English or German depending on which one is the original version. This isn't a choice of language per se, it's a choice of original version over translation. "en" is a choice of language, "original version" is not. > At other times the different language versions differ > significantly, even though they inform about roughly the same > things, in which case I might want to look at several of them. Reading several translations of something is a different task to reading something without thinking or caring about whether or not it has other translations out there. As a different task for the user, it has different requirements for the tech. > Also, if I hit the conference WLAN in Denmark I don’t want to > have to fight to get Google in something other than Danish. The problem with Google is that while it does language con-neg it does Geo IP first. Google.com handles language fine once you can convince it to let you actually go to google.com. Alas, from Denmark it likes to redirect you to google.dk where language con-neg isn't done. Not a language con-neg problem, quite the opposite. > Conneg for language versions is a neat-sounding idea, but in > practice there are so many contradictory requirements, edge cases > and exceptional circumstances as to make it worthless. And where > it is employed there is frequently no painless way to get the > actually desired language version. > > Just Say No. How do I work out whether to say No, Non, Nein, Nej, Ne or use Irish (which, lacking an exact translation for "no", makes for a more involved translation)? > Make the different language versions easily, cleanly addressable > and stick a bunch of flag icons on your page, then call it a day.
The mapping between flags and languages is harmless in some cases, but in others debated with AK-47s, pipe bombs, the torturing of people imprisoned without trial, and so on. In this regard at least, it doesn't scale.
Stan Dyck wrote: > Is there a requirement that the Media Type of the PUTted entity be the > same as the Media Type of the resource that is being replaced? The media type belongs to the entity (of which there may be zero or many), not to the resource. Hence you can certainly PUT a different media type than you GET.
* Leonardo Boiko <leoboiko@...> [2009-01-10 20:55]: > Never, ever use flags when you mean languages. Good point. > > Your users will thank you > > They will probably flame you 'cause you're an imperialist pig > who used the wrong flag. The mention of flags in my mail was an afterthought and had little to do with what users would be thankful for. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Jon Hanna <jon@...> [2009-01-10 22:35]: > Aristotle Pagaltzis wrote: > > My preference is always to read content in the language it was > > originally authored in, unless that is a language I don’t speak > > of course. So I’ll take an English translation of a French page, > > but if it is available in both English and German I want either > > English or German depending on which one is the original version. > > This isn't a choice of language per se, it's a choice of original > version over translation. > > "en" is a choice of language, "original version" is not. So where’s the conneg header that lets me pick original vs translation? If there isn’t one, the effect is the same – there are too few variables going into conneg. > > At other times the different language versions differ > > significantly, even though they inform about roughly the same > > things, in which case I might want to look at several of > > them. > > Reading several translations of something is a different task > to reading something without thinking or caring about whether > or not it has other translations out there. As a different task > for the user, it has different requirements for the tech. I may want to perform either task on the same set of documents at various times, so the server must be capable of accommodating both cases. So as far as I can tell, my point stands. > > Also, if I hit the conference WLAN in Denmark I don’t want to > > have to fight to get Google in something other than Danish. > > The problem with Google is that while it does language con-neg > it does Geo IP first. I know. I picked that as a general example of server-driven language choice.
In fact I bet that what Google does works better for a majority of users than relying on language conneg would, since the latter requires digging around several layers deep in the preferences of the browser, so most people never find it and most of those who do don’t know that it sometimes actually has an effect and what that effect is, and hence won’t touch it anyway. (As with many of the more advanced features of HTTP, UA UI is the biggest reason they’re not getting traction.) Either way, the effect of Geo IP is much the same as conneg: it is usually a pain to work around if the server makes the wrong choice on your behalf. And that’s the point I was getting at. > > Make the different language versions easily, cleanly > > addressable and stick a bunch of flag icons on your page, > > then call it a day. > > The mapping between flags and languages is harmless in some > > cases, but in others debated with AK-47s, pipe bombs, the torturing > > of people imprisoned without trial, and so on. In this regard at > > least, it doesn't scale. Right, please disregard the flags part, which was a half-baked afterthought. (Use textual links, as Leonardo proposed.) I stand by the rest. All of your charges are valid insofar as my argument wasn’t particularly solidly constructed, but I stand by my overall thrust that language conneg as per RFC 2616 is a complicated but insufficient solution to a non-problem that is actually a feature with many advantages. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi > thrust that language conneg as per RFC 2616 is a complicated but > insufficient solution to a non-problem that is actually a feature > with many advantages. My point in the first mail was that IF conneg is sufficient for him to decide which translation to serve, then he should go for it over including it in the URI. IF he wants to be able to "address each translation as a separate resource" then he should use his current design. More than all the arguments you have put forward, my main argument for having a separate resource for each translation is that browsers don't allow me to quickly and simply select a language. And as regards the URI design, I haven't really understood the OP's use case, but pray what is the difference between - whatever.com/translation/en whatever.com/?translation=en using / over ? and = doesn't make any readability difference*. I strongly argue this point because I have seen some people change over from the second URI design to the first and say their design is now 'RESTful'. Cheers Devdatta *In fact for me, the first case is *less* readable
On Jan 10, 2009, at 5:04 PM, Aristotle Pagaltzis wrote: > All of your charges are valid insofar as my argument wasn’t > particularly solidly constructed, but I stand by my overall > thrust that language conneg as per RFC 2616 is a complicated but > insufficient solution to a non-problem that is actually a feature > with many advantages. It is actually the other way around. Changing the language in a UA is the inconvenient part, and it is likely because users don't change their language selection at the OS or UA level often. Language negotiation itself is not the complicated part. To answer the original question, the solution really depends on the client-side usage. If the clients are machines, and are capable of negotiation, I would stick with language negotiation. If, on the other hand, the application is user-facing, and users are required to switch between languages often for this specific application (for whatever reasons), then provide links to switch between languages. Subbu
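For reference, the server-driven negotiation Subbu describes boils down to parsing the Accept-Language q-values and picking the best available variant. A bare-bones sketch follows; it ignores wildcards and other RFC 2616 language-range subtleties, so treat it as illustrative only (real servers should use a vetted implementation).

```python
# Minimal Accept-Language negotiation: parse q-values, prefer higher
# quality, and let a region subtag like "en-gb" fall back to "en".

def negotiate_language(header, available, default="en"):
    prefs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            prefs.append((tag.strip().lower(), float(q)))
        else:
            prefs.append((piece.lower(), 1.0))
    prefs.sort(key=lambda p: p[1], reverse=True)
    lowered = {lang.lower(): lang for lang in available}
    for tag, _q in prefs:
        if tag in lowered:
            return lowered[tag]
        base = tag.split("-")[0]   # "en-gb" falls back to "en"
        if base in lowered:
            return lowered[base]
    return default

assert negotiate_language("da, en-gb;q=0.8, en;q=0.7", ["en", "fr"]) == "en"
```

This also shows why UA-driven selection frustrates users: the outcome depends entirely on preferences buried in the browser settings.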
I haven't read the full thread in all details but here are my thoughts on internationalization (i18N) based upon over a decade of experience.

1. Don't confuse translation with localization (l10N).

2. If you have a page resource which has had its UI elements localized, for example a data entry form, then you are looking at one resource with multiple language-specific representations that you can conneg to.

3. l10N conneg can be either language-driven from the Accept-Language header and/or country-driven from the request IP address. Both are valid localizations and both can be used in concert. For example, the prices of goods and services are country-specific localizations, not language-specific (both in terms of the currency used and the tax regime to be applied).

4. If you are looking at translations, then you have different resources. For example, Tolstoy wrote in Russian. If I don't speak Russian or read Cyrillic, then I should have a "Read the English translation" link I can click. Notice that the UI elements within which the Tolstoy text appears can still be localized (so I see the available translations listed in a language I can understand).

5. With the health warning that URI structure is orthogonal to REST, and whilst the URI is architecturally opaque, human-readable URIs are nonetheless good; this is my personal preference for URI structure:

example.com/localized-resource => the resource URI (no representation)
example.com/localized-resource.en | .en-gb | .fr | .de => the language conneg URI (no representation)
example.com/localized-resource.en-gb.html | .fr.html => the localized html representation

if there is an IP-driven country-specific localization:

example.co.uk/localized-resource.en-gb.html | .fr.html => the localized html representation for the UK

when it comes to translations, use the same structure as above except that each translation will have a different name, e.g.
example.co.uk/tolstoy-in-russian.en-gb.html | .fr.html => the localized html representation for the UK but with Russian content
example.co.uk/tolstoy-in-english.en-gb.html | .fr.html => the localized html representation for the UK but with the English content translation shown

Note to self - I should blog this.

Regards,
Alan Dean
http://twitter.com/adean

On Sun, Jan 11, 2009 at 8:34 AM, Subbu Allamaraju <subbu@...> wrote: > To answer the original question, the solution really depends on the > client-side usage. If the clients are machines, and are capable of > negotiation, I would stick with language negotiation. If, on the other > hand, the application is user-facing, and users are required to switch > between languages often for this specific application (for whatever > reasons), then provide links to switch between languages. > > Subbu
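Alan's naming scheme is mechanical enough to parse. A small sketch (the set of recognised format extensions is illustrative, and the function name is made up):

```python
# Split "name[.lang][.format]" path segments such as
# "localized-resource.en-gb.html" into their components.

KNOWN_FORMATS = {"html", "xml", "json"}

def parse_localized_path(path):
    """Return (name, language, format), with None for absent parts."""
    parts = path.rsplit("/", 1)[-1].split(".")
    name, lang, fmt = parts[0], None, None
    rest = parts[1:]
    if rest and rest[-1] in KNOWN_FORMATS:
        fmt = rest.pop()
    if rest:
        lang = rest.pop()
    return name, lang, fmt

assert parse_localized_path("/localized-resource.en-gb.html") == \
    ("localized-resource", "en-gb", "html")
assert parse_localized_path("/localized-resource.en") == \
    ("localized-resource", "en", None)
assert parse_localized_path("/localized-resource") == \
    ("localized-resource", None, None)
```

A missing language falls through to conneg and a missing format falls through to media-type conneg, matching the three URI tiers in the scheme.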
Alan, Great write up! You should wiki it! ;) - Steve -------------- Steve G. Bjorg http://mindtouch.com http://twitter.com/bjorg irc.freenode.net #mindtouch On Jan 11, 2009, at 3:00 AM, Alan Dean wrote: > I haven't read the full thread in all details but here are my thoughts > on internationalization (i18N) based upon over a decade of experience.
Note to self: add to RestPatterns wiki also! On Sun, Jan 11, 2009 at 8:04 PM, Steve Bjorg <steveb@...> wrote: > Alan, > > Great write up! You should wiki it! ;) > > - Steve > > -------------- > Steve G. Bjorg > http://mindtouch.com > http://twitter.com/bjorg > irc.freenode.net #mindtouch > > On Jan 11, 2009, at 3:00 AM, Alan Dean wrote: > >> I haven't read the full thread in all details but here are my thoughts >> on internationalization (i18N) based upon over a decade of experience. >> >> 1. Don't confuse translation with localization (l10N). >> >> 2. If you have a page resource which has had it's UI elements >> localized, for example a data entry form, then you are looking at one >> resource with multiple language-specific representation that you can >> conneg to. >> >> 3. l10N conneg can be either language-driven from the Accept-Language >> header and/or country-driven from the Request IP address. Both are >> valid localizations and both can be used in concert. For example, the >> price of goods and services are country-specific localizations not >> language-specific (both in terms of the currency used and the tax >> regime to be applied). >> >> 4. If you are looking at translations, then you have different >> resources. For example, Tolstoy wrote in Russian. If I don't speak >> Russian or read Cyrillic, then I should have a "Read the English >> translation link" I can click. Notice that the UI elements within >> which the Tolstoy text appears can still be localized (so I see the >> available translations listed in a language I can understand). >> >> 5. 
With the health warning that URI structure is orthogonal to REST,
>> and that whilst the URI is architecturally opaque, human-readable
>> URIs are nonetheless good; this is my personal preference for URI
>> structure:
>>
>> example.com/localized-resource => the resource URI (no representation)
>> example.com/localized-resource.en | .en-gb | .fr | .de => the
>> language conneg URI (no representation)
>> example.com/localized-resource.en-gb.html | .fr.html => the
>> localized html representation
>>
>> If there is an IP-driven country-specific localization:
>>
>> example.co.uk/localized-resource.en-gb.html | .fr.html => the
>> localized html representation for the UK
>>
>> When it comes to translations, use the same structure as above except
>> that each translation will have a different name, e.g.
>>
>> example.co.uk/tolstoy-in-russian.en-gb.html | .fr.html => the
>> localized html representation for the UK but with Russian content
>> example.co.uk/tolstoy-in-english.en-gb.html | .fr.html => the
>> localized html representation for the UK but with the English content
>> translation shown
>>
>> Note to self - I should blog this.
>>
>> Regards,
>> Alan Dean
>> http://twitter.com/adean
>>
>> On Sun, Jan 11, 2009 at 8:34 AM, Subbu Allamaraju <subbu@...> wrote:
>>>
>>> On Jan 10, 2009, at 5:04 PM, Aristotle Pagaltzis wrote:
>>>
>>>> All of your charges are valid insofar as my argument wasn't
>>>> particularly solidly constructed, but I stand by my overall
>>>> thrust that language conneg as per RFC 2616 is a complicated but
>>>> insufficient solution to a non-problem that is actually a feature
>>>> with many advantages.
>>>
>>> It is actually the other way around. Changing the language in a UA is
>>> the inconvenient part, and it is likely because users don't change
>>> their language selection at the OS or UA level often. Language
>>> negotiation itself is not the complicated part.
>>>
>>> To answer the original question, the solution really depends on the
>>> client-side usage. If the clients are machines, and are capable of
>>> negotiation, I would stick with language negotiation. If, on the other
>>> hand, the application is user-facing, and users are required to switch
>>> between languages often for this specific application (for whatever
>>> reasons), then provide links to switch between languages.
>>>
>>> Subbu
* Devdatta <dev.akhawe@...> [2009-01-11 05:55]:
> I haven't really understood the OP's use case but pray what is
> the difference between -
>
> whatever.com/translation/en
> whatever.com/?translation=en
>
> using / over ? and = doesn't make any readability difference*.
My suggestion was
example.org/en/somedoc
which is easier to hack and in some web frameworks also easier to
dispatch (because the language is in a fixed place in the URI).
> I strongly argue this point because I have seen some people
> change over from the second URI design to the first and say
> their design is now 'RESTful'
I never said it has anything to do with RESTfulness.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
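Aristotle's "fixed place" point is easy to see in code. A minimal sketch of dispatching on a leading language segment (the supported-language set and paths are hypothetical, not tied to any particular framework):

```python
# Sketch: extract a leading language code from paths like /en/somedoc.
# The set of supported languages is an illustrative assumption.
SUPPORTED = {"en", "de", "fr"}

def split_language(path):
    """Return (language, remainder) for /en/somedoc-style paths.

    Falls back to (None, path) when no language prefix is present,
    leaving the server free to content-negotiate instead.
    """
    segments = path.lstrip("/").split("/", 1)
    if segments and segments[0] in SUPPORTED:
        rest = "/" + (segments[1] if len(segments) > 1 else "")
        return segments[0], rest
    return None, path

print(split_language("/en/somedoc"))   # ('en', '/somedoc')
print(split_language("/somedoc"))      # (None, '/somedoc')
```

Because the language always sits in the first path segment, a framework can route on it without inspecting the rest of the URI, which is the dispatch convenience being discussed.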
On Jan 11, 2009, at 1:33 PM, Aristotle Pagaltzis wrote:

> My suggestion was
>
> example.org/en/somedoc
>
> which is easier to hack and in some web frameworks also easier to
> dispatch (because the language is in a fixed place in the URI).

I like this URI structure, but doing so because of "some" web
frameworks is about the worst reason imaginable.

- Steve

--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
* Steve Bjorg <steveb@...> [2009-01-11 22:45]:
> On Jan 11, 2009, at 1:33 PM, Aristotle Pagaltzis wrote:
> > My suggestion was
> >
> > example.org/en/somedoc
> >
> > which is easier to hack and in some web frameworks also
> > easier to dispatch (because the language is in a fixed place
> > in the URI).
>
> I like this URI structure, but doing so because of "some" web
> frameworks is about the worst reason imaginable.

It’s bad as a reason but nice as a bonus.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Sun, Jan 11, 2009 at 10:47:01PM +0100, Aristotle Pagaltzis wrote:
> * Steve Bjorg <steveb@...> [2009-01-11 22:45]:
> > I like this URI structure, but doing so because of "some" web
> > frameworks is about the worst reason imaginable.
>
> It’s bad as a reason but nice as a bonus.

It seems weird to be encoding language into the URI of a resource when
one of the goals of HTTP, resource variants, and content negotiation
was to keep the representation variants collected under a single
resource name. Most commonly, this includes content type and language.

Of course, there are going to be times when you want to explicitly
select a resource variant instead of leaving it up to content
negotiation, and in this case you want to encode the language or
content type into the URI in the least damaging way possible.

Just like using http://example.org/html/about and
http://example.org/pdf/about would seem a little absurd, so does
http://example.org/en/about to me. In each case you end up with some
leaked abstraction of resource negotiation across the whole of your
site. All your regular URIs now start with "/html/"! There is no way
of pointing someone to http://example.org/about and falling back on
conneg.

The only sensible way to have your cake and eat it when it comes to
providing content negotiation and manual variant selection is to
encode the variants at the tail end of the URI, so you would choose:

http://example.org/about
http://example.org/about.en
http://example.org/about.fr

Or maybe:

http://example.org/about
http://example.org/about.pdf
http://example.org/about.html

In both of these cases, you can manually link to and use the resource
variants without imposing a "URI tax" on the rest of your site.
Leave out the extensions and you fall back to conneg.

Best,
--
Noah Slater, http://tumbolia.org/nslater
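Noah's tail-extension scheme can be sketched as a small resolver: strip a trailing variant extension if one is present, otherwise signal that the server should content-negotiate. The extension tables below are illustrative assumptions, not part of any standard:

```python
# Sketch of the tail-extension scheme: /about, /about.en, /about.pdf.
# A trailing extension pins a variant; no extension means "use conneg".
LANG_EXT = {"en", "fr", "de"}                         # assumed languages
TYPE_EXT = {"html": "text/html", "pdf": "application/pdf"}

def resolve(path):
    """Return (resource, forced_variant); forced_variant is None when
    the request should fall back to content negotiation."""
    base, dot, ext = path.rpartition(".")
    if dot and ext in LANG_EXT:
        return base, ("language", ext)
    if dot and ext in TYPE_EXT:
        return base, ("type", TYPE_EXT[ext])
    return path, None

print(resolve("/about.en"))   # ('/about', ('language', 'en'))
print(resolve("/about"))      # ('/about', None)
```

Note that every variant URI still names the same underlying resource, so no "URI tax" is imposed on the rest of the site.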
* Noah Slater <nslater@...> [2009-01-12 02:25]:
> It seems weird to be encoding language into the URI of a
> resource when one of the goals of HTTP, resource variants,
> and content negotiation was to keep the representation variants
> collected under a single resource name. Most commonly, this
> includes content type and language.

Different-language versions are rarely fungible in the way that
different-content-type versions are. Even then the concept is fraught
with leaky abstractions, but at least it remains in the realm of
formalisms rather than touching on human culture. All the really
painful problems in computing lie at its intersection with culture:
scripts/writing systems, languages, datetimes, finance, etc. Humans
are holistic and messy.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Mon, Jan 12, 2009 at 09:22:29AM +0100, Aristotle Pagaltzis wrote:
> Different-language versions are rarely fungible in the way that
> different-content-type versions are.

Saying that content types are fungible is fallacious. One HTML
representation of a resource may be totally different from another.
Web architecture does not mandate that resource variants be either
fungible or canonical representations.

Even then, I'm unsure what fungibility would have to do with anything.
A key concept of Webarch is that resources may have multiple
representations, and that these can be navigated by the UA. By
designing a site that ignores this provision totally, you're missing
out on some low-hanging fruit.

> Even then the concept is fraught with leaky abstractions, but at
> least it remains in the realm of formalisms rather than touching on
> human culture. All the really painful problems in computing lie at
> its intersection with culture: scripts/writing systems, languages,
> datetimes, finance, etc. Humans are holistic and messy.

I'm not sure what this is meant to mean. Are you saying that the split
between resource and representation is a leaky abstraction? Sure, it's
not perfect, but for the most part it works, and it is a core part of
Webarch.

Providing two language variants of a resource is no more a profound
statement of absolutes than providing two media types. It's simply a
way of advertising alternate representations of a single resource
using the standard provisions of HTTP. I don't see the problem.

--
Noah Slater, http://tumbolia.org/nslater
Noah Slater wrote:
>
> Providing two language variants of a resource is no more a profound
> statement of absolutes than providing two media types. It's simply a
> way of advertising alternate representations of a single resource
> using the standard provisions of HTTP. I don't see the problem.

I completely agree with you here - but apparently, according to
previous discussions, separate representations are best treated as
separate resources. Even for content type (i.e. separate .html, .xml,
and .json URIs for different representations of the same resource).

The rationale is that if I send you a link (say, in an email) to
example.com/document that I had negotiated as the German XML
representation, and you follow that link, your browser will open and
request the English HTML representation. Apparently that's a problem,
although I remain thoroughly unconvinced of this if these are in fact
merely separate representations of the *same* resource.

Regards,
Mike
Hi there!

I like the idea of RESTful web services a lot in theory, especially
compared to the alternative...

So we started building a pudding proofing web service for our
application. (Please note that the web service is only an auxiliary
function of the application; web services are not our core business.)

Our prototype service model is intentionally simplistic: we have
"nodes" and "users" in a one-to-many containment relationship.
All is fine creating, requesting and updating them; deleting "users"
is also fine. Before deleting a "node", however, the normal usage
pattern would be migrating all its "users" to another "node". I am
having trouble adding this single "RPC-ish" operation to our otherwise
RESTful service efficiently.

The clean solution would be the client listing, and migrating, all
"users" of the "node". This is, however, neither efficient nor atomic.
The quick and dirty solution would be passing the fallback "node" as
an argument of the DELETE operation, but that breaks many principles
and benefits of REST.

I am pretty sure similar architecturally unstylish situations will
arise in all our services, just like databases have stored procedures,
not just SQL.

Surely I am not the first with such an issue. Could the kind members
of this fine discussion group share their opinions on this, and/or
point me towards relevant literature?

Thank you in advance:

Gabor Szokoli
On Mon, Jan 12, 2009 at 12:57:02PM +0000, Mike wrote:
> I completely agree with you here - but apparently, according to previous
> discussions, separate representations are best treated as separate
> resources. Even for content type (i.e. separate .html .xml .json URIs
> for different representations of the same resource).

This advice ignores a significant chunk of Web architecture.

> The rationale is that, if I send you a link (say, in an email) to
> example.com/document that I had negotiated as the german xml
> representation - if you follow that link; your browser will open and
> request the english html representation. Apparently that's a problem,
> although I remain thoroughly unconvinced of this if these are in fact
> merely separate representations of the *same* resource.

Sure, there are cases when you want to be able to link to a specific
representation, and you can still provide for this. Let's say you have
the following document:

http://example.org/doc

This is available in English and German. Your computer system is
configured with an English locale and so when you request the document
you get the English version. If you pass a link to your friend who is
German, his system is configured with a German locale and so he will
get the German version. This is transparent localisation, which is
great!

There might be times when you want to bypass this system. Perhaps the
document is an excerpt from Wittgenstein's Tractatus and you want to
make a comment about the German version explicitly.

You provide the following language variants:

http://example.org/doc.en
http://example.org/doc.de

Include something like the following in the head element:

<link rel="alternate" hreflang="en" href="/doc.en" title="English">
<link rel="alternate" hreflang="de" href="/doc.de" title="Deutsche">

Include something in the body element that renders similar to:

Available languages: [English], [Deutsche]

You can now select a variant and pass the link to a friend without it
being negotiated. Using this method, you are providing first-class
language variants and a resource that knows how to negotiate between
them. Best of both worlds.

--
Noah Slater, http://tumbolia.org/nslater
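The server side of that transparent localisation can be sketched roughly as follows. This is a deliberate simplification: it parses q-values and matches exact tags only, whereas a real implementation should follow RFC 2616's full Accept-Language rules (language ranges, prefix matching, whitespace around parameters):

```python
def negotiate_language(accept_language, available):
    """Pick the best available language for an Accept-Language header.

    Simplified sketch: handles 'tag;q=value' items separated by commas;
    does not handle ranges like 'en-*' or spaces before 'q='.
    """
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            prefs.append((tag.strip().lower(), float(q)))
        else:
            prefs.append((piece.lower(), 1.0))
    prefs.sort(key=lambda p: -p[1])     # highest preference first
    for tag, q in prefs:
        if q > 0 and tag in available:
            return tag
    return None  # respond 406 Not Acceptable, or serve a default

print(negotiate_language("de;q=0.9, en;q=0.8", {"en", "de"}))  # de
```

With /doc.en and /doc.de also published, the negotiating resource and the pinned variants coexist exactly as described above.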
I, with my limited understanding of "restfulness", don't see why
passing the "migrate to" node as a parameter "breaks many principles
and benefits of REST". It seems to me the "correct" way to do it. Why
do you think that? And what principles and benefits are you referring
to?

On Jan 12, 2009 11:01am, Gabor Szokoli <szocske@...> wrote:
> Hi there!
>
> I like the idea of RESTful web services a lot in theory, especially
> compared to the alternative...
>
> So we started building a pudding proofing web service for our
> application. (Please note that the web service is only an auxiliary
> function of the application, web services are not our core business.)
>
> Our prototype service model is intentionally simplistic: we have
> "nodes" and "users" in a one-to-many containment relationship.
> All is fine creating, requesting and updating them, deleting "users"
> is also fine.
> Before deleting a "node" however, the normal usage pattern would be
> migrating all its "users" to another "node".
> I am having trouble adding this single "RPC-ish" operation to our
> otherwise RESTful service efficiently.
> The clean solution would be the client listing, and migrating all
> "users" of the "node".
> This is however neither efficient, nor atomic.
> The quick and dirty solution would be passing the fallback "node" as
> an argument of the DELETE operation, but that breaks many principles
> and benefits of REST.
>
> I am pretty sure similar architecturally unstylish situations will
> arise in all our services, just like databases have stored procedures,
> not just SQL.
>
> Surely I am not the first with such an issue. Could the kind members
> of this fine discussion group share their opinions on this, and/or
> point me towards relevant literature?
>
> Thank you in advance:
>
> Gabor Szokoli
On 12.01.2009, at 12:01, Gabor Szokoli wrote:

> Before deleting a "node" however, the normal usage pattern would be
> migrating all its "users" to another "node".
> I am having trouble adding this single "RPC-ish" operation to our
> otherwise RESTful service efficiently.
> The clean solution would be the client listing, and migrating all
> "users" of the "node".
> This is however neither efficient, nor atomic.

You can GET the user list from /node/old and POST it to /node/new.
The implementation of the node resource would remove the users from
their current node and add them to itself. Assuming all users and
nodes are kept in the same data store, it can do so atomically.

The next GET to /node/old will return an empty list. The next GET to
/node/new will return both the previously existing as well as the
migrated users.

If the old and new nodes are kept in different systems, you need to
find a different solution, e.g. by marking the users as "in migration"
in the old node before actually migrating them.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On Jan 12, 2009, at 3:01 AM, Gabor Szokoli wrote:

> Our prototype service model is intentionally simplistic: we have
> "nodes" and "users" in a one-to-many containment relationship.
> All is fine creating, requesting and updating them, deleting "users"
> is also fine.
> Before deleting a "node" however, the normal usage pattern would be
> migrating all its "users" to another "node".
> I am having trouble adding this single "RPC-ish" operation to our
> otherwise RESTful service efficiently.
> The clean solution would be the client listing, and migrating all
> "users" of the "node".
> This is however neither efficient, nor atomic.
> The quick and dirty solution would be passing the fallback "node" as
> an argument of the DELETE operation, but that breaks many principles
> and benefits of REST.

Take a look at the WebDAV spec for some lateral thinking inspiration
[1]. In particular the MOVE method [2].

- Steve

[1] http://tools.ietf.org/html/rfc4918
[2] http://restpatterns.org/HTTP_Methods/MOVE

---------------------------------
Steve G. Bjorg
MindTouch
San Diego, CA
619.795.8459 office
425.891.5913 mobile
http://twitter.com/bjorg
At Sun, 11 Jan 2009 00:34:56 -0800, Subbu Allamaraju <subbu@...> wrote:
>
> […]
>
> To answer the original question, the solution really depends on the
> client-side usage. If the clients are machines, and are capable of
> negotiation, I would stick with language negotiation. If, on the
> other hand, the application is user-facing, and users are required
> to switch between languages often for this specific application (for
> whatever reasons), then provide links to switch between languages.

Something else to keep in mind is that many crawlers, including
Heritrix, which is used by many organizations to archive the web, do
not (currently) handle content negotiation, so if you are not exposing
languages with URIs you are going to have only your server's default
language version archived.

best,
Erik Hetzner
* Stefan Tilkov <stefan.tilkov@...> [2009-01-12 14:55]:
> You can GET the user list from /node/old and POST it to
> /node/new. The implementation of the node resource would remove
> the users from their current node and add them to itself.
> Assuming all users and nodes are kept in the same data store,
> it can do so atomically.

Not quite, since the old node must still be deleted in a separate
step. POSTing the URI of the old node to the new node seems like the
least objectionable approach to me.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On 12.01.2009, at 21:35, Aristotle Pagaltzis wrote:

> * Stefan Tilkov <stefan.tilkov@...> [2009-01-12 14:55]:
> > You can GET the user list from /node/old and POST it to
> > /node/new. The implementation of the node resource would remove
> > the users from their current node and add them to itself.
> > Assuming all users and nodes are kept in the same data store,
> > it can do so atomically.
>
> Not quite, since the old node must still be deleted in a separate
> step.

True. The implementation handling the POST could delete the old node
if it's empty, but I agree this violates assumptions, so I'd do it in
a separate step. But Gabor wrote:

> Before deleting a "node" however, the normal usage pattern would be
> migrating all its "users" to another "node".

So migrating would be the first action, followed by a DELETE.

> POSTing the URI of the old node to the new node seems like the
> least objectionable approach to me.

For "MOVE" semantics without a separate verb, I agree.

Stefan
* Stefan Tilkov <stefan.tilkov@...> [2009-01-12 22:00]:
> * Aristotle Pagaltzis <pagaltzis@...> [2009-01-12 21:40]:
> > POSTing the URI of the old node to the new node seems like
> > the least objectionable approach to me.
>
> For "MOVE" semantics without a separate verb, I agree.

I thought about MOVE momentarily, but to me that seems more
objectionable: the old node isn’t overwritten as MOVE would suggest;
rather, their contents are merged. This is not a MOVE, it’s something
like, I dunno, SUBSUME. And if I’m not going to adhere strictly to the
contract of a verb, I’d rather fall back on POST…

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Along the lines of what Aristotle was suggesting, it is better to make
these changes in the context of another resource, a la a sidekick.
Here is one possibility:

POST /mover

<mover>
  <source>some ref to the source node</source>
  <target>some ref to the node to which the children of the source
  will be added</target>
</mover>

Upon success:

303 See Other
Location: URI to the updated target node

If you prefer, you can create new permanent or semi-permanent
resources after each POST. Please note that there is nothing unRESTful
about this model - resources can spawn other resources or can be
ephemeral.

Stefan's suggestion will work, except that it leaves it up to the
client to DELETE the source node in a separate step. If your
application requires atomicity, the above will guarantee that.

Subbu

On Jan 12, 2009, at 3:01 AM, Gabor Szokoli wrote:

> Hi there!
>
> I like the idea of RESTful web services a lot in theory, especially
> compared to the alternative...
>
> So we started building a pudding proofing web service for our
> application. (Please note that the web service is only an auxiliary
> function of the application, web services are not our core business.)
>
> Our prototype service model is intentionally simplistic: we have
> "nodes" and "users" in a one-to-many containment relationship.
> All is fine creating, requesting and updating them, deleting "users"
> is also fine.
> Before deleting a "node" however, the normal usage pattern would be
> migrating all its "users" to another "node".
> I am having trouble adding this single "RPC-ish" operation to our
> otherwise RESTful service efficiently.
> The clean solution would be the client listing, and migrating all
> "users" of the "node".
> This is however neither efficient, nor atomic.
> The quick and dirty solution would be passing the fallback "node" as
> an argument of the DELETE operation, but that breaks many principles
> and benefits of REST.
>
> I am pretty sure similar architecturally unstylish situations will
> arise in all our services, just like databases have stored procedures,
> not just SQL.
>
> Surely I am not the first with such an issue. Could the kind members
> of this fine discussion group share their opinions on this, and/or
> point me towards relevant literature?
>
> Thank you in advance:
>
> Gabor Szokoli
* Subbu Allamaraju <subbu@...> [2009-01-13 02:15]:
> POST /mover
That’s more SOAPy than the other suggestions so far: you put the
verb in the URI and the addresses in the entity body, so in HTTP
uniform interface terms this is the most opaque of approaches
suggested. Intermediaries would have to parse an XML entity body
to route the request and caches will never know to invalidate
either of the actual resources involved.
I consider explicit addressing the primary criterion to preserve
when straining against the uniform interface. So the solution I
proposed is bad insofar as it puts the verb in the entity body
rather than in the HTTP method, but at least it exposes one of
the resources being operated on at the HTTP level. That’s the
best that can be done within the confines of the HTTP uniform
interface. (If you do it by introducing a new general-purpose
method that covers these semantics then you have done better but
you also have a different uniform interface than the HTTP one –
tradeoffs, tradeoffs…)
PS.: in case someone wants to propose minting a new processor
resource for each node so one can POST the URI to something like
this:
/node/new/subsumer
I considered and rejected that because it essentially just
lifts the verb from the entity body to the URI (which is a minor
gain as it allows intermediaries to specifically route such
requests without inspecting the body) at the cost of misdirected
cache invalidation in intermediaries and clients (which is a big
loss).
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Well, there is the difference that this way the old node will delete
itself after posting its list of users to the new node...

On Jan 13, 2009 1:11pm, Aristotle Pagaltzis <pagaltzis@...> wrote:
> * amsmota@... <amsmota@...> [2009-01-13 13:50]:
> > I still don't see why
> >
> > POST /oldnode?newnode=xpto
> >
> > is unrestish...
>
> Since this is a POST request you would put the key/value pair in
> the entity body rather than the URI, and then you basically have
> the same solution as I proposed, except that you suggest posting
> to the old node where I suggested posting to the new one (which
> makes no practical difference).
>
> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
One approach would be to treat the process as a 'job' that might take
time to complete. In this case, you could use something like the
following:
POST /job-queue/
<migrate-document>
<source-id />
<destination-id />
</migrate-document>
RESPONSE 202 Accepted
Location: /job-queue/{job-id}
GET /job-queue/{job-id}
would return a document showing the status of the job.
Once the job is complete, the resource could include a <link
rel="delete" href="source-id" /> to instruct the client to complete
the workflow.
The server could do whatever is deemed appropriate to guard against
broken workflow, including locking {source-id} and/or {destination-id}
while the job is in progress, reporting 405 (Method Not Allowed) for
DELETE /{source-id} while the job is in progress, etc.
mca
http://amundsen.com/blog/
On Tue, Jan 13, 2009 at 07:36, Aristotle Pagaltzis <pagaltzis@...> wrote:
> * Subbu Allamaraju <subbu@...> [2009-01-13 02:15]:
>> POST /mover
>
> That's more SOAPy than the other suggestions so far: you put the
> verb in the URI and the addresses in the entity body, so in HTTP
> uniform interface terms this is the most opaque of approaches
> suggested. Intermediaries would have to parse an XML entity body
> to route the request and caches will never know to invalidate
> either of the actual resources involved.
>
> I consider explicit addressing the primary criterion to preserve
> when straining against the uniform interface. So the solution I
> proposed is bad insofar as it puts the verb in the entity body
> rather than in the HTTP method, but at least it exposes one of
> the resources being operated on at the HTTP level. That's the
> best that can be done within the confines of the HTTP uniform
> interface. (If you do it by introducing a new general-purpose
> method that covers these semantics then you have done better but
> you also have a different uniform interface than the HTTP one –
> tradeoffs, tradeoffs…)
>
> PS.: in case someone wants to propose minting a new processor
> resource for each node so one can POST the URI to something like
> this:
>
> /node/new/subsumer
>
> I considered and rejected that because it merely essentially
> lifts the verb from the entity body to the URI (which is a minor
> gain as it allows intermediaries to specifically route such
> requests without inspecting the body) at the cost of misdirected
> cache invalidation in intermediaries and clients (which is a big
> loss).
>
> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
>
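Mike's job-queue pattern can be sketched in a few lines. The resource names, statuses, and in-memory store below are hypothetical stand-ins for a real queue implementation:

```python
import itertools

# Hypothetical in-memory store backing the /job-queue/ resource.
jobs = {}
next_id = itertools.count(1)

def post_job(source_id, destination_id):
    """POST /job-queue/ -> 202 Accepted plus a Location header."""
    job_id = next(next_id)
    jobs[job_id] = {"source": source_id, "destination": destination_id,
                    "status": "pending"}
    return 202, f"/job-queue/{job_id}"

def get_job(job_id):
    """GET /job-queue/{job-id} -> status document the client polls."""
    return jobs[job_id]

status, location = post_job("/node/old", "/node/new")
print(status, location)      # 202 /job-queue/1
jobs[1]["status"] = "done"   # a worker completes the migration
print(get_job(1)["status"])  # done
```

Once the status document reports completion, it can carry the `<link rel="delete" href="..."/>` Mike describes, telling the client the DELETE of the source node is now safe.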
Since amsmota@... accidentally emailed me instead of the list, here’s
the dialogue that followed:

* amsmota@... <amsmota@...> [2009-01-13 13:50]:
> I still don't see why
>
> POST /oldnode?newnode=xpto
>
> is unrestish...

* Aristotle Pagaltzis <pagaltzis@...> [2009-01-13 14:11]:
> Since this is a POST request you would put the key/value pair
> in the entity body rather than the URI, and then you basically
> have the same solution as I proposed, except that you suggest
> posting to the old node where I suggested posting to the new
> one (which makes no practical difference).

* amsmota@... <amsmota@...> [2009-01-13 14:25]:
> Well, there is the difference that this way the old node will
> delete itself after posting its list of users to the new node...

* Aristotle Pagaltzis <pagaltzis@...> [2009-01-13 16:09]:
> No, there isn’t. That’s exactly what I proposed, and whether you
> post the new node to the old one or vice versa doesn’t make any
> difference in this regard.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
I'm trying to use RESTful design in my projects now, and I'm finding
it easier to implement without a "URI hierarchy"...

For example, all of my resources could be said to be "inside" or
"under" just 2 "master" resources - Users and Projects. But some
resources belong to both - e.g., a "ticket" submitted by a user to a
project. Not only that, but the ticket also belongs to a certain
"tracker" of tickets within the project. So the possible hierarchies
of a ticket resource are numerous:

/users/daveyjones/tickets/1001
/projects/locker/tickets/1001
/tickets/1001
/projects/locker/tracker/bugs/1001

Implementing this many different URIs doesn't sound fun. So I'm
thinking to throw out hierarchy altogether and use top-level resource
identification only:

/users/daveyjones
/projects/locker
/trackers/bugs
/tickets/1001

But here "sub"-resources' namespaces will quickly fill up, such that I
can't expose the Locker project's bugs tracker AND the Ship project's
bugs tracker without using numerical identifiers, which I also don't
like very much. :(

So I'm just looking for some general thoughts - what are some other
important factors for deciding which, if any, URI hierarchy to use?
On Jan 13, 2009, at 4:36 AM, Aristotle Pagaltzis wrote:

>> POST /mover
>
> That’s more SOAPy than the other suggestions so far: you put the
> verb in the URI and the addresses in the entity body, so in HTTP
> uniform interface terms this is the most opaque of approaches
> suggested. Intermediaries would have to parse an XML entity body
> to route the request and caches will never know to invalidate
> either of the actual resources involved.

1. The POST is to a URI. We can argue whether the word "mover" is a
verb or noun, but that is opaque as far as the protocol is concerned.

2. The fact that the representation refers to other resources does
not break the uniform interface.

3. Intermediaries do not need to parse the request to route it. The
media type, if a proper one is used, will indicate what representation
is being exchanged.

4. Caches will not be able to flush, and that is true for any
operation that spawns multiple resources. The solutions by Stefan and
by you, and the one I posted, have the same characteristic. That's why
we have invalidation caching.

> I consider explicit addressing the primary criterion to preserve
> when straining against the uniform interface. So the solution I
> proposed is bad insofar as it puts the verb in the entity body
> rather than in the HTTP method, but at least it exposes one of
> the resources being operated on at the HTTP level. That’s the
> best that can be done within the confines of the HTTP uniform
> interface. (If you do it by introducing a new general-purpose
> method that covers these semantics then you have done better but
> you also have a different uniform interface than the HTTP one –
> tradeoffs, tradeoffs…)

I could change the URI to the source or target node in this specific
example, but once we extend this example to include, say, ten nodes
instead of two, the differences between POSTing to one of those nodes
vs. to an ephemeral/permanent sidekick will not matter much.

Subbu
---
http://subbu.org
On Jan 13, 2009, at 7:50 AM, Subbu Allamaraju wrote: > 3. Intermediaries do not need to parse the request to route it. The > media type, if a proper one is used, will indicate what representation > is being exchanged. I can't help but cringe at this... A uniform interface means using an application neutral media type. So the media type is irrelevant to the operation. - Steve -------------- Steve G. Bjorg http://mindtouch.com http://twitter.com/bjorg irc.freenode.net #mindtouch
On Jan 13, 2009, at 7:50 AM, Subbu Allamaraju wrote: > 3. Intermediaries do not need to parse the request to route it. The > media type, if a proper one is used, will indicate what > representation is being exchanged. btw, this should have been "what type of representation". Subbu --- http://subbu.org
Sent off-list by accident: On Mon, Jan 12, 2009 at 2:52 PM, Stefan Tilkov <stefan.tilkov@...> wrote: > > You can GET the user list from /node/old and POST it to /node/new. This sounds good. Or posting the URI of the list even (unless that would allow the user to POST something he would otherwise be unable to GET, but our authority model is not so refined yet.) Is there a convention for representing a single (or a list of) URIs in XML by the way? We can't go with MOVE, I already have to emulate PUT and DELETE to suit GWT clients. (we allow POSTing to an existing ID for update, and POSTing an empty body to an existing ID for delete.) We will probably investigate the possibility of an ephemeral "transaction" resource when I need true atomicity. For now, the erasure of the "node" can simply fail if a new "user" has been added to it concurrently since the move. I'd like to thank you all for the enlightening discussion, please carry on :-) Gabor Szokoli
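On Gabor's question about a convention for representing a single URI or a list of URIs: besides wrapping atom:link elements in XML, there is the registered plain-text text/uri-list media type (RFC 2483) - one URI per line, lines starting with '#' are comments. A minimal sketch of producing and consuming it:

```python
def parse_uri_list(body):
    """Parse a text/uri-list body: one URI per line, '#' lines are comments."""
    uris = []
    for line in body.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            uris.append(line)
    return uris

def make_uri_list(uris):
    """Serialize URIs as text/uri-list (CRLF line endings per the RFC)."""
    return "".join(u + "\r\n" for u in uris)
```

This fits the "POST the URI of the list" idea directly: the entity body is nothing but URIs.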
* Subbu Allamaraju <subbu@...> [2009-01-13 17:10]: > 1. The POST is to a URI. We can argue whether the word "mover" > is a verb or noun, but that is opaque as far as the protocol > is concerned. By that token you need only one URI and only POST. Err, wait… > 2. The fact that the representation refers to other resources > does not break the uniform interface. You aren’t talking about anything I said. Since I suggested sending one of the URIs in the entity body myself, I don’t know what argument you think I was making. > 3. Intermediaries do not need to parse the request to route it. > The media type, if a proper one is used, will indicate what > representation is being exchanged. We had the media type proliferation argument in another thread. > 4. Caches will not be able to flush, and that is true for any > operation that spawns multiple resources. The > solution by Stefan, by you, and the one I posted have the > same characteristic. That's why we have invalidation > caching. The solutions that POST to one of the nodes will fail to invalidate the other; the solution you proposed will fail to invalidate *any* of them. > I could change the URI to the source or target node in this > specific example, but once we extend this example to include, > say, ten nodes instead of two Seems like architecture astronautics. One specific problem was given. I see no reason to care about how the proposed solution generalises to a problem I don’t have – since any solution to either problem is going to be a compromise anyway. (But in fact it generalises no worse. You simply POST more than one source URI to the target URI. It only breaks down once you posit that there can be more than one target URI, but *that* problem is *so* much more complex (how do you specify which target takes which entries from which source?) that I say YAGNI until it crops up.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> I'm trying to use RESTful design in my projects now, and I'm finding > it easier to implement without a "URI hierarchy" ... A REST architecture does not mandate cool or hierarchical uris. Actually, for some, it's preferable to have totally opaque uris. This is from Tim Berners-Lee's Axioms of Web Architecture [1]: "The only thing you can use an identifier for is to refer to an object. When you are not dereferencing you should not look at the contents of the URI string to gain other information as little as possible. For the bulk of Web use they are passed around without anyone looking at their internal contents, the content of the string itself. This is known as the opacity. Software should be made to treat URIs as generally as possible, to allow the most reuse of existing or future schemes." [1] http://www.w3.org/DesignIssues/Axioms.html#opaque -v.
> * Subbu Allamaraju <subbu@...> [2009-01-13 17:10]: >> 1. The POST is to a URI. We can argue whether the word "mover" >> is a verb or noun, but that is opaque as far as the protocol >> is concerned. > > By that token you need only one URI and only POST. Err, wait… That's further generalization than I was implying. The resource in my example was solely there to represent some well-defined thing. When it is ephemeral, it follows the web-style redirect-after-post pattern. When it is not, it returns 201, i.e., a new subordinate resource gets created under the URI used to POST. If the operation is done asynchronously, it returns a 202 and a link to something to monitor it. > >> 2. The fact that the representation refers to other resources >> does not break the uniform interface. > > You aren’t talking about anything I said. Since I suggested > sending one of the URIs in the entity body myself, I don’t know > what argument you think I was making. I was referring to the second part of "you put the verb in the URI and the addresses in the entity body". >> 4. Caches will not be able to flush, and that is true for any >> operation that spawns multiple resources. The >> solution by Stefan, by you, and the one I posted have the >> same characteristic. That's why we have invalidation >> caching. > > The solutions that POST to one of the nodes will fail to > invalidate the other; the solution you proposed will fail to > invalidate *any* of them. Sorry, but IMHO, that is splitting hairs. This issue crops up whenever a change to one resource affects some other resource. Subbu --- http://subbu.org
> A REST architecture does not mandate cool or hierarchical uris.
> Actually, for some, it's preferable to have totally opaque uris.
However, Jakob Nielsen makes a strong case [1] for meaningful uris:
"The URL will continue to be part of the Web user interface for
several more years, so a usable site requires:
* a domain name that is easy to remember and easy to spell
* short URLs
* easy-to-type URLs
* URLs that visualize the site structure
* URLs that are "hackable" to allow users to move to higher levels
of the information architecture by hacking off the end of the URL
* persistent URLs that don't change"
BTW, a uri does not have to 'embed' its factory uri; i.e. you
could POST to /users/daveyjones/tickets/1001 and have the resource be
created at /tickets/123
Vincent
[1] http://www.useit.com/alertbox/990321.html
mike amundsen wrote:
> One approach would be to treat the process as a 'job' that might take
> time to complete. In this case, you could use something like the
> following:
>
> POST /job-queue/
> <migrate-document>
> <source-id />
> <destination-id />
> </migrate-document>
>
> RESPONSE 202 Accepted
> Location: /job-queue/{job-id}
>
> GET /job-queue/{job-id}
> would return a document showing the status of the job.
> Once the job is complete, the resource could include a <link
> rel="delete" href="source-id" /> to instruct the client to complete
> the workflow.
>
> The server could do whatever is deemed appropriate to guard against
> broken workflow including locking {source-id} and/or {destination-id}
> while the job is in progress, reporting 405 (Method Not Allowed) for
> DELETE /{source-id} while the job is in process, etc.
Love it.
Bill
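The 'job' pattern above takes only a few lines to sketch (names are illustrative; the real design point is the 202 + Location, the pollable status resource, and the follow-up link that appears on completion):

```python
import uuid

JOBS = {}  # stand-in for the job-queue store

def post_job(source_id, destination_id):
    """POST /job-queue/ - accept the migration as an asynchronous job."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"status": "pending",
                    "source": source_id, "dest": destination_id}
    return 202, {"Location": "/job-queue/%s" % job_id}

def get_job(job_id):
    """GET /job-queue/{job-id} - a status document the client can poll."""
    job = JOBS[job_id]
    doc = {"status": job["status"]}
    if job["status"] == "complete":
        # instruct the client how to finish the workflow
        doc["link"] = {"rel": "delete", "href": job["source"]}
    return doc
```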
Subbu Allamaraju wrote: > > > I agree with the concerns that Bill points out. Translating those > diffs to the database layer can get expensive/complex, unless such > translation is implicit in the programming framework of choice. Before > casting the problem as that of "partially updating *a* resource", it > may be cheaper to either adjust the granularity of resources or > identify special-purpose (i.e. application specific) resources that > can make such updates to resources. The same goes for batch use cases > as well. Isn't this a job for forms posting?

- PUT is taken for overwrites
- PATCH is type specific (code needed for clients and servers)

Whereas the one widely deployed means to update some fields and not others is a forms post. Granted it's heavily associated with REST-RPC hybrids, but it would seem to work for most cases, which are updates to specific fields. Bill
Noah Slater wrote: > > > On Sun, Jan 11, 2009 at 10:47:01PM +0100, Aristotle Pagaltzis wrote: > > * Steve Bjorg <steveb@... <mailto:steveb%40mindtouch.com>> > [2009-01-11 22:45]: > > > On Jan 11, 2009, at 1:33 PM, Aristotle Pagaltzis wrote: > > > > My suggestion was > > > > > > > > example.org/en/somedoc > > > > > > > > which is easier to hack and in some web frameworks also > > > > easier to dispatch (because the language is in a fixed place > > > > in the URI). > > > > > > I like this URI structure, but doing so because of "some" web > > > frameworks is about the worst reason imaginable. > > > > It’s bad as a reason but nice as a bonus. > > It seems weird to be encoding language into the URI of a resource when > one of > the goals of HTTP, resource variants, and content negotiation was to > keep the > representation variants collected under a single resource name. Most > commonly, > this includes content type and language. This can be a real rathole. In the last multilingual system I worked on, there was no question that each language variant was a resource and the resources needed to be related. Given that, whether the language code appears in the URL (or the cname, or as a param) is a server detail. Bill
* Bill de hOra <bill@...> [2009-01-14 00:30]: > - PATCH is type specific (code needed for clients and servers) Is there any reason application/x-www-form-urlencoded wouldn’t work as a PATCH format? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Depends on how easy / difficult it is to generate a changeset from urlencoded, or any other form of key/value pairs format out there such as multipart. I have a simple implementation of this for ChangeSet<T> in OpenRasta, but this will only ever work in an update model. Providing add / remove semantics on top of key/value pairs is not a simple issue. Seb > To: rest-discuss@yahoogroups.com > From: pagaltzis@... > Date: Wed, 14 Jan 2009 05:09:57 +0100 > Subject: [rest-discuss] Re: Data format of updates > > * Bill de hOra <bill@...> [2009-01-14 00:30]: > > - PATCH is type specific (code needed for clients and servers) > > Is there any reason application/x-www-form-urlencoded wouldn’t > work as a PATCH format? > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/>
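Seb's limitation can be shown directly: treating an application/x-www-form-urlencoded body as a flat changeset expresses only "set field X to value Y" - there is no natural way to say "remove X" or "append to X" short of inventing a convention. A sketch of the update-only case (the resource shape is hypothetical):

```python
from urllib.parse import parse_qsl

def apply_form_patch(resource, body):
    """Apply a form-urlencoded body as a field-level update changeset.

    Only assignment is expressible; add/remove semantics would need an
    out-of-band convention on top of the key/value pairs.
    """
    updated = dict(resource)  # leave the original untouched
    for field, value in parse_qsl(body):
        updated[field] = value
    return updated
```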
vincent.lari wrote: >> A REST architecture does not mandate cool or hierarchical uris. >> Actually, for some, it's preferable to have totally opaque uris. > > However, Jakob Nielsen makes a strong case [1] for meaningful uris: > > "The URL will continue to be part of the Web user interface for > several more years, so a usable site requires: > > * a domain name that is easy to remember and easy to spell > * short URLs > * easy-to-type URLs > * URLs that visualize the site structure > * URLs that are "hackable" to allow users to move to higher levels > of the information architecture by hacking off the end of the URL > * persistent URLs that don't change" Whether or not you do use meaningful URIs, there will always be people who attempt to attribute meaning to them: this is a simple principle of human informatics, just as you find numerologists attributing meaning to arbitrary symbols and numbers. Quite a few malware and porn sites have subverted this principle, using inoffensive and unrelated keywords in URLs to conceal more sinister content, or simply as practical jokes, e.g. http://www2.b3ta.com/top-10-cutest-kittens/ (not work safe) The real question here is, do you exploit this mechanism, thereby increasing users' exposure to "readable" URLs and their expectation that the URL should somehow summarise its content (and therefore their vulnerability to subversion of that expectation), or do you just make arbitrary URLs in the ivory tower conceit that "the specification says that there is no correlation between the form of the URL and its content", which risks firstly alienating users, and secondly them attributing some other, unintended semantics. -- Chris Burdess
groovepapa82 wrote: > I'm trying to use RESTful design in my projects now, and I'm finding > it easier to implement without a "URI hierarchy" ... URI hierarchies are tools (not tools that are anything to do with REST for that matter). If the tool suits a job then use it, if it doesn't then don't.
vincent.lari wrote: >> I'm trying to use RESTful design in my projects now, and I'm finding >> it easier to implement without a "URI hierarchy" ... > > A REST architecture does not mandate cool or hierarchical uris. REST architecture does benefit from cool URIs. Caching restarts from scratch whenever a URI changes, so maintaining URIs is a good thing. REST itself doesn't benefit from hierarchical URIs, but URI references in hypermedia can. > Actually, for some, it's preferable to have totally opaque uris. It's not so much that URIs should or should not be opaque as that URIs *are* opaque. http://example.net/resources/resource is opaque. The hyperlink ".." tells us about a relationship between it and http://example.net/resources/resource but it is the hyperlink that informs of that relationship, not the format of the URI. The hyperlink is shorter, and perhaps can be hard-coded in the code that produced the document, because of the hierarchy. There is also no harm in being guessable. But http://example.net/resources/resource remains as opaque as http://example.net/sdfaoijwfe/fasd.sdfawifea?fds+asd and software must not assume otherwise until a hyperlink suggests otherwise.
We're discussing architecture choices in our team at the moment, and while I'm quite strongly in favour of a RESTful approach, there's some (not unexpected) concerns about efficiency etc. Does anyone have any references to (ideally empirical) studies or comparisons of deployed RESTful systems vs. other approaches? Thanks, Ian
Hi I am not sure where you can get it , but the proceedings of http://eswsa.cs.uiuc.edu/index.html might be useful. (Although I haven't seen them myself) Cheers Devdatta 2009/1/14 Ian Dickinson <i.j.dickinson@...>: > We're discussing architecture choices in our team at the moment, and > while I'm quite strongly in favour of a RESTful approach, there's some > (not unexpected) concerns about efficiency etc. Does anyone have any > references to (ideally empirical) studies or comparisons of deployed > RESTful systems vs. other approaches? > > Thanks, > Ian > >
On 14.01.2009, at 18:54, Ian Dickinson wrote: > while I'm quite strongly in favour of a RESTful approach, there's some > (not unexpected) concerns about efficiency etc No pointers to empirical studies, sorry. But out of curiosity, what are the efficiency concerns? Compared with what? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Wed, Jan 14, 2009 at 6:49 PM, Devdatta <dev.akhawe@...> wrote: > I am not sure where you can get it , but the proceedings of > http://eswsa.cs.uiuc.edu/index.html might be useful. (Although I > haven't seen them myself) Thanks Devdatta - this looks really interesting. I've emailed the site maintainer to see if the proceedings are available anywhere. If I find anything out, I'll post it here. Ian
Hi Stefan, On Wed, Jan 14, 2009 at 7:02 PM, Stefan Tilkov <stefan.tilkov@...> wrote: > No pointers to empirical studies, sorry. But out of curiosity, what are the > efficiency concerns? Compared with what? Well, the issues aren't always that precisely articulated, more just background concerns. However, as part of the project we will be integrating query services and metadata stores produced by independent teams. One of the teams is successfully using SQL over ODBC as their current query interface, and they're wondering what impact moving to a resource-oriented approach will have. This is an enterprise/intranet project, so we have more-or-less complete control over the inter-process interfaces. However, as a design goal we want to be able to bring new types and instances of stores and query services online with minimal effort, which is one reason why I favour a uniform abstraction like REST. The other candidate area for a RESTful interaction is between the rich-client (Flex) UI and the presentation tier. Here the comparison would be against something like Flex Data Services. Regards, Ian
Ian: Just a 'heads-up'... So far I've not found an RIA environment that supports PUT, DELETE or OPTIONS, just GET and POST (sometimes HEAD). Pretty sure FDS falls into that space, too (or it did a while ago). You can get around this limitation by 'overloading POST' w/ custom headers (x-http-method="PUT") but that gets pretty messy (and, IMO, very frustrating). mca http://amundsen.com/blog/ On Wed, Jan 14, 2009 at 18:04, Ian Dickinson <i.j.dickinson@...> wrote: > Hi Stefan, > > On Wed, Jan 14, 2009 at 7:02 PM, Stefan Tilkov <stefan.tilkov@...> wrote: >> No pointers to empirical studies, sorry. But out of curiosity, what are the >> efficiency concerns? Compared with what? > Well, the issues aren't always that precisely articulated, more just > background concerns. However, as part of the project we will be > integrating query services and metadata stores produced by independent > teams. One of the teams is successfully using SQL over ODBC as their > current query interface, and they're wondering what impact moving to a > resource-oriented approach will have. This is an enterprise/intranet > project, so we have more-or-less complete control over the > inter-process interfaces. However, as a design goal we want to be able > to bring new types and instances of stores and query services online > with minimal effort, which is one reason why I favour a uniform > abstraction like REST. > > The other candidate area for a RESTful interaction is between the > rich-client (Flex) UI and the presentation tier. Here the comparison > would be against something like Flex Data Services. > > Regards, > Ian >
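The 'overloading POST' workaround mca describes looks like this on the server side; the header name here (X-HTTP-Method-Override) is one common convention - mca's x-http-method works the same way:

```python
def effective_method(method, headers):
    """Resolve the intended verb for a request from a POST-only client.

    Only POST may be overridden, and only to the verbs the client
    cannot send natively; anything else is taken at face value.
    """
    override = headers.get("X-HTTP-Method-Override", "").upper()
    if method == "POST" and override in ("PUT", "DELETE", "OPTIONS"):
        return override
    return method
```

The messiness mca mentions is real: intermediaries see POST, so caching and safety assumptions about the "real" method are lost.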
vincent.lari wrote:
>
>
>
> > I'm trying to use RESTful design in my projects now, and I'm finding
> > it easier to implement without a "URI hierarchy" ...
>
> A REST architecture does not mandate cool or hierarchical uris.
> Actually, for some, it's preferable to have totally opaque uris.
> This is from Tim Berners-Lee's Axioms of Web Architecture [1]:
>
> "The only thing you can use an identifier for is to refer to an
> object. When you are not dereferencing you should not look at the
> contents of the URI string to gain other information as little as
> possible.
> For the bulk of Web use they are passed around without anyone looking
> at their internal contents, the content of the string itself. This is
> known as the opacity. Software should be made to treat URIs as
> generally as possible, to allow the most reuse of existing or future
> schemes."
>
Is it ok to embed identity inside the data format?
For example:
<order id="333">
<atom:link rel="self" href="http..." type=".."/>
</order>
I know the "self" link is ok, but what about id?
Bill
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
I think you may get varied opinions on this question, but my experience has been that self links don't always help clients deal with resource identity. I have some examples at http://www.subbu.org/blog/2008/12/resource-identity-and-cool-uris . Please note that both Atom and RSS do include resource identifiers separately from links. Also see Bill de hÓra's http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ where he includes atom:id as one of the primitives that needs to be included in representations irrespective of the choice of the format. Subbu On Jan 14, 2009, at 3:24 PM, Bill Burke wrote: > > > vincent.lari wrote: >> >> >> >>> I'm trying to use RESTful design in my projects now, and I'm finding >>> it easier to implement without a "URI hierarchy" ... >> >> A REST architecture does not mandate cool or hierarchical uris. >> Actually, for some, it's preferable to have totally opaque uris. >> This is from Tim Berners-Lee's Axioms of Web Architecture [1]: >> >> "The only thing you can use an identifier for is to refer to an >> object. When you are not dereferencing you should not look at the >> contents of the URI string to gain other information as little as >> possible. >> For the bulk of Web use they are passed around without anyone looking >> at their internal contents, the content of the string itself. This is >> known as the opacity. Software should be made to treat URIs as >> generally as possible, to allow the most reuse of existing or future >> schemes." >> > > Is it ok to embed identity inside the data format? > > For example: > > <order id="333"> > <atom:link rel="self" href="http..." type=".."/> > </order> > > I know the "self" link is ok, but what about id? > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com --- http://subbu.org
OK, so I asked about the proceedings of the Empirical Studies of Web Service Architectures. Here's the reply: > The workshop got canceled in the last moment for unavoidable > reasons, lack of interest being one of them. We did not have any > proceedings. However, I would be interested in talking with you > about research options, if you want. So that's a shame. I'll follow up with Munawar and see if there are any other, er, resources to investigate. Ian On Wed, Jan 14, 2009 at 10:48 PM, Ian Dickinson <i.j.dickinson@...> wrote: > On Wed, Jan 14, 2009 at 6:49 PM, Devdatta <dev.akhawe@...> wrote: >> I am not sure where you can get it , but the proceedings of >> http://eswsa.cs.uiuc.edu/index.html might be useful. (Although I >> haven't seen them myself) > Thanks Devdetta - this looks really interesting. I've emailed the site > maintainer to see if the proceedings are available anywhere. If I find > anything out, I'll post it here. > > Ian >
Solomon Duskis wrote:
> What's the "best practice" for a custom media type for this case? Is
> the following message RESTful enough:
>
> GET/POST /meal/2009-01-14/breakfast
> <meal>
> <date>2009-01-14</date>
> <mealType>breakfast</mealType>
> <foods>
> <meal_food>
> <food id="333"><some_url_namespace_possibly_atom href="http://myserver.com/food/333" /></food>
> <quantity>3</quantity>
> </meal_food>
> </foods>
> </meal>

It looks like the id of "333" is duplicating the (behind the scenes) information that goes to make http://myserver.com/food/333 the URI for the item. If this is the case, I would remove that id.
Solomon Duskis wrote: > Jon, > > I do understand that the id is redundant, and therefore should probably be > taken out. However, the id=333 is useful for server processing. It's a > bit easier for the server-side to use the id instead of the url as part > of a POST. You find the string after "/food/" and parse it to an integer. This is pretty trivial in most languages, differing from that to take just the string "333" and obtain an integer (assuming your DB keys are integers rather than strings where that one just happens to be an integer) only by a substring or reg-exp operation. Conversely, if you have both, which am I meant to use where as a client? If I can use either, you have to do the same work for some requests anyway, so you haven't gained anything. If I am meant to use id some places and the URI other places, then you've complicated things for me, the person who knows the least about your system (the documentation of the API probably being the first I've ever heard about it). Using just the ID takes us away from HATEOAS, but I'd say this is better than having both, though I'd generally only ever do that in an AJAX situation where the XML was being processed by javascript obtained from the same server (while the javascript is client code, it's client-code that comes from the server, so the dependencies not using HATEOAS brings aren't as damaging). Using just the URI considerably increases the likelihood of working out what on earth I should be doing. A nice, though sometimes rare, thing to experience when suddenly given an API doc and a deadline :) Also, if there is more than one possible source of such information, then I'm going to have to introduce something to my records of your identifiers to identify that it isn't just food ID 333 but YOUR food ID 333. So I'm going to have to turn it into something like the URI anyway. > Are there any development frameworks that do a good job of translating > between relational data (id = 333) and RESTful data (/food/333)? 
If I can afford to make an assumption or two then: What doesn't substring do here? It can be even easier in responding to GET if you used http://myserver.com/food?id=333, since most frameworks come with some sort of dictionary view on query-string parameters. Considering that whichever way you do it you're either going to handle a bunch of text with a number (for the time being - a bonus of the URI approach is it's easier to change key formats if needs be) in a particular place, I don't see much advantage of the custom ID format over the URI.
On Thu, Jan 15, 2009 at 11:15 AM, Jon Hanna <jon@...> wrote: > Solomon Duskis wrote: >> Jon, >> >> I do understand that id redundant, and therefore should probably be >> taken out. However, the id=333 is useful for server processing. It's a >> bit easier for the server-side to use the id instead of the url as part >> of a POST. > > You find the string after "/food/" and parse it to an integer. This is > pretty trivial in most languages, differing from that to take just the > string "333" and obtain an integer (assuming your DB keys are integers > rather than strings where that one just happens to be an integer) only > by a substring or reg-exp operation. > > Conversely, if you have both, which am I meant to use where as a client? > FWIW, I have recently moved completely away from exposing internal unique identifiers for resources. An RPC-ish mindset had me always mapping back and forth between urls and id numbers, but a more RESTful approach has me now identifying things at most layers by the uri. This has simplified things greatly. It basically entails a static method on your model objects for instantiation: MyObject::get(<the uri>), and an object method that produces the uri: $obj->getUri. Among other things, 'text/uri-list' has become a favorite for command line batch operations :). --peter keane > If I can use either, you have to do the same work for some requests > anyway, so you haven't gained anything. > > If I am meant to use id some places and the URI other places, then > you've complicated things for me, the person who knows the least about > your system (the documentation of the API probably being the first I've > ever heard about it). 
> > Using just the ID takes us away from HATEOAS, but I'd say this is better > than having both, though I'd generally only ever do that in an AJAX > situation where the XML was being processed by javascript obtained from > the same server (while the javascript is client code, it's client-code > that comes from the server, so the dependencies not using HATEOAS brings > aren't as damaging). > > Using just the URI considerably increases the likelihood of working out > what on earth I should be doing. A nice, though sometimes rare, thing to > experience when suddenly given an API doc and a deadline :) > > Also, if there is more than one possible source of such information, > then I'm going to have to introduce something to my records of your > identifiers to identify that it isn't just food ID 333 but YOUR food ID > 333. So I'm going to have to turn it into something like the URI anyway. > >> Are there any development frameworks that do a good job of translating >> between relational data (id = 333) and RESTful data (/food/333)? > > If I can afford to make an assumption or two then: > > What doesn't substring do here? > > It can be even easier in responding to GET if you used > http://myserver.com/food?id=333, since most frameworks come with some > sort of dictionary view on query-string parameters. > > Considering that whichever way you do it you're either going to handle a > bunch of text with a number (for the time being - a bonus of the URI > approach is it's easier to change key formats if needs be) in a > particular place, I don't see much advantage of the custom ID format > over the URI. > >
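Peter's pattern can be sketched like this (a guess at the shape, not his actual code): the model accepts and emits URIs, and exactly one private spot knows how a URI maps to a storage key - everything else treats the URI as the identity.

```python
BASE = "http://myserver.com"   # pushed down from the web tier

class Food:
    _rows = {333: {"name": "toast"}}   # stand-in for the database

    def __init__(self, key):
        self.key = key

    @classmethod
    def get(cls, uri):
        """Instantiate from a URI - the only place that peeks inside it."""
        key = int(uri.rsplit("/", 1)[-1])
        return cls(key) if key in cls._rows else None

    def get_uri(self):
        """Produce the resource's URI; clients never see the raw key."""
        return "%s/food/%d" % (BASE, self.key)
```

If the key format ever changes, only `get` and `get_uri` move.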
Subbu Allamaraju wrote: > > > I think you may get varied opinions on this question, but my > experience has been that self links don't always help clients deal > with resource identity. I have some examples at > http://www.subbu.org/blog/2008/12/resource-identity-and-cool-uris > . Come to think of it, database IDs *never* change. URIs/URLs, although we don't want them to change, sometimes do - e.g. a stupid admin. Thanks Subbu -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On 15.01.2009, at 18:29, Peter Keane wrote: > FWIW, I have recently moved completely away from exposing internal > unique identifiers for resources. An RPC-ish mindset had me always > mapping back and forth between urls and id numbers, but a more RESTful > approach has me now identifying things at most layers by the uri. > This has simplified things greatly. It basically entails a static > method on your model objects for instantiation: MyObject::get(<the > uri>), and an object method that produces the uri: $obj->getUri. While I like this approach, one downside I find is that I have to push down information from my web tier to the model layer (protocol, host, and possibly some elements of the path). How do you address this? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Fri, Jan 16, 2009 at 2:25 AM, Stefan Tilkov <stefan.tilkov@...> wrote: > On 15.01.2009, at 18:29, Peter Keane wrote: > >> FWIW, I have recently moved completely away from exposing internal >> unique identifiers for resources. An RPC-ish mindset had me always >> mapping back and forth between urls and id numbers, but a more RESTful >> approach has me now identifying things at most layers by the uri. >> This has simplified things greatly. It basically entails a static >> method on your model objects for instantiation: MyObject::get(<the >> uri>), and an object method that produces the uri: $obj->getUri. > > While I like this approach, one downside I find is that I have to push > down information from my web tier to the model layer (protocol, host, > and possibly some elements of the path). How do you address this? > That's true. I typically set a registry variable with that information (protocol + host + base_path). Not ideal, but it seems to work out OK. --peter > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > >
Peter Keane <pkeane@...> writes: > On Fri, Jan 16, 2009 at 2:25 AM, Stefan Tilkov <stefan.tilkov@...> wrote: >> On 15.01.2009, at 18:29, Peter Keane wrote: >> >>> FWIW, I have recently moved completely away from exposing internal >>> unique identifiers for resources. An RPC-ish mindset had me always >>> mapping back and forth between urls and id numbers, but a more RESTful >>> approach has me now identifying things at most layers by the uri. >>> This has simplified things greatly. It basically entails a static >>> method on your model objects for instantiation: MyObject::get(<the >>> uri>), and an object method that produces the uri: $obj->getUri. >> >> While I like this approach, one downside I find is that I have to push >> down information from my web tier to the model layer (protocol, host, >> and possibly some elements of the path). How do you address this? >> > > That's true. I typical set a registry variable with that information > (protocol + host + base_path). Not ideal, but it seems to work out > OK. Why don't you just have a separate URI and URL instead of assuming the URL to be the equivalent of URI? If I were to go down this path, I'd have a URI->URL and URL->URI mappers. YS
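The registry-variable approach Peter describes, combined with YS's suggestion of explicit URI<->ID mappers, can be sketched roughly as follows. This is a minimal illustration only; the base URL and all function names are hypothetical, not taken from any poster's actual code:

```python
# Sketch of a URI <-> internal-ID mapper, assuming a single configurable
# base (protocol + host + base_path) as described in the thread.
# All names here are illustrative, not from any real framework.

BASE = "http://example.org/app"  # set once, e.g. from a config file

def uri_for(resource_type, internal_id):
    """Build the public URI for an internal database id."""
    return f"{BASE}/{resource_type}/{internal_id}"

def id_from(uri):
    """Recover (resource_type, internal_id) from a public URI."""
    if not uri.startswith(BASE + "/"):
        raise ValueError("URI outside this service's base")
    resource_type, internal_id = uri[len(BASE) + 1:].split("/", 1)
    return resource_type, internal_id
```

For example, `uri_for("person", 101)` yields `http://example.org/app/person/101`, and `id_from` inverts it. Keeping both directions in one place is what lets the model layer traffic purely in URIs while only this module knows the base address.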
On 15 Jan 2009, at 16:23, Solomon Duskis wrote: > > Are there any development frameworks that do a good job of > translating between relational data (id = 333) and RESTful data (/ > food/333)? I'm looking for advice on how to extend a Java-based > framework to deal with this issue. Most of the development > frameworks that I've seen do put in id and don't even bother to put > in a URL. Solomon, This class is used by the Restlet framework internally to match embedded ids in URLs, but there's no reason why you can't just use it directly or just look at the source. http://www.restlet.org/documentation/1.1/api/org/restlet/util/Template.html Malcolm
Converted my response to a blog post at http://alandean.blogspot.com/2009/01/http-i18n-patterns.html Regards, Alan Dean http://twitter.com/adean On Sun, Jan 11, 2009 at 11:00 AM, Alan Dean <alan.dean@...> wrote: > I haven't read the full thread in all details but here are my thoughts > on internationalization (i18N) based upon over a decade of experience. > > 1. Don't confuse translation with localization (l10N). > > 2. If you have a page resource which has had its UI elements > localized, for example a data entry form, then you are looking at one > resource with multiple language-specific representations that you can > conneg to. > > 3. l10N conneg can be either language-driven from the Accept-Language > header and/or country-driven from the Request IP address. Both are > valid localizations and both can be used in concert. For example, the > price of goods and services are country-specific localizations not > language-specific (both in terms of the currency used and the tax > regime to be applied). > > 4. If you are looking at translations, then you have different > resources. For example, Tolstoy wrote in Russian. If I don't speak > Russian or read Cyrillic, then I should have a "Read the English > translation link" I can click. Notice that the UI elements within > which the Tolstoy text appears can still be localized (so I see the > available translations listed in a language I can understand). > > 5. 
With the health warning that URI structure is orthogonal to REST, > and whilst the URI is opaque architecturally nonetheless > human-readable URIs are good; this is my personal preference for URI > structure: > > example.com/localized-resource => the resource URI (no representation) > example.com/localized-resource.en | .en-gb | .fr | .de => the > language conneg URI (no representation) > example.com/localized-resource.en-gb.html | .fr.html => the > localized html representation > > if there is an IP-driven country-specific localization: > > example.co.uk/localized-resource.en-gb.html | .fr.html => the > localized html representation for the UK > > when it comes to translations, use the same structure as above except > that each translation will have a different name, e.g. > > example.co.uk/tolstoy-in-russian.en-gb.html | .fr.html => the > localized html representation for the UK but with Russian content > example.co.uk/tolstoy-in-english.en-gb.html | .fr.html => the > localized html representation for the UK but with the english content > translation shown > > Note to self - I should blog this. > > Regards, > Alan Dean > http://twitter.com/adean > > On Sun, Jan 11, 2009 at 8:34 AM, Subbu Allamaraju <subbu@...> wrote: >> >> On Jan 10, 2009, at 5:04 PM, Aristotle Pagaltzis wrote: >> >>> All of your charges are valid insofar as my argument wasn't >>> particularly solidly constructed, but I stand by my overall >>> thrust that language conneg as per RFC 2616 is a complicated but >>> insufficient solution to a non-problem that is actually a feature >>> with many advantages. >> >> It is actually the other way around. Changing the language in a UA is >> the inconvenient part, and it is likely because users don't change >> their language selection at the OS or UA level often. Language >> negotiation itself is not the complicated part. >> >> To answer the original question, the solution really depends on the >> client-side usage. 
If the clients are machines, and are capable of >> negotiation, I would stick with language negotiation. If, on the other >> hand, the application is user-facing, and users are required to switch >> between languages often for this specific application (for whatever >> reasons), then provide links to switch between languages. >> >> Subbu >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> >
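The language-driven conneg in point 3 above comes down to parsing the Accept-Language header and choosing the best match among the representations you actually have. A minimal sketch, deliberately simplified relative to RFC 2616 (exact tag matching only, no wildcards or prefix fallback; the function name is illustrative):

```python
def pick_language(accept_language, available):
    """Pick the best available language for an Accept-Language header.

    Simplified: exact tag matching only, highest q-value wins.
    Returns None if nothing acceptable is available.
    """
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip().split(";")
        tag = piece[0].strip().lower()
        q = 1.0  # default quality per RFC 2616
        for param in piece[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((q, tag))
    for _, tag in sorted(prefs, reverse=True):
        if tag in available:
            return tag
    return None
```

So a client sending `Accept-Language: en-gb,en;q=0.8,fr;q=0.5` against a service holding only `{"en", "fr"}` representations would get the `en` one. A production implementation would also handle `*` and prefix matches like `en` covering `en-gb`.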
Also added to rest patterns wiki now http://restpatterns.org/Articles/HTTP_i18N_Patterns On Sun, Jan 11, 2009 at 8:24 PM, Alan Dean <alan.dean@...> wrote: > Note to self: add to RestPatterns wiki also! > > On Sun, Jan 11, 2009 at 8:04 PM, Steve Bjorg <steveb@...> wrote: >> Alan, >> >> Great write up! You should wiki it! ;) >> >> - Steve >> >> -------------- >> Steve G. Bjorg >> http://mindtouch.com >> http://twitter.com/bjorg >> irc.freenode.net #mindtouch >> >> On Jan 11, 2009, at 3:00 AM, Alan Dean wrote: >> >>> I haven't read the full thread in all details but here are my thoughts >>> on internationalization (i18N) based upon over a decade of experience. >>> >>> 1. Don't confuse translation with localization (l10N). >>> >>> 2. If you have a page resource which has had it's UI elements >>> localized, for example a data entry form, then you are looking at one >>> resource with multiple language-specific representation that you can >>> conneg to. >>> >>> 3. l10N conneg can be either language-driven from the Accept-Language >>> header and/or country-driven from the Request IP address. Both are >>> valid localizations and both can be used in concert. For example, the >>> price of goods and services are country-specific localizations not >>> language-specific (both in terms of the currency used and the tax >>> regime to be applied). >>> >>> 4. If you are looking at translations, then you have different >>> resources. For example, Tolstoy wrote in Russian. If I don't speak >>> Russian or read Cyrillic, then I should have a "Read the English >>> translation link" I can click. Notice that the UI elements within >>> which the Tolstoy text appears can still be localized (so I see the >>> available translations listed in a language I can understand). >>> >>> 5. 
With the health warning that URI structure is orthogonal to REST, >>> and whilst the URI is opaque architecturally nonetheless >>> human-readable URIs are good; this is my personal preference for URI >>> structure: >>> >>> example.com/localized-resource => the resource URI (no representation) >>> example.com/localized-resource.en | .en-gb | .fr | .de => the >>> language conneg URI (no representation) >>> example.com/localized-resource.en-gb.html | .fr.html => the >>> localized html representation >>> >>> if there is an IP-driven country-specific localization: >>> >>> example.co.uk/localized-resource.en-gb.html | .fr.html => the >>> localized html representation for the UK >>> >>> when it comes to translations, use the same structure as above except >>> that each translation will have a different name, e.g. >>> >>> example.co.uk/tolstoy-in-russian.en-gb.html | .fr.html => the >>> localized html representation for the UK but with Russian content >>> example.co.uk/tolstoy-in-english.en-gb.html | .fr.html => the >>> localized html representation for the UK but with the english content >>> translation shown >>> >>> Note to self - I should blog this. >>> >>> Regards, >>> Alan Dean >>> http://twitter.com/adean >>> >>> On Sun, Jan 11, 2009 at 8:34 AM, Subbu Allamaraju <subbu@...> wrote: >>>> >>>> On Jan 10, 2009, at 5:04 PM, Aristotle Pagaltzis wrote: >>>> >>>>> All of your charges are valid insofar as my argument wasn't >>>>> particularly solidly constructed, but I stand by my overall >>>>> thrust that language conneg as per RFC 2616 is a complicated but >>>>> insufficient solution to a non-problem that is actually a feature >>>>> with many advantages. >>>> >>>> It is actually the other way around. Changing the language in a UA is >>>> the inconvenient part, and it is likely because users don't change >>>> their language selection at the OS or UA level often. Language >>>> negotiation itself is not the complicated part. 
>>>> >>>> To answer the original question, the solution really depends on the >>>> client-side usage. If the clients are machines, and are capable of >>>> negotiation, I would stick with language negotiation. If, on the other >>>> hand, the application is user-facing, and users are required to switch >>>> between languages often for this specific application (for whatever >>>> reasons), then provide links to switch between languages. >>>> >>>> Subbu >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>>> >> >> >
> Come to think of it, database IDs *never* change.
Within the context of one database and one version of your app, this may
well be right. However, your resource design should be able to survive a
migration / new schema / new database etc.
I tend to persist URI fragments that make sense from a URI template POV, and
rely on the handler mechanism to resolve those. So instead of relying on an
ID, for customers I'd have /film/{name} and {name} is a URI specific
identifier that helps resolve the URI.
I'm not a big fan of persisting full URIs as most of the work done in
OpenRasta is to keep URI maintenance to a minimum.
Seb
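The /film/{name} handler resolution Seb describes can be sketched with a tiny URI-template matcher. Frameworks such as OpenRasta or Restlet's Template class do this for you; the core mechanics, with hypothetical helper names, are roughly:

```python
import re

def template_to_regex(template):
    """Turn a URI template like '/film/{name}' into a compiled regex
    with one named group per {variable}."""
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    return re.compile("^" + pattern + "$")

def match(template, path):
    """Return the template variables bound by `path`, or None."""
    m = template_to_regex(template).match(path)
    return m.groupdict() if m else None
```

A dispatcher then tries each registered template in turn; `match("/film/{name}", "/film/blade-runner")` binds `{"name": "blade-runner"}`, which is exactly the URI-specific identifier persisted instead of a raw database ID.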
All, I have now got the video of the "Separating REST Facts from Fallacies" [1] presentation I gave at DDD7 [2] in November. Microsoft hosted the conference and provided the video technicians (which is why there are MS disclaimers plastered on the video). There is an embedded flash stream video [3] version, weighing in at 178MB (so please be a little patient whilst it loads) and a WMV version (290MB) for easy download [4]. I am interested in feedback as I have four confirmed user group bookings for this talk in the UK in the next 2/3 months and have proposed it for two further conferences (alongside "Delivering RESTful systems with Microsoft Azure" as a double session). [1] http://www.slideshare.net/alan.dean/separating-rest-facts-from-fallacies-presentation [2] http://www.developerday.co.uk/ddd/agendaddd7lineup.asp [3] http://thoughtpad.net/alan-dean/separating-rest-facts-from-fallacies/ddd7.htm [4] http://thoughtpad.net/alan-dean/separating-rest-facts-from-fallacies/ddd7.wmv Regards, Alan Dean http://twitter.com/adean
Update: I have posted this on my blog now: http://alandean.blogspot.com/2009/01/video-of-rest-at-ddd7.html If you share a link, can I request that you share the blog link please :-) Alan On Sun, Jan 18, 2009 at 2:12 PM, Alan Dean <alan.dean@...> wrote: > All, > > I have now got the video of the "Separating REST Facts from Fallacies" > [1] presentation I gave at DDD7 [2] in November. Microsoft hosted the > conference and provided the video technicians (which is why there are > MS disclaimers plastered on the video). > > There is an embedded flash stream video [3] version, weighing in at > 178MB (so please be a little patient whilst it loads) and a WMV > version (290MB) for easy download [4]. > > I am interested in feedback as I have four confirmed user group > bookings for this talk in the UK in the next 2/3 months and have > proposed it for two further conferences (alongside "Delivering RESTful > systems with Microsoft Azure" as a double session). > > [1] http://www.slideshare.net/alan.dean/separating-rest-facts-from- > fallacies-presentation > > [2] http://www.developerday.co.uk/ddd/agendaddd7lineup.asp > > [3] http://thoughtpad.net/alan-dean/separating-rest-facts-from- > fallacies/ddd7.htm > > [4] http://thoughtpad.net/alan-dean/separating-rest-facts-from- > fallacies/ddd7.wmv > > Regards, > Alan Dean > http:twitter.com/adean > >
Stefan Tilkov wrote: > > > On 15.01.2009, at 18:29, Peter Keane wrote: > > > FWIW, I have recently moved completely away from exposing internal > > unique identifiers for resources. An RPC-ish mindset had me always > > mapping back and forth between urls and id numbers, but a more RESTful > > approach has me now identifying things at most layers by the uri. > > This has simplified things greatly. It basically entails a static > > method on your model objects for instantiation: MyObject::get(<the > > uri>), and an object method that produces the uri: $obj->getUri. > > While I like this approach, one downside I find is that I have to push > down information from my web tier to the model layer (protocol, host, > and possibly some elements of the path). How do you address this? I've used config files with a base URL which can be based on URI templates. Or with more modern APIs* like JSR-311 you have an annotation that is populated with the base URL and can be passed along. IIRC, Django had to do some work to allow URL resolution in view templates. Great question, we should have a pattern for how this works with standard software engineering layering practices, as it can mean opening a hole in the domain/facade/repository layers. Bill * can this be the year we drop "API" from "ReST API"? Please? ;)
On Fri, Jan 16, 2009 at 8:34 AM, Peter Keane <pkeane@...> wrote: > On Fri, Jan 16, 2009 at 2:25 AM, Stefan Tilkov <stefan.tilkov@...> wrote: >> On 15.01.2009, at 18:29, Peter Keane wrote: >> >>> FWIW, I have recently moved completely away from exposing internal >>> unique identifiers for resources. An RPC-ish mindset had me always >>> mapping back and forth between urls and id numbers, but a more RESTful >>> approach has me now identifying things at most layers by the uri. >>> This has simplified things greatly. It basically entails a static >>> method on your model objects for instantiation: MyObject::get(<the >>> uri>), and an object method that produces the uri: $obj->getUri. >> >> While I like this approach, one downside I find is that I have to push >> down information from my web tier to the model layer (protocol, host, >> and possibly some elements of the path). How do you address this? >> > > That's true. I typical set a registry variable with that information > (protocol + host + base_path). Not ideal, but it seems to work out > OK. > Actually, you are right to ask -- I am finding it not so simple as it first appears in many cases (my config not abstracted enough). I'll look forward to seeing good practices/patterns emerge. --peter > --peter > >> Stefan >> -- >> Stefan Tilkov, http://www.innoq.com/blog/st/ >> >> >
the homebrew library i use (C# on windows) has a dispatcher, a class for each resource, and uses XSLT transforms as views to produce the requested representation. The classes have attributes for URI templates and media types (similar to the Jersey pattern). the XSLT calls send a full set of the current Request meta data (headers, URI w/ some pre-parsed bits for host, etc.) to the view and that's where i produce my ID/URI details. this gives me a nice abstraction of the meta data that is neatly resolved at runtime. mca http://amundsen.com/blog/ On Wed, Jan 21, 2009 at 18:53, Peter Keane <pkeane@...> wrote: > On Fri, Jan 16, 2009 at 8:34 AM, Peter Keane <pkeane@...> wrote: >> On Fri, Jan 16, 2009 at 2:25 AM, Stefan Tilkov <stefan.tilkov@...> wrote: >>> On 15.01.2009, at 18:29, Peter Keane wrote: >>> >>>> FWIW, I have recently moved completely away from exposing internal >>>> unique identifiers for resources. An RPC-ish mindset had me always >>>> mapping back and forth between urls and id numbers, but a more RESTful >>>> approach has me now identifying things at most layers by the uri. >>>> This has simplified things greatly. It basically entails a static >>>> method on your model objects for instantiation: MyObject::get(<the >>>> uri>), and an object method that produces the uri: $obj->getUri. >>> >>> While I like this approach, one downside I find is that I have to push >>> down information from my web tier to the model layer (protocol, host, >>> and possibly some elements of the path). How do you address this? >>> >> >> That's true. I typical set a registry variable with that information >> (protocol + host + base_path). Not ideal, but it seems to work out >> OK. >> > > Actually, you are right to ask -- I am finding it not so simple as it > first appears in many cases (my config not abstracted enough). I'll > look forward to seeing good practices/patterns emerge. 
> > --peter > > >> --peter >> >>> Stefan >>> -- >>> Stefan Tilkov, http://www.innoq.com/blog/st/ >>> >>> >> > > ------------------------------------ > > Yahoo! Groups Links > > > >
I've been reading the Data format of updates
<http://tech.groups.yahoo.com/group/rest-discuss/message/11940> thread
with interest because it touches on something I think is quite
fundamental to a certain class of RESTful applications (namely, ones
that operate very frequently on many different resources with lots and
lots of modifications).
In some ways it's kind of a "last mile" problem of URI design because it
deals with figuring out where to draw the line between what's
addressable and what's not. The key question seems to be: how does one
deal in a RESTful way with resources that are (a) potentially large
and/or processing-intensive to produce representations for, and that (b)
clients probably only need to operate on small parts of at one time.
Take the common example of a customer resource represented as a
custom-defined XML document:
http://example.org/customer/123
<customer>
<name>
<first-name>Bill</first-name>
<last-name>Brasky</last-name>
</name>
<address>
<street>123 Nowhere Lane</street>
<city>Metropolis</city>
<country>Canada</country>
</address>
...
...
</customer>
There could be any amount of information in this Customer document.
Imagine that this document does not map 100% to persisted storage but
rather some or all of it may be dynamically generated by the server at
GET time. Some of the dynamically calculated properties are relatively
expensive to calculate, so you only want to ask for them when you know
for sure that you need to see them.
The logical answer seems to be to rethink the resource breakdown and,
perhaps, move the addressable boundary one or more levels lower to
address not just customers, but specific characteristics (or properties)
of customers. Take the following:
1) http://example.org/customer/123 ==> default view gets you
everything
2) http://example.org/customer/123/name ==> returns just
<customer><name>...</name></customer>
3) http://example.org/customer/123/address ==> same as #2
except returns address instead of name
4) http://example.org/customer/123/name;address ==> returns
both name and address
5) http://example.org/customer/123/address;name ==> same as #4
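A hedged sketch of how a server might serve the scheme above: treat the final path segment as a ";"-separated list of property names and filter the full representation accordingly. The data and function names are purely illustrative:

```python
# Illustrative customer data, mirroring the XML document above.
FULL_CUSTOMER = {
    "name": {"first-name": "Bill", "last-name": "Brasky"},
    "address": {"street": "123 Nowhere Lane",
                "city": "Metropolis", "country": "Canada"},
    # ... potentially many more properties, some expensive to compute
}

def customer_view(properties=None):
    """Return the whole customer, or only the requested properties.

    `properties` is the final path segment, e.g. "name;address".
    Segment order (#4 vs #5 above) does not matter.
    """
    if not properties:
        return dict(FULL_CUSTOMER)  # default view gets you everything
    wanted = set(properties.split(";"))
    return {k: v for k, v in FULL_CUSTOMER.items() if k in wanted}
```

Because the filter is a set membership test, `name;address` and `address;name` produce identical representations, which is what makes those two URIs address the same view of the same resource.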
To some degree, this could be viewed as a form of batched GET, but as long
as all of the elements in the customer document are optional (a customer
is still a customer even with just a fragment of his/her information)
the representation returned is still a valid customer document and can
be treated as such.
The first thing that jumps out at me here is that there are many
different URIs that address exactly the same view of exactly the same
resource. Based on my understanding of REST, this is an OK thing to do
as long as you have good hypermedia support assisting clients with the
state transitions. Something based on URI templates might be a good fit
here.
But what about modifications? If I'm just changing the customer name I
don't want to have to GET the entire customer representation, nor do I
want to have to PUT the entire thing back with only the name changed.
Doing these things seems hugely inefficient especially when extrapolated
to a large scale.
I've seen PATCH recommended for use, but this only helps you with the
partial update problem. There are really two important sides to this
coin that both need to be considered:
1) Limiting the resource view to just the part you care about when
GETting.
2) PUTting only what you actually change (with some sensible
boundary constraints in the design)
PATCH seems to only address #2. I guess you could use the URI scheme I
lay out above and make anything below "123" GET-only resources. Then
any PATCH would be sent to the customer-level resource itself
(.../customer/123). Seems a bit inconsistent. Not to mention the fact
that PATCH doesn't seem to actually exist in practice yet.
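For side #2, one way to picture a PATCH-style partial update is a recursive merge of a partial document into the stored state. This is only an illustration of the idea; PATCH semantics were not standardized at the time of this thread, and the merge rules shown (absent keys mean "leave alone") are an assumption, not a spec:

```python
def apply_patch(resource, patch):
    """Recursively merge `patch` into `resource`, returning a new dict.

    Keys present in the patch overwrite (or recurse into) the stored
    values; keys absent from the patch are left untouched -- which is
    exactly what distinguishes this from a full-document PUT.
    """
    merged = dict(resource)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_patch(merged[key], value)
        else:
            merged[key] = value
    return merged
```

So a patch body containing only a new first name changes just that field, while the rest of the customer (including any server-maintained version or timestamp property) is carried forward unchanged by the client's request.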
It seems like allowing both PUT and GET all the way down to the property
level is better, but it's not without its own complications. For
example, can I even use PUT correctly here? Modifying a property (say,
the name property) is actually modifying the customer resource itself.
Maybe it updates a timestamp property or version property on the
customer resource automatically. Does this mean that we now have to
fall back to POST instead of PUT? I'm not sure.
The bottom line is that I think this concept of (for lack of a better
term) "partial resource views" is closely related to the contentious
issue of partial resource updates. It also seems closely related to the
equally contentious issue of batching operations in general. In systems
where a large number of resources are being operated on and resource
changes are happening frequently, there's no choice but to address these
issues. Pushing the batching into the resource addressing seems RESTful
to me, but I'm not sure.
I know Joe Gregorio's proposal for partial updates
<http://bitworking.org/news/296/How-To-Do-RESTful-Partial-Updates>
received some heat when it was originally posted but, personally, I
thought it was a very clever approach. I understood why people didn't
like him mucking with Atom in his example, but I didn't really get why
it wasn't considered RESTful as a more general solution. It seemed very
aligned with the principles of hypermedia to me. One thing in
particular I liked about it is that it has the potential to address both
sides of the issue I'm describing here. You could apply this same
principle to partial reads as well as partial updates.
Your thoughts on this subject are greatly appreciated. Thanks for your
time!
scott
On 23.01.2009, at 00:27, scameron02 wrote: > It seems like allowing both PUT and GET all the way down to the > property level is better, but it's not without its own > complications. For example, can I even use PUT correctly here? > Modifying a property (say, the name property) is actually modifying > the customer resource itself. Maybe it updates a timestamp property > or version property on the customer resource automatically. Does > this mean that we now have to fall back to POST instead of PUT? I'm > not sure. It seems perfectly fine to me to modify a resource via PUT (or POST), and have another resource get updated as a side effect. While this reduces visibility (e.g. in terms of cache invalidation), and the two resources have to be under the control of the same server for this to work, I don't see a way around this in practice. So I think it's unusual at all to to PUT new data to /customer/123/address and find the time stamp returned as part of a GET on /customer/123 has changed. In fact this is what happens when you update a file in a directory, and data contained in the directory listing (such as the file's time stamp) changes. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On 23.01.2009, at 08:30, Stefan Tilkov wrote: > So I think it's unusual at all to to PUT new data to /customer/123/address > and find the time stamp returned as part of Of course I meant "So I think it's NOT unusual at all to to PUT new data to /customer/123/address and find the time stamp returned as part of ..." Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
> This has simplified things greatly. It basically entails a static > method on your model objects for instantiation: MyObject::get(<the > uri>), and an object method that produces the uri: $obj->getUri. In OpenRasta, getting the URI for any resource uses an extension method CreateUri() that automatically resolves the base address and find the most probable URI. It leaves URI concerns at the web layer, where I believe they belong. Seb
scameron02 wrote: > But what about modifications? If I'm just changing the customer name I > don't want to have to GET the entire customer representation, nor do I > want to have to PUT the entire thing back with only the name changed. > Doing these things seems hugely inefficient especially when extrapolated > to a large scale. > > I've seen PATCH recommended for use, but this only helps you with the > partial update problem. There are really two important sides to this > coin that both need to be considered: > > 1) Limiting the resource view to just the part you care about when > GETting. Query parameters are one option. There is a downside to allowing clients to negotiate subsets of representations. It can put a lot of pressure on the database/fs/whatever backend. This has nothing to do with the network interface, but it's good to have an interface that will scale. > 2) PUTting only what you actually change (with some sensible > boundary constraints in the design) Forms POST is an option; I'd argue it's the de-facto standard for partial updates. One of the reasons why formats like Atom lean to a patch-like update is because their schema can require you to send data whether or not it's being changed. The logic can be a bit goofy to implement for deletion - eg, does the absence of an atom:category or a geo:* tag mean removal? PUT semantics suggest yes it does, but it's not intuitive to many developers (client or server side) who were brought up on forms. I'd go as far as saying that supporting PUT on a LAMP stack isn't much fun. > PATCH seems to only address #2. I guess you could use the URI scheme I > lay out above and make anything below "123" GET-only resources. Then > any PATCH would be sent to the customer-level resource itself > (.../customer/123). Seems a bit inconsistent. Not to mention the fact > that PATCH doesn't seem to actually exist in practice yet. I'm not sold on the sub-resource thing at all. 
I think if you want that, then use RDF, which has the sufficient conceptual weight to decompose data, and isn't muddied by syntax specifics. > It seems like allowing both PUT and GET all the way down to the property > level is better, but it's not without its own complications. For > example, can I even use PUT correctly here? Modifying a property (say, > the name property) is actually modifying the customer resource itself. > Maybe it updates a timestamp property or version property on the > customer resource automatically. Does this mean that we now have to > fall back to POST instead of PUT? I'm not sure. > > The bottom line is that I think this concept of (for lack of a better > term) "partial resource views" is closely related to the contentious > issue of partial resource updates. It also seems closely related to the > equally contentious issue of batching operations in general. In systems > where a large number of resources are being operated on and resource > changes are happening frequently, there's no choice but to address these > issues. Pushing the batching into the resource addressing seems RESTful > to me, but I'm not sure. Batch update I think is different from partial update. Given that HTTP+URIs result in a kind of distributed hashmap, it's safe to say it's just not supported in the architecture. It took years to figure out how to do iterators and aggregation in HTTP (batch reads, if you like). Bill
On Jan 24, 2009, at 5:16 AM, Bill de hOra wrote: > Forms POST is an option; I'd argue it's the de-facto standard for > partial updates. Could you explain which part of HTML Forms or POST makes it the de-facto standard for partial updates? From the client's point of view, even when it is an HTML forms-capable application, application/x-www-form-urlencoded encoding does not imply anything about optionality of encoded parameters. Whether a partially filled form submit succeeds is application specific, and even us mortals can't deal with partial forms correctly without explicit hints in the markup. > I'm not sold on sub-resource thing at all. I think if you want that, > then use RDF, which has the sufficient conceptual weight to decompose > data, and isn't muddied by syntax specifics. Also curious about your comment on sub-resources. A sub-resource is a resource on its own, and a PUT should be fine to update it. By way of links, the server can decouple the client from having to assume that a resource is part of another resource. Subbu
Subbu Allamaraju wrote: > > On Jan 24, 2009, at 5:16 AM, Bill de hOra wrote: > >> Forms POST is an option; I'd argue it's the de-facto standard for >> partial updates. > > Could you explain which part of HTML Forms or POST makes it the de-facto > standard for partial updates? What else is as widely deployed that is used to solve this problem? > From the client's point of view, even when > it is an HTML forms-capable application, > application/x-www-form-urlencoded encoding does not imply anything about > optionality of encoded parameters. Whether a partially filled form > submit succeeds is application specific, and even us mortals can't deal > with partial forms correctly without explicit hints in the markup. I didn't say it was ideal, I said that it's the most widely deployed mechanism. >> I'm not sold on sub-resource thing at all. I think if you want that, >> then use RDF, which has the sufficient conceptual weight to decompose >> data, and isn't muddied by syntax specifics. > > Also curious about your comment on sub-resources. A sub-resource is a > resource on its own, and a PUT should be fine to update it. By way of > links, the server can decouple the client from having to assume that a > resource is part of another resource. I don't see this being any better than what forms does today. Are we going to put rel attributes on every field to reduce coupling? The XML approaches I've seen are very messy. One of the thoughts I've had on this is that Data APIs do not need to be "symmetric". Having a different representation for serving needs and posting can reduce complexity. Bill
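The forms-POST convention Bill describes, and the ambiguity Subbu points out, can be made concrete with a small sketch: decode the urlencoded body and change only the fields that were actually submitted. Treating absence as "leave unchanged" is an application-specific convention, exactly as noted above, not something the media type guarantees:

```python
from urllib.parse import parse_qs

def apply_form_post(resource, body):
    """Apply an application/x-www-form-urlencoded update.

    Only fields present in the body are changed; absent fields are
    treated as "leave alone" -- an application-specific convention
    the media type itself says nothing about.
    """
    updated = dict(resource)
    for field, values in parse_qs(body).items():
        updated[field] = values[0]  # ignore repeated fields for brevity
    return updated
```

Note the deletion problem Bill raises is visible here too: there is no way for this handler to distinguish "field omitted" from "field should be removed" without an out-of-band rule.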
The video of an introductory REST talk I did at QCon London 2008 is up on InfoQ: http://www.infoq.com/presentations/qcon-tilkov-rest-intro Feedback is very welcome, either here, there, or offline. Thanks, Stefan
Please forgive this intrusion from an "outsider" to this list. I've followed the list for over a year now, but I've never posted, and I know the subject of this message may cause some commotion, but that's not my intent, so please don't mistake me for a troll. This is a genuine question. Is the design of the URL important to the REST architectural style? For a long time, I thought it was because I heard others saying it was, and I see a lot of people talking about designing "RESTful URLs," but the more I dig into Roy's dissertation and the more I think about it, can a URL even be called "RESTful?" It seems to me that the term doesn't apply to URLs. Sure, REST has the concept of resources, and resources have addresses, and those addresses on the Web are Uniform Resource Identifiers, but is the design of those identifiers important to REST? I'm asking because I truly want to know if, over time, the design of the identifier has become a part of what it means to be RESTful. Perhaps it's become part of the concepts of REST over time as the community has evolved and discourse has taken place to define what is and isn't RESTful. That's why I'm asking here. When I discuss REST on my blog, in articles, in presentations, etc., is it important for me to also discuss the design of the URL? By the way, I do think well-designed URLs are important. That's not the question I'm asking. I'm asking whether they are part of REST. -Ben
On Sat, Jan 31, 2009 at 10:17 PM, Ben Ramsey <benramsey.lists@...> wrote: ... > Is the design of the URL important to the REST architectural style? > > For a long time, I thought it was because I heard others saying it > was, and I see a lot of people talking about designing "RESTful URLs," > but the more I dig into Roy's dissertation and the more I think about > it, can a URL even be called "RESTful?" It seems to me that the term > doesn't apply to URLs. > If your webapp's URLs look like verbs, generally, it's not likely to end up a RESTful application. As far as the structure of the URL goes, there's nothing wrong with URL query parameters strictly speaking, but you do tend to find them in apps that have verby base URLs. I use them from time to time. Hugh
On 1/31/09 11:27 PM, Hugh Winkler wrote: > On Sat, Jan 31, 2009 at 10:17 PM, Ben Ramsey <benramsey.lists@...> wrote: > ... >> Is the design of the URL important to the REST architectural style? >> >> For a long time, I thought it was because I heard others saying it >> was, and I see a lot of people talking about designing "RESTful URLs," >> but the more I dig into Roy's dissertation and the more I think about >> it, can a URL even be called "RESTful?" It seems to me that the term >> doesn't apply to URLs. >> > > If your webapp's URLs look like verbs, generally, it's not likely to > end up a RESTful application. Just for the sake of argument, why does it matter that the URL has verbs in it? After all, HTTP is really the RESTful protocol being used, so the verbs are GET, POST, PUT, DELETE, etc., and even though the URL has verbs in it, it's still just the address for the resource, right? I can, however, see the argument against verbs in the URL (I think), since the implication is that the URLs then become the API to the application, causing a diversity of actions, rather than a diversity of resources. Is this the main argument against using verbs in the URL? -Ben
My admittedly non-expert take is that while URL design is indeed something important to think about in designing an application, it also presents a temptation to ignore the HATEOAS (hypertext as the engine of application state) constraint of REST. This is something I have come to understand/appreciate more fully recently. If your API docs describe a bunch of URL patterns, that's a code smell (in the sense of REST), since interactions should all be hypertext driven, not URL construction driven. The client should only need to know about the mime types and the link relations offered by the service to make use of it. My own take is that while well-constructed URLs may be the mark of a well-designed application, they do not have much to do with REST. I highly recommend a recent blog post by Roy Fielding [1] on this matter. --peter keane [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven On Sat, Jan 31, 2009 at 10:17 PM, Ben Ramsey <benramsey.lists@...> wrote: > Please forgive this intrusion from an "outsider" to this list. I've > followed the list for over a year now, but I've never posted, and I > know the subject of this message may cause some commotion, but that's > not my intent, so please don't mistake me for a troll. This is a > genuine question. > > Is the design of the URL important to the REST architectural style? > > For a long time, I thought it was because I heard others saying it > was, and I see a lot of people talking about designing "RESTful URLs," > but the more I dig into Roy's dissertation and the more I think about > it, can a URL even be called "RESTful?" It seems to me that the term > doesn't apply to URLs. > > Sure, REST has the concept of resources, and resources have addresses, > and those addresses on the Web are Uniform Resource Identifiers, but > is the design of those identifiers important to REST? 
> > I'm asking because I truly want to know if, over time, the design of > the identifier has become a part of what it means to be RESTful. > Perhaps it's become part of the concepts of REST over time as the > community has evolved and discourse has taken place to define what is > and isn't RESTful. That's why I'm asking here. > > When I discuss REST on my blog, in articles, in presentations, etc., > is it important for me to also discuss the design of the URL? > > By the way, I do think well-designed URLs are important. That's not the > question I'm asking. I'm asking whether they are part of REST. > > -Ben > >
On Sat, Jan 31, 2009 at 10:35 PM, Ben Ramsey <benramsey.lists@...> wrote: > On 1/31/09 11:27 PM, Hugh Winkler wrote: >> >> On Sat, Jan 31, 2009 at 10:17 PM, Ben Ramsey <benramsey.lists@...> >> wrote: >> ... >>> >>> Is the design of the URL important to the REST architectural style? >>> >>> For a long time, I thought it was because I heard others saying it >>> was, and I see a lot of people talking about designing "RESTful URLs," >>> but the more I dig into Roy's dissertation and the more I think about >>> it, can a URL even be called "RESTful?" It seems to me that the term >>> doesn't apply to URLs. >>> >> >> If your webapp's URLs look like verbs, generally, it's not likely to >> end up a RESTful application. > > Just for the sake of argument, why does it matter that the URL has verbs in > it? After all, HTTP is really the RESTful protocol being used, so the verbs > are GET, POST, PUT, DELETE, etc., and even though the URL has verbs in it, > it's still just the address for the resource, right? > > I can, however, see the argument against verbs in the URL (I think), since > the implication is that the URLs then become the API to the application, > causing a diversity of actions, rather than a diversity of resources. Is > this the main argument against using verbs in the URL? Yep. If your URLs describe a suite of actions, with parameters in the query string, then you're just building a RPC application. You're defining a custom interface for making your app do things, rather than sticking to the uniform interface defined by HTTP. > > -Ben > >
On Sat, Jan 31, 2009 at 11:02:59PM -0600, Hugh Winkler wrote: > Yep. If your URLs describe a suite of actions, with parameters in the > query string, then you're just building a RPC application. You're > defining a custom interface for making your app do things, rather than > sticking to the uniform interface defined by HTTP. I think you've thrown the baby out with the bath water. URIs are opaque. That means that, technically, it doesn't matter what you use at all. You could design the most insanely RESTful interface using only verbs for your URIs, or using GUIDs. I think the important piece of advice would be that you should strongly consider why you want to use verbs in the first place, given the HTTP method is already doing that for you. -- Noah Slater, http://tumbolia.org/nslater
On 01.02.2009, at 05:17, Ben Ramsey wrote: > Please forgive this intrusion from an "outsider" to this list. I've > followed the list for over a year now, but I've never posted, and I > know the subject of this message may cause some commotion, but that's > not my intent, so please don't mistake me for a troll. This is a > genuine question. > > Is the design of the URL important to the REST architectural style? > > No. [...] > > By the way, I do think well-designed URLs are important. That's not > the > question I'm asking. I'm asking whether they are part of REST. > > I think your understanding is perfectly right: Well-designed URIs are great, but they don't influence whether something is RESTful or not. Using a URI like http://example.com/SomeService?methodName=launchMissiles to identify a customer resource may be bad URI design, and indicate a design smell, but it says nothing about the RESTfulness of the system - it's equivalent to http://example.com/1231546543213212 from a REST perspective. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Ben, In my HTTP i18N patterns article [1], I deal with this question as follows: "Health warning: URI structure is orthogonal to REST Before I go into more detail with my examples, I want to make it clear that what URIs look like is, prima facie, unimportant to REST. This is sometimes referred to as URI opacity or opaqueness. Nonetheless, there are good patterns we can employ for URI structure and human-readable URIs are considered to be 'a good thing'." [1] http://alandean.blogspot.com/2009/01/http-i18n-patterns.html Regards, Alan Dean http://twitter.com/adean On Sun, Feb 1, 2009 at 4:17 AM, Ben Ramsey <benramsey.lists@...> wrote: > Please forgive this intrusion from an "outsider" to this list. I've > followed the list for over a year now, but I've never posted, and I > know the subject of this message may cause some commotion, but that's > not my intent, so please don't mistake me for a troll. This is a > genuine question. > > Is the design of the URL important to the REST architectural style? > > For a long time, I thought it was because I heard others saying it > was, and I see a lot of people talking about designing "RESTful URLs," > but the more I dig into Roy's dissertation and the more I think about > it, can a URL even be called "RESTful?" It seems to me that the term > doesn't apply to URLs. > > Sure, REST has the concept of resources, and resources have addresses, > and those addresses on the Web are Uniform Resource Identifiers, but > is the design of those identifiers important to REST? > > I'm asking because I truly want to know if, over time, the design of > the identifier has become a part of what it means to be RESTful. > Perhaps it's become part of the concepts of REST over time as the > community has evolved and discourse has taken place to define what is > and isn't RESTful. That's why I'm asking here. > > When I discuss REST on my blog, in articles, in presentations, etc., > is it important for me to also discuss the design of the URL? 
> > By the way, I do think well-designed URLs are important. That's not the > question I'm asking. I'm asking whether they are part of REST. > > -Ben > >
Hi, On Sun, Feb 1, 2009 at 5:17 AM, Ben Ramsey <benramsey.lists@...> wrote: > Is the design of the URL important to the REST architectural style? Our in-house architectural style (which I like to think of as more or less RESTful) mandates URLs to be hierarchical with authority boundaries in mind. This way we can separate access control functionality (otherwise dumb HTTP proxy) from service implementation (may assume all requests reaching it are legitimate) if need be. So our URL design is not for the benefit of the client, but to allow separation of concerns on the server side. Where I think we do violate the HATEOAS principle a bit with URLs is how clients must know about search and sort control parameters to manage over-sized resource listings. Gabor Szokoli
On Sun, Feb 1, 2009 at 1:49 AM, Noah Slater <nslater@...> wrote: > On Sat, Jan 31, 2009 at 11:02:59PM -0600, Hugh Winkler wrote: >> Yep. If your URLs describe a suite of actions, with parameters in the >> query string, then you're just building a RPC application. You're >> defining a custom interface for making your app do things, rather than >> sticking to the uniform interface defined by HTTP. > > I think you've thrown the baby out with the bath water. > I haven't thrown any babies out with any bathwater. > URIs are opaque. That means that, technically, Right. Technically. Or as I put it in my first reply, "strictly speaking." Then I went on to offer the practical advice that, even though technically, or theoretically, or strictly speaking, you could design a RESTful app with URLs having these smells, you probably wouldn't. > it doesn't matter what you use at > all. You could design the most insanely RESTful interface using only verbs for > your URIs, or using GUIDs. I think the important piece of advice would be that > you should strongly consider why you want to use verbs in the first place, given > the HTTP method is already doing that for you. > > -- > Noah Slater, http://tumbolia.org/nslater >
* Ben Ramsey <benramsey.lists@...> [2009-02-01 05:20]: > It seems to me that the term doesn't apply to URLs. Correct. > Perhaps it's become part of the concepts of REST over time as > the community has evolved and discourse has taken place to > define what is and isn't RESTful. No. The confusion you refer to isn't new. It has existed for as long as the term "REST" has and is the mark of a certain incomplete understanding of REST that also includes such things as taking the CRUD analogy way too seriously and being ignorant of the hypermedia constraint. (I went through that phase too.) I think it's the "REST support" in Rails that's currently doing the most to spread this mistaken conception of REST. > When I discuss REST on my blog, in articles, in presentations, > etc., is it important for me to also discuss the design of the > URL? If anything is important in that sense, then it's the design of your resource representations: where do hyperlinks go? What does a particular form of link mean? Ie. what does it imply about the operations you can expect to be able to perform on the target of the link? These are the things that a client has to know in order to operate a REST service. The structure of URIs, in contrast, is a server implementation detail that the client neither needs nor should care about. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
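Aristotle's point about link relations can be sketched in Python. The XML and the "account" rel name below are hypothetical, loosely based on the person/account example that opened this thread; the client follows the link rather than constructing or parsing the URI:

```python
import xml.etree.ElementTree as ET

# Hypothetical representation; the URI and rel name are invented.
DOC = """
<person>
  <firstName>TONINHO</firstName>
  <link rel="account"
        href="http://localhost:8080/rest/data/bank/accounts/010123101"/>
</person>
"""

def follow(representation, rel):
    # Return the target URI for a given link relation. The client only
    # knows the media type's conventions (here: <link rel= href=/>);
    # the URI structure itself stays a server implementation detail.
    root = ET.fromstring(representation)
    for link in root.findall("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

account_uri = follow(DOC, "account")
```

If the server later reorganizes its URI space, a client written this way keeps working, since it never assumed anything about the href's shape.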
On Mon, Feb 2, 2009 at 8:42 AM, Aristotle Pagaltzis <pagaltzis@...> wrote: > If anything is important in that sense, then it's the design of > your resource representations: where do hyperlinks go? What does > a particular form of link mean? Ie. what does it imply about the > operations you can expect to be able to perform on the target of > the link? These are the things that a client has to know in order > to operate a REST service. > > The structure of URIs, in contrast, is a server implementation > detail that the client neither need nor should care about. (I know I'm going to regret asking this, but it's been bothering me for so long that I have to ask it.) I think the concept of (promiscuous) bookmarking means that the client effectively does care (or at least becomes dependent upon) the structure of URIs. What's been bothering me for a long time is that I think there is a fundamental tension between REST's concept of "bookmarks as limited set of entry points" and the Web's concept of "URIs for everything". When I read some of the posts in this list, including those by Roy Fielding himself, I sometimes get the sense that "entry point" URIs should be kept to a bare minimum in order to minimize the "dependency surface area" between clients and servers. For example, Roy made this comment about bookmarks<http://tech.groups.yahoo.com/group/rest-discuss/message/10740> : "REST is limited to the client being told what to do next by the current state of where they are now, aside from the entry point(s) we call a bookmark." In other words, bookmarks are an aside -- only a relative handful of URIs should be "bookmarked" by clients. But isn't one of the core principles of the Web the idea that any URI should be bookmarkable and that bookmarking is to be encouraged? And remember, bookmarking doesn't just mean putting the URI into a list of favorites in a browser. It also means the client embedding that URI into a representation that it passes along to some other client. 
Sometimes the REST talk of URI "entry points" seems like an implicit rejection of deep linking <http://en.wikipedia.org/wiki/Deep_linking>, ie forbidding linking to any site page other than a site's main or home page. And a rejection of deep linking is fundamentally at odds with the core principles of the Web. If promiscuous bookmarking is indeed encouraged by the Web (and by REST), then a well-designed system must assume that any URI that appears in any representation it returns could someday be used as an "entry point". In which case the system must assume that the world of clients is potentially dependent on the entire structure of its initial "network" of URIs -- not just a handful of designated "entry point" URIs. In other words the ratio of bookmarked (entry point) URIs to all the URIs returned in representations isn't extremely small, it's potentially one-to-one. To put it another way, what's the difference in the degree or nature of the dependency, from the server's POV, between: 1. The entire set of URIs it has ever returned in representations being bookmarked by clients and then used later as entry points; and 2. Clients generating such entry-point URIs via URI templates and a scripting language Conceptually the difference is that (2) can generate novel URIs that were never returned in any representation. For example, a geospatial system could have returned millions of lat/long URIs in various representations over the years, but never returned one with the specific lat/long that a client script generates, eg no one had ever asked about THAT part of the Pacific yet. But pragmatically there seems to me to be no difference at all. 
In other words, the idea that one can substantially reduce the dependencies between clients and servers by returning a network of URIs in representations (HATEOAS) instead of explicitly documenting the URI templates that could generate them, seems only to work if one prohibits or at least discourages promiscuous bookmarking of URIs, ie prohibits deep linking by REST clients. This is why I think the structure of URIs IS important and the use of URI templates is NOT suspect. Do others see this as a tension as well, or I am just misunderstanding something?
On Mon, Feb 2, 2009 at 9:36 AM, Nick Gall <nick.gall@...> wrote: > On Mon, Feb 2, 2009 at 8:42 AM, Aristotle Pagaltzis <pagaltzis@...> > wrote: .... > To put it another way, what's the difference in the degree or nature of the > dependency, from the server's POV, between: > > The entire set of URIs it has ever returned in representations being > bookmarked by clients and then used later as entry points; and > Clients generating such entry-point URIs via URI templates and a scripting > language > > Conceptually the difference is that (2) can generate novel URIs that were > never returned in any representation. For example, a geospatial system could > have returned millions of lat/long URIs in various representations over the > years, but never returned one with the specific lat/long that a client > script generates, eg no one had ever asked about THAT part of the Pacific > yet. > But pragmatically there seems to me to be no difference at all. In other > words, the idea that one can substantially reduce the dependencies between > clients and servers by returning a network of URIs in representations > (HATEOAS) instead of explicitly documenting the URI templates that could > generate them, seems only to work if one prohibits or at least discourages > promiscuous bookmarking of URIs, ie prohibits deep linking by REST clients. > This is why I think the structure of URIs IS important and the use of URI > templates is NOT suspect. Do others see this as a tension as well, or I am > just misunderstanding something? > If a server returns to the client a URI template -- and assuming clients have a way to identify URI templates in the returned hypermedia e.g. the definition of the Content-type defines <uri-template> tags -- then completing a URI template isn't any different than completing a HTML form. So, URI templates need not conflict with HATEOAS. Hugh
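Hugh's analogy between completing a URI template and completing an HTML form can be made concrete with a minimal sketch. The `{name}` syntax follows the URI Templates draft; the map service URI is invented for illustration, echoing Nick's geospatial example:

```python
import re
from urllib.parse import quote

def expand(template, values):
    # The server supplies the template (the structure); the client
    # supplies only the values -- the same division of labour as an
    # HTML form. Simple {name} substitution with percent-encoding.
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(str(values[m.group(1)]), safe=""),
                  template)

# Hypothetical template a geospatial service might embed in a
# representation it returns:
uri = expand("http://example.org/map/{lat},{lng}",
             {"lat": 12.5, "lng": -170.0})
```

Because the template itself arrives in hypermedia, the client can mint a URI for a part of the Pacific no one has asked about before without ever hard-coding the server's URI structure.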
OpenSocial REST API - http://docs.google.com/View?docid=dcc2jvzt_37hdzwkmf8
If I understand it right, the above URL is not RESTful as it's not denoting a resource rather an action (view). Are we in agreement? If so, could the following be considered RESTful?
http://docs.google.com/doc/dcc2jvzt_37hdzwkmf8
Now, how do you specify the action view against, let's say, download or whatever. In other words, it's not a simple GET. I guess, we can specify this in an XML or such right?
thanks,
-rama
On Mon, Feb 02, 2009 at 08:32:42AM -0800, Ramamoorthy Subramanian wrote: > OpenSocial REST API - http://docs.google.com/View?docid=dcc2jvzt_37hdzwkmf8 > > If I understand it right, the above URL is not RESTful as it's not denoting a > resource rather an action (view). Are we in agreement? If so, could the > following be considered RESTful? > > http://docs.google.com/doc/dcc2jvzt_37hdzwkmf8 If both of these URIs dereference to the same representation, everything else being equal, what difference does it make if one of them uses a query string and the other one doesn't? It's only when you try to tunnel HTTP verbs via the URI that you're starting to get into dangerous territory. More often than not, that means using a query parameter such as "&delete=true" along with a GET request. That does not mean that query strings are bad in general though. > Now, how do specify the action view against let's say download or whatever. Could you explain what "action", "view", "against", and "download" mean? > In other words, it's not a simple GET. I guess, we can specify this in an XML > or such right? What resource do you want to operate on, and what is the operation? -- Noah Slater, http://tumbolia.org/nslater
Here is a way to look at this. Is the client required to understand that it needs to pass "/view" in the URI to "view" this document, and, perhaps, add "/edit" in the URI to "edit" this document? If so, that is not RESTful as it violates the uniform interface. If, on the other hand, the server provides URIs for these actions within the hypermedia, which then drive the client to use GET and POST/PUT to perform them, it may not be violating the uniform interface. To be more specific, the HTML might contain a link to "/edit?docid=...", a GET to which would render an HTML page with a form, and form submission would do a POST to "/edit?docid=...". Some may argue that this is incorrect (since the URIs to get and edit the resource are not the same), but that is where we are with HTML. Subbu On Feb 2, 2009, at 8:32 AM, Ramamoorthy Subramanian wrote: > OpenSocial REST API - http://docs.google.com/View?docid=dcc2jvzt_37hdzwkmf8 > > If I understand it right, the above URL is not RESTful as it's not > denoting a resource rather an action (view). Are we in agreement? If > so, could the following be considered RESTful? > > http://docs.google.com/doc/dcc2jvzt_37hdzwkmf8 > > Now, how do specify the action view against let's say download or > whatever. In other words, it's not a simple GET. I guess, we can > specify this in an XML or such right? > > thanks, > > -rama --- http://subbu.org
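Subbu's hypermedia-driven flow can be sketched as a client that discovers the edit URI from the markup instead of knowing it in advance. The page markup and docid below are invented for this sketch:

```python
from html.parser import HTMLParser

# Hypothetical representation of a document page; the form action is
# supplied by the server, so the client never hard-codes "/edit".
PAGE = ('<html><body>'
        '<form action="/edit?docid=dcc2jvzt" method="post">'
        '<textarea name="content"></textarea>'
        '</form></body></html>')

class FormFinder(HTMLParser):
    # Collect the action of the first form in the page. A real client
    # would also collect the form's fields and method.
    def __init__(self):
        super().__init__()
        self.action = None
    def handle_starttag(self, tag, attrs):
        if tag == "form" and self.action is None:
            self.action = dict(attrs).get("action")

finder = FormFinder()
finder.feed(PAGE)
# finder.action is where the client POSTs the edited document,
# exactly as the server instructed via hypermedia.
```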
On Mon, Feb 2, 2009 at 10:32 AM, Ramamoorthy Subramanian <ramsub4@...> wrote: > OpenSocial REST API - http://docs.google.com/View?docid=dcc2jvzt_37hdzwkmf8 > > If I understand it right, the above URL is not RESTful as it's not denoting > a resource rather an action (view). Are we in agreement? No. You could build a perfectly RESTful app using the above URL design. It just retrieves a resource. As you point out below, it's functionally no different than the alternative URL: > If so, could the > following be considered RESTful? > > http://docs.google.com/doc/dcc2jvzt_37hdzwkmf8 > > Now, how do specify the action view against let's say download or whatever. > In other words, it's not a simple GET. I guess, we can specify this in an > XML or such right? > A good example is Edit. http://docs.google.com/Edit?docid=dcc2jvzt_37hdzwkmf8 could return you an editable form. What you've really got here isn't verbs at all. You're asking for the "viewable" resource or the "editable" resource (i.e. form). Perfectly RESTful. Your question was about "download". Presumably that URL takes you to a form where you select a download format. That's a legitimate resource itself. Here's an example of a verby URL that smells: http://my.bank.com/transferMoney?from=a&to=b That's what I was advising against. Hugh > thanks, > > -rama >
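For contrast, a sketch of how Hugh's money-transfer example could stay within the uniform interface. The collection URI and field names are invented: instead of tunnelling the verb through the URI, the client POSTs a representation of a new transfer to a collection resource:

```python
from urllib.parse import urlencode

# RPC style (the smell):
#   GET http://my.bank.com/transferMoney?from=a&to=b
# -- the method name lives in the URI, and a GET has side effects.
#
# Resource style: POST a new "transfer" to a collection; the server
# would respond 201 Created with a Location for the new resource,
# e.g. /transfers/42, which can then be GET'd safely.
body = urlencode({"from": "a", "to": "b", "amount": "100.00"})
# POST /transfers HTTP/1.1
# Content-Type: application/x-www-form-urlencoded
#
# from=a&to=b&amount=100.00
```

The transfer itself becomes an addressable resource, and the only verb in sight is the HTTP method.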
> Just for the sake of argument, why does it matter that the URL has verbs > in it? It doesn't. It is however a sign that someone may have made some unRESTful decisions elsewhere that are reflected in their choice of the URI. In and of itself though, URIs are to REST purely opaque strings. This does not mean that URI design is irrelevant to the design of a good RESTful system. But generally it isn't important to the RESTfulness so much as other qualities of the system (unless you are building an example to demonstrate REST, RESTfulness will not be your sole concern). That REST doesn't give a damn about the URI itself means it leaves one free to deal with such concerns as one sees fit. There are uses of URIs that are counter to REST; Roy's dissertation gives examples of authorisation and session information being passed in the URI, which is counter to REST. The exact same URI could be used in a RESTful system though (though such a choice of a URI that looks like it contains session information when in fact it doesn't would be idiosyncratic to the point of being bizarre).
Nick, I see what you're getting at. Cool URIs don't change because of linking. But those URIs are opaque, even though they are entry points. The point of the opaqueness is to prevent architectures where the client needs to know in advance the structure of said URIs. As Hugh correctly points out, you're allowed to define your own way of building URIs, such as query string forms in HTML and URI templates in your custom media type, provided that the knowledge of how to build URIs is specific to a media type, and as such to a client understanding the media type, and the initial template to build is provided by the server. Then you don't assume knowledge beyond the knowledge of how to process the media type. AKA you want the semantics of URIs to stay on the server, or conveyed by a media type, but not propagate out of those boundaries. And because you'd use query strings or URI templates, you still achieve the serendipity role. I don't think having to keep persistent URIs to prevent breaking bookmarks leaks any semantics: the "cool URIs don't change" motto is orthogonal to the lack of clients understanding anything *but* a media type and the uniform interface. You want new clients to not be tied to a specific URI structure but to a media type telling you how to build the URIs. When you change your address space, you can still provide the old URIs you assigned and redirect to the new ones without fear of new clients not implementing the new behaviour and semantics associated with the new addressing scheme. Changing your URI space after you've assigned URIs to resources is probably already a declaration of intent: you f*cked up the previous one and need to start fresh. Then it is your responsibility to ensure the migration, and to not have to change that space again, as it's a costly exercise, and one that is largely avoidable. 
Seb From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Nick Gall Sent: 02 February 2009 15:37 To: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: RESTful URLs? On Mon, Feb 2, 2009 at 8:42 AM, Aristotle Pagaltzis <pagaltzis@...> wrote: > If anything is important in that sense, then it's the design of > your resource representations: where do hyperlinks go? What does > a particular form of link mean? Ie. what does it imply about the > operations you can expect to be able to perform on the target of > the link? These are the things that a client has to know in order > to operate a REST service. > > The structure of URIs, in contrast, is a server implementation > detail that the client neither need nor should care about. (I know I'm going to regret asking this, but it's been bothering me for so long that I have to ask it.) I think the concept of (promiscuous) bookmarking means that the client effectively does care (or at least becomes dependent upon) the structure of URIs. What's been bothering me for a long time is that I think there is a fundamental tension between REST's concept of "bookmarks as limited set of entry points" and the Web's concept of "URIs for everything". When I read some of the posts in this list, including those by Roy Fielding himself, I sometimes get the sense that "entry point" URIs should be kept to a bare minimum in order to minimize the "dependency surface area" between clients and servers. For example, Roy made this comment <http://tech.groups.yahoo.com/group/rest-discuss/message/10740> about bookmarks: "REST is limited to the client being told what to do next by the current state of where they are now, aside from the entry point(s) we call a bookmark." In other words, bookmarks are an aside -- only a relative handful of URIs should be "bookmarked" by clients. But isn't one of the core principles of the Web the idea that any URI should be bookmarkable and that bookmarking is to be encouraged? 
And remember, bookmarking doesn't just mean putting the URI into a list of favorites in a browser. It also means the client embedding that URI into a representation that it passes along to some other client.

Sometimes the REST talk of URI "entry points" seems like an implicit rejection of deep linking <http://en.wikipedia.org/wiki/Deep_linking>, ie forbidding linking to any site page other than a site's main or home page. And a rejection of deep linking is fundamentally at odds with the core principles of the Web.

If promiscuous bookmarking is indeed encouraged by the Web (and by REST), then a well-designed system must assume that any URI that appears in any representation it returns could someday be used as an "entry point". In which case the system must assume that the world of clients is potentially dependent on the entire structure of its initial "network" of URIs -- not just a handful of designated "entry point" URIs. In other words, the ratio of bookmarked (entry point) URIs to all the URIs returned in representations isn't extremely small, it's potentially one-to-one.

To put it another way, what's the difference in the degree or nature of the dependency, from the server's POV, between:

1. The entire set of URIs it has ever returned in representations being bookmarked by clients and then used later as entry points; and
2. Clients generating such entry-point URIs via URI templates and a scripting language.

Conceptually the difference is that (2) can generate novel URIs that were never returned in any representation. For example, a geospatial system could have returned millions of lat/long URIs in various representations over the years, but never returned one with the specific lat/long that a client script generates, eg no one had ever asked about THAT part of the Pacific yet. But pragmatically there seems to me to be no difference at all. 
In other words, the idea that one can substantially reduce the dependencies between clients and servers by returning a network of URIs in representations (HATEOAS) instead of explicitly documenting the URI templates that could generate them seems only to work if one prohibits, or at least discourages, promiscuous bookmarking of URIs, ie prohibits deep linking by REST clients.

This is why I think the structure of URIs IS important and the use of URI templates is NOT suspect. Do others see this as a tension as well, or am I just misunderstanding something?
On Mon, Feb 2, 2009 at 11:05 AM, Hugh Winkler <hughw@...> wrote:
> If a server returns to the client a URI template -- and assuming
> clients have a way to identify URI templates in the returned
> hypermedia, e.g. the definition of the Content-type defines
> <uri-template> tags -- then completing a URI template isn't any
> different than completing an HTML form. So, URI templates need not
> conflict with HATEOAS.

Given my argument, what's the difference between "bookmarking" a URI template for later use and bookmarking the set of "fully instantiated" URIs the template could generate? In either case, the server is liable (even in the legal sense of the word) to be sent a URI from an old "URI space" and will have to deal with it or risk violating the "cool URIs don't change" principle.
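Hugh's observation that completing a URI template is no different from completing an HTML GET form can be checked mechanically. Both functions below are illustrative sketches; the form action and template URIs are hypothetical examples, not from the thread.

```python
# Sketch of Hugh's point: filling in an HTML GET form and expanding a
# URI template yield the same request URI, so templates delivered in
# hypermedia are no more "out of band" than forms are.
import urllib.parse

# An HTML form: <form action="http://example.net/search" method="get">
#                 <input name="q"/></form>
def submit_form(action: str, fields: dict) -> str:
    """Build the GET URI a browser would request on form submission."""
    return action + "?" + urllib.parse.urlencode(fields)

# The equivalent URI template, as a server might embed in a response:
def expand_template(template: str, values: dict) -> str:
    """Naive {name}-style expansion with form-style escaping."""
    for k, v in values.items():
        template = template.replace("{" + k + "}", urllib.parse.quote_plus(v))
    return template

lhs = submit_form("http://example.net/search", {"q": "rest"})
rhs = expand_template("http://example.net/search?q={q}", {"q": "rest"})
print(lhs == rhs)  # True
```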
> Given my argument, what's the difference between "bookmarking" a URI
> template for later use and bookmarking the set of "fully
> instantiated" URIs the template could generate?

Because templates are not addressable; URIs are. If you want bookmarking of templates, as a specific jump through hypertext to a resource representation, you would need template + identifiers, which results in one URI. Why would you go through the extra effort? You don't bookmark the way to build a search, you bookmark the result of the search. If you wanted to store a URI template and its associated values, then you effectively have the semantic equivalent of a URI, and I would question why you need that.

To put it in layman's terms: while you can choose the name of your kids using whatever process is contextually acceptable in your family, a government form registering your baby's name shouldn't have to care about how you came to the name, but only about the name itself. While the process may be repeatable, so that applying it again would yield the same name, the level of indirection has no value. And it's brittle, because for your next kid you may change the way you choose the name, without changing the name of your first kid. And other families would adopt a completely different way. So as far as the government bookmarking people by their name goes, they treat it as a fairly opaque identifier, even if you don't.

Seb
On Mon, Feb 2, 2009 at 10:52 AM, Sebastien Lambla <seb@...> wrote:
> > Given my argument, what's the difference between "bookmarking" a URI
> > template for later use and bookmarking the set of "fully
> > instantiated" URIs the template could generate?

If you grab an HTML page with a form in it and save it for later, you might have just bookmarked a URI template.

> Because templates are not addressable, URIs are. If you want bookmarking of
> templates, as a specific jump through hypertext to a resource
> representation, you would need template + identifiers, which results in one
> URI.
>
> Why would you go through the extra effort?
>
> You don't bookmark the way to build a search, you bookmark the result of
> the search.

I do. Some sites support auto-discovery and OpenSearch; you can find a lot of search templates here [1], or, when all else fails, enter the URL template directly. Most of the searches I run are initiated from the address bar or the search bar and go through one of these templates to construct a URL that is then sent to the server. I rarely go to the site itself to initiate a search.

Assaf

[1] http://mycroft.mozdev.org/search-engines.html

> If you wanted to store URI template and associated values, then you
> effectively have the semantic equivalent of a URI and I would question why
> you need that.
>
> To put it into laymen's terms, while you can choose the name of your kids
> using whatever process is contextually acceptable in your family, a
> government form registering your baby's name shouldn't have to care about
> how you came to the name, but only about the name itself. While the process
> may be repeatable that by applying it you would end up with the kid, the
> level of indirection has no value. And it's brittle because for your next
> kid you may change the way you chose the name, without changing the name of
> your first kid. And other families would adopt a completely different way. 
> > So as far as the government bookmarking people with their name, they treat > it as a fairly opaque identifier, even if you don't. > > Seb > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Mon, Feb 2, 2009 at 3:01 PM, Assaf Arkin <assaf@...> wrote:
> On Mon, Feb 2, 2009 at 10:52 AM, Sebastien Lambla <seb@...>
wrote:
>> You don't bookmark the way to build a search, you bookmark the result of
>> the
>> search.
>
> I do. Some sites support auto-discovery and OpenSearch, you can find a lot
> of search templates here [1], or when all else fails, enter the URL
> template
> directly. Most of the searches I run are initiated from the address bar or
> the search bar and go through one of these templates to construct a URL
> that
> is then sent to the server. I rarely go to the site itself to initiate a
> search.
> Assaf
>
> [1] http://mycroft.mozdev.org/search-engines.html
Great point Assaf! I'd completely forgotten about OpenSearch. OpenSearch is
effectively an architecture for "bookmarking" search URI templates in
browsers. I currently have about a dozen or so search templates stored in
Firefox.
So I guess the question is whether or not the
OpenSearch <http://www.opensearch.org/> media type and its intended use
is RESTful. When a user agent (eg a browser)
retrieves the OpenSearch XML Description Document (media
type application/opensearchdescription+xml), it extracts the URL template
and stores it for use indefinitely. When, for example, the template is used
to augment the search box in a browser, it generates the search query
URL with the entered text as a parameter.
So, does the semi-permanent storage ("bookmarking") of the template by the
browser violate HATEOAS? Should a browser actually do two HTTP requests each
and every time a user does a search from the search bar:
1. Request the OpenSearch Description Document (ie never store the
description)
2. Extract the URL template from the Description Document and compose the
search query URL
Perhaps a better approach for dealing with an out-of-date Description
Document would be for the OpenSearch resource to do some form of redirect by
returning a new Description Document with the appropriate (303?) status
code. The user agent would then store the new description before using it to
compose a new search query URL.
So perhaps the answer to bookmarking URL templates isn't that it is right or
wrong per se, but that the right way to bookmark URL templates is to provide
a mechanism for informing the client when it should use a new URL template,
eg via a 303 response code? Would such an approach be legal use of HTTP? Can
a 303 response to a search query request be not (just) a new search URL but
also some hypertext with the new template description document?
If all this is indeed RESTful and legal HTTP behavior then HATEOAS could
support a generalized approach to "soft state" URL templates -- persisted by
the client until told by a server response that the template should be
replaced by the response. What would be clearly unRESTful would be a design
based on "hardcoded" URL templates (say in developer documentation) in which
there was no mechanism for such URL templates to be updated automatically by
user agents via normal Web interactions.
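The OpenSearch flow described above (retrieve the description document, extract the Url template, substitute {searchTerms}) can be sketched like this. The description document below is a minimal hand-written example, not taken from any real site; real documents carry more elements, and this sketch skips the HTTP fetch and the 303-refresh idea entirely.

```python
# Sketch: parse an OpenSearch Description Document and expand its Url
# template with the user's search terms, as a browser's search bar does.
import urllib.parse
import xml.etree.ElementTree as ET

OSD_NS = "{http://a9.com/-/spec/opensearch/1.1/}"

# Minimal hand-written description document (illustrative only):
description = """<?xml version="1.0"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Url type="text/html"
       template="http://example.net/search?q={searchTerms}"/>
</OpenSearchDescription>"""

def search_uri(doc: str, terms: str) -> str:
    """Extract the Url template and substitute the escaped search terms."""
    root = ET.fromstring(doc)
    template = root.find(OSD_NS + "Url").get("template")
    return template.replace("{searchTerms}", urllib.parse.quote(terms))

print(search_uri(description, "rest discuss"))
# http://example.net/search?q=rest%20discuss
```

A browser that "bookmarks" the template is simply caching the output of `ET.fromstring` between searches; the soft-state question in the post is about when that cache must be refreshed.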
> But pragmatically there seems to me to be no difference at all. In other
> words, the idea that one can substantially reduce the dependencies between
> clients and servers by returning a network of URIs in representations
> (HATEOAS) instead of explicitly documenting the URI templates that could
> generate them, seems only to work if one prohibits or at least discourages
> promiscuous bookmarking of URIs, ie prohibits deep linking by REST clients.
>
> This is why I think the structure of URIs IS important and the use of URI
> templates is NOT suspect. Do others see this as a tension as well, or am I
> just misunderstanding something?

I agree with this. Yes, the problems of URI design (opaque vs "hackable") and of REST API design are mostly orthogonal, because the hypermedia defines the primary points of client interaction with the application. But the correct use of hypermedia to drive clients should in no way *preclude* the use of URIs directly, IMO.

Part of the power of a stateless API is the ability for a client to enter the application at practically any starting point. If clients are forced to always begin at the system-specified "beginning" URI and then do all subsequent navigation via hypermedia embedded inside representations, we aren't (at least in one way) that much better off than we were with RPC or a distributed object system where getting anywhere interesting required navigating an object model. All I wanted was to hack together a little script to look at a customer record, but I had to make 6 round trips to the server and interrogate 5 representations to get it!

Part of the beauty of a RESTful architecture is that the level of complexity imposed by the API matches the level of complexity and robustness required by the client app. With REST this choice can be, in many ways, left to the client developer. 
As a slight aside, another area where human-readable URIs are useful in RESTful design is in applications that have open-ended state transitions. Specific web applications tend to be fairly restricted in the number of things it makes sense to do next from a particular resource. But applications that are more like frameworks themselves (ie. databases to some degree, or content repositories) can have a practically unlimited number of potential state transitions.

The application will provide the most common ones as hypermedia in the representation, and maybe it has some forms or templates for parametrized extensions. But for some application types, it will just not be practical or feasible for this list to be exhaustive. URI construction can be a second-tier back door when hypermedia hits a wall between the known next states and the uncertain (but still valid) ones.

scott
scameron02 wrote: > (opaque vs > "hackable") There is no opaque vs hackable. All URIs are opaque to all but a couple of processes. It's not that URIs arguably should be opaque. Every single one of them is.
Jon Hanna <jon@...> wrote: > > There is no opaque vs hackable. All URIs are opaque to all but a couple > of processes. It's not that URIs arguably should be opaque. Every single > one of them is. > If a URI scheme is human-readable, clearly documented, and guaranteed by the server to never change, how is that opaque? I'm not saying it's necessarily (or not) RESTful, but it certainly doesn't seem to align with your statement that all URIs are opaque by definition.
My answer to the "REST-ful URI" question is this: URIs are the *product* of a REST-ful implementation, not the initiator. IOW, URI design is the *last* step in building a REST-ful app[1]. The first step is to design the application workflow using HATEOAS within Resources. Once you have a clear workflow and well-designed Resources, the URIs will work themselves out.

In fact, by starting with HATEOAS, you can (if you must) change some of the URIs within your running app without suffering a fatal blow to your implementation. Finally, if you are squeamish about changing URIs in an already-released app, employ a server-side URI redirector to keep stale URIs from killing bookmarks.

mca http://amundsen.com/blog/

[1] - REST Upside Down (http://www.amundsen.com/blog/archives/885)

On Mon, Feb 2, 2009 at 19:04, scameron02 <scott.cameron@...> wrote:
>
> Jon Hanna <jon@...> wrote:
>> There is no opaque vs hackable. All URIs are opaque to all but a couple
>> of processes. It's not that URIs arguably should be opaque. Every single
>> one of them is.
>
> If a URI scheme is human-readable, clearly documented, and guaranteed by
> the server to never change, how is that opaque? I'm not saying it's
> necessarily (or not) RESTful, but it certainly doesn't seem to align
> with your statement that all URIs are opaque by definition.
* Subbu Allamaraju <subbu@...> [2009-02-02 17:50]: > Some may argue that this is incorrect (since the URIs to get > and edit the resource are not the same), but that is where we > are with HTML. It’s hard to argue that it is incorrect per se. As long as state transitions are driven by links and forms, and you respect the uniform interface, then the system is RESTful no matter what the URIs look like. It’s certainly suboptimal to GET one URI and POST to another, though – cache invalidation comes to mind. (Generally I am finding that intermediaries tend to be the tie-breaker in many design choices which are orthogonal to REST.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
<snip> Generally I am finding that intermediaries tend to be the tie-breaker in many design choices which are orthogonal to REST. </snip>

+1

mca http://amundsen.com/blog/

On Tue, Feb 3, 2009 at 08:09, Aristotle Pagaltzis <pagaltzis@...> wrote:
> * Subbu Allamaraju <subbu@...> [2009-02-02 17:50]:
>> Some may argue that this is incorrect (since the URIs to get
>> and edit the resource are not the same), but that is where we
>> are with HTML.
>
> It's hard to argue that it is incorrect per se. As long as state
> transitions are driven by links and forms, and you respect the
> uniform interface, then the system is RESTful no matter what the
> URIs look like.
>
> It's certainly suboptimal to GET one URI and POST to another,
> though – cache invalidation comes to mind. (Generally I am
> finding that intermediaries tend to be the tie-breaker in many
> design choices which are orthogonal to REST.)
>
> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
* mike amundsen <mamund@...> [2009-02-03 05:40]: > In fact, by starting with HATEOAS, you can (if you must) change > some of the URIs within your running app without suffering a > fatal blow to your implementation. Which is, of course, *the whole point*: the server can evolve without having to inform the clients – the resources created today can be put into a different URI structure than the resources created yesterday, and as long as the clients stick to hypermedia instead of hard-wired URI construction rules, they will continue working as if nothing ever happened. (Because in fact nothing ever did happen.) If you want to change the URIs of the old resources you can put redirects in place to keep old bookmarks working, but you could equally well keep the old URIs canonical for old resources. Also, you can interconnect two different applications, or even ditch the server code of one of them and configure the other to continue to provide its functionality, without one of the apps having to subsume the URI space of the other. There is no forced homogenisation of systems. You can actually, in the true meaning of the word, *integrate* services! Who’da thunk. The brilliance of REST is all about the hypermedia constraint. The other constraints are only loosely dependent on each other and individually negotiable. But together they form the structure that supports the hypermedia constraint, and the hypermedia constraint in turn multiplies the value of the other constraints. It is the hypermedia constraint that makes REST as a style greater than the sum of its constraints. It is the focal point and amplifier of all the constraints. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
scameron02 wrote:
> If a URI scheme is human-readable, clearly documented, and guaranteed by
> the server to never change, how is that opaque?

The things reading it not being human, http://example.net/main_api/users/user23 is no less opaque than http://example.net/ohiuwftojiics09fsdao. The advantage of the latter is that nobody will make the mistake of thinking it isn't opaque, and treating it as if it weren't.

Relatedly, guarantees of never changing are not always desirable and much less often possible. It's good for URIs to be able to change; it's better (and easier) for systems to know how to find the URIs they need starting with a very small number (1 being a good example of a very small number) of URIs.
Jon Hanna <jon@...> wrote:
> The things reading it not being human,
> http://example.net/main_api/users/user23 is no less opaque than
> http://example.net/ohiuwftojiics09fsdao.
>
> The advantage of the latter is that nobody will make the mistake of
> thinking it isn't opaque, and treating it as such.

I think maybe we're arguing semantics here, but your first example most certainly has the potential to be what I would consider transparent or "hackable" (as opposed to opaque). In fact, I would argue that both of your examples could potentially be either. If "ohiuwftojiics09fsdao" is, say, a globally unique ID referencing a resource, then I can (assuming the server says I can) take an analogous ID from another object and put it into the same place in a URI so that a GET will return the object I'm asking for. Of course, if "ohiuwftojiics09fsdao" is some kind of proprietary hash then this is truly opaque.

I suppose I define transparency to be the degree to which the location of one resource can be inferred from the location of another resource. The robustness of this over time is completely determined by the server implementation.

Again, this does not say anything about whether or not this is generally recommended or even RESTful. I think it's been pretty well established by many replies to this thread that URI design is quite a minor part of whether or not a design is RESTful. There may be many other reasons for choosing opaque vs. transparent. My original reply was simply making the point that there are certain types of applications where hypermedia cannot be the sole driver of state, so URI transparency may be a good second-tier alternative.

scott
scameron02 wrote:
> I think maybe we're arguing semantics here,

Yes, but not unimportant semantics. It's an important principle of web architecture that URIs are opaque. Using "opaque" differently doesn't change that.

> but your first example most
> certainly has the potential to be what I would consider transparent or
> "hackable"

Often a plus, but a plus irrelevant to REST. A minus if you abuse the advantages in ways that lead to brittleness.

> (as opposed to opaque).

No, as well as opaque to most processes.

> If "ohiuwftojiics09fsdao"
> is, say, a globally unique ID referencing a resource then I can
> (assuming the server says I can) take an analogous ID from another object
> and put it into the same place in a URI so that a GET will return me the
> object I'm asking for.

Whether ohiuwftojiics09fsdao is a globally unique ID or not, http://example.net/ohiuwftojiics09fsdao is a globally unique ID, as is http://example.net/main_api/users/user23, so why bother?

> I suppose I define transparency to be the degree to which the location
> of one resource can be inferred from the location of another
> resource. The robustness of this over time is completely determined by
> the server implementation.

No it's not. The robustness over time is determined by the fact that if one uses HATEOAS then it doesn't matter which URIs are used. Structured URIs can aid this by allowing for relative links and by making spot-hacks by humans easier, but these are conveniences. Building on assumptions about structure will always be less robust than not doing so, no matter how good or bad those assumptions are and whether the winds of fate blow with you or against you. Its continuing to work is a matter of luck. And those assumptions would be present or absent in the client, not the server. 
> My original reply was simply making > the point that there are certain types of applications where hypermedia > can not be the sole driver of state, so URI transparency may be a good > second-tier alternative. They aren't RESTful. They may be useful, they may be great in many ways, and when I control the client and the server I often build them, but they aren't RESTful.
--- In rest-discuss@yahoogroups.com, Jon Hanna <jon@...> wrote:
>
> > but your first example most
> > certainly has the potential to be what I would consider transparent
or
> > "hackable"
>
> Often a plus, but a plus irrelevant to REST.
>
> A minus if you abuse the advantages in ways that lead to brittleness.
Yes, I agree that abuse can lead to brittleness. But I would apply that
same statement against the semantics exposed by pretty much any API. In
fact, I don't think there is such a thing as a non-static, non-brittle
API. It either never changes, or it has the potential to break clients.
Using hypermedia pushes many of the semantics of the API into
standardized formats that are very static, which makes the use of
hypermedia less brittle than a custom API. But there is still
brittleness there (albeit less of it), as there is in any non-standard
API.
If I tell you as a client that there will be a <link> element with a
"rel" attribute marked "parents" in the representation of a resource and
that this hyperlink will retrieve a list of parent resources then there
must be some understanding that the server will not someday randomly
decide to start sticking a hyperlink into "parents" that returns, for
example, child resources instead.
Any agreement between client and server has the potential to be abused.
I do agree with you that hypermedia reduces this potential.
>
> Whether ohiuwftojiics09fsdao is a globally unique ID or not,
> http://example.net/ohiuwftojiics09fsdao is a globally unique ID, as is
> http://example.net/main_api/users/user23, so why bother?
>
Sorry, I wasn't clear. I didn't mean globally unique ID in terms of the
URI identifying a resource. I meant something like a GUID or some
application-internal identifier. You could pull a GUID out of one
resource and use it to construct a URI based on the GUID-based URI from
another resource.
This brings up an interesting question. Do you believe that URI
templates have the potential to fall into the category of hypermedia?
Say I do have a URI scheme that looks like this:
http://example.net/{guid}
And in a particular resource representation I return something that has
this URI template in a contextualized <link-template> element along with
information about a bunch of resources each including a GUID in a
well-known location. If I can programmatically inspect the URI template
and construct valid URIs from it using GUID information from other parts
of the representation, is this considered hypermedia?
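The <link-template> idea in the question above can be sketched concretely. The element names (link-template, item, guid) and the document shape are the poster's hypothetical proposal, not an established format; the point is only that URI construction here is driven entirely by data the server put in the representation.

```python
# Sketch: a representation carries both a URI template and the GUIDs
# needed to fill it, so the client builds item URIs mechanically with
# no baked-in knowledge of the server's URI structure.
import xml.etree.ElementTree as ET

# Hypothetical representation, per the <link-template> proposal above:
doc = """<collection>
  <link-template rel="item">http://example.net/{guid}</link-template>
  <item><guid>ohiuwftojiics09fsdao</guid></item>
  <item><guid>a1b2c3d4</guid></item>
</collection>"""

root = ET.fromstring(doc)
template = root.find("link-template").text
uris = [template.replace("{guid}", item.findtext("guid"))
        for item in root.findall("item")]
print(uris)
# ['http://example.net/ohiuwftojiics09fsdao', 'http://example.net/a1b2c3d4']
```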
>
> The robustness over time is determined by the fact that if one uses
> HATEOAS then it doesn't matter which URIs are used. Structured URIs can
> aid this by allowing for relative links and by making spot-hacks by
> humans easier, but these are conveniences.
>
Everybody seems to agree that the use of hypermedia as the primary (or
in your case, sole) driver of state transition is the most desirable
approach. But do you agree that there are applications where returning
an exhaustive list of potential next state transitions in a
representation is just not feasible? As per my original post, how do
you handle applications of this kind?
It seems to me that in cases like this there will have to be a
deterministic, programmatic method of producing URIs based on some kind
of template or documentation or something else. Or do you believe that
such applications just cannot be purely RESTful?
scott
* scameron02 <scott.cameron@...> [2009-02-03 18:45]:
> Everybody seems to agree that the use of hypermedia as the
> primary (or in your case, sole) driver of state transition is
> the most desirable approach.

In my case too. Also in Roy Fielding's case, which leaves little wiggle room for turning this into a matter of consensus.

> But do you agree that there are applications where returning an
> exhaustive list of potential next state transitions in a
> representation is just not feasible? As per my original post,
> how do you handle applications of this kind?

URI templates or forms (which are basically the same thing).

> It seems to me that in cases like this there will have to
> be a deterministic, programmatic method of producing URIs based
> on some kind of template or documentation or something else.

Exactly. But the template must be provided by the server in hypermedia, not hardwired into the client on a per-app basis.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
--- In rest-discuss@yahoogroups.com, Aristotle Pagaltzis <pagaltzis@...> wrote:
> > It seems to me that in cases like this there will have to
> > be a deterministic, programmatic method of producing URIs based
> > on some kind of template or documentation or something else.
>
> Exactly. But the template must be provided by the server in
> hypermedia, not hardwired into the client on a per-app basis.

This is where I get a little bit stuck when I think about the hypermedia constraint of REST. At some point it seems that a transition needs to be made between the world of machine-discovered application interaction and the world of human-discovered application interaction. At some level there will be an agreement made between the server application writers and the client application writers that tells the client guys what they need to hardcode into their apps. That is, which aspects of the interface are guaranteed to never change.

One option for doing this is a documented URI schema. In REST, this is considered bad because clients should be interacting with hypermedia, not constructing URIs based on promises from the server.

The other option (the one prescribed by REST) is hypermedia. OK, so you return a set of hyperlinks or some kind of form to the client. The use of well-known hyperlink tags and standard form vocabularies provides a higher level of machine-discoverable semantics. For example, my app knows it's safe to do a GET on a hyperlink. And it knows that, say, the form is asking for an input selection of one of the following 3 choices. But what does this really mean? How does my application distinguish between one hyperlink and another hyperlink? The one tagged "parents" probably returns the current resource's parent resources and the one tagged "children" returns its children. But aren't we back to hardcoding knowledge into the application about what "parents" and "children" actually mean? If the server ever changes this meaning, my application breaks. 
The same goes for the content (as opposed to the structure) of the forms.

I think that the answer comes down to well-known representation formats. The semantics hardcoded into the client application should be based on metadata that is not determined by the server but rather by an agreed-upon, standardized format that is guaranteed to never change. As long as the server implements the format correctly, everybody is automatically in agreement. But -- taking forms as a specific example -- there still seems to be missing information. The form structure may be standardized, but the semantics associated with the content of a specific form instance will be specialized for each application. Aren't we back, then, to hard-coding semantics into the client based on documentation from the server?

In this case, how much difference is there between documenting the semantics of a transparent URI schema and documenting the semantics of a specific form instance (assuming the form is used to drive a program)?

I'm not trying to be argumentative here... the hypermedia constraint of REST is something that I'm finding the most difficult of all the core principles to understand in practical terms. I'm very interested in your take on this topic.

Thanks,
scott
On Wed, Feb 4, 2009 at 1:53 PM, scameron02 <scott.cameron@...> wrote: > > --- In rest-discuss@yahoogroups.com, Aristotle Pagaltzis <pagaltzis@...> > wrote: > > > > > It seems to me that in cases like this this there will have to > > > be a deterministic, programmatic method of producing URIs based > > > on some kind of template or documentation or something else. > > > > Exactly. But the template must be provided by the server in > > hypermedia, not hardwired into the client on a per-app basis. > > > > This is where I get a little bit stuck when I think about the hypermedia > constraint of REST. At some point it seems that a transition needs to be > made between the world of machine-discovered application interaction and the > world of human-discovered application interaction. At some level there will > be an agreement made between the server application writers and the client > application writers that tells the client guys what they need to hardcode > into their apps. That is, which aspects of the interface are guaranteed to > never change. > > One option for doing this is a documented URI schema. In REST, this is > considered bad because clients should be interacting with hypermedia, not > constructing URIs based on promises from the server. > > The other option (the one prescribed by REST) is hypermedia. OK, so you > return a set of hyperlinks or some kind of form to the client. The use of > well-known hyperlink tags and standard form vocabularies provides a higher > level of machine-discoverable semantics. For example, my app knows it’s > safe to do a GET on a hyperlink. And it knows that, say, the form is asking > for an input selection of one of the following 3 choices. But what does > this really mean? How does my application distinguish between one hyperlink > and another hyperlink? The one tagged “parents†probably returns the > current resource’s parent resources and the one tagged “children†> returns its children. 
But aren't we back to hardcoding knowledge into the > application about what "parents" and "children" actually mean? If the > server ever changes this meaning, my application breaks. The same goes for > content (as opposed to the structure) of the forms. > > I think that the answer comes down to well-known representation formats. > The semantics hardcoded into the client application should be based on metadata > that is not determined by the server but rather by an agreed-upon, > standardized format that is guaranteed to never change. As long as the > server implements the format correctly, everybody is automatically in > agreement. But -- taking forms as a specific example -- there still seems > to be missing information. The form structure may be standardized, but the > semantics associated with the content of a specific form instance will be > specialized for each application. Aren't we back, then, to hard-coding > semantics into the client based on documentation from the server? > > In this case, how much difference is there between documenting the > semantics of a transparent URI schema and documenting the semantics of a > specific form instance (assuming the form is used to drive a program)? > If you use hypermedia, servers have more liberty in choosing which URL structures to use, even which methods and content types. It's easier to build compatible services using different tools, platforms, change them over time, build them out of parts, etc. When the only thing you reuse is the content type, it's easier to mix as many clients and servers as you need. It's also easier to extend by adding more links/actions into the hypermedia. If you use URL structures (don't forget to also list which methods are available when) you've added a restriction on the servers. And URLs that are really easy, sometimes free, in one platform can be a pain to support in another. A structure you think is awesome today could feel very restrictive tomorrow. 
And extensibility is hard because you can't determine what a service is capable of; you no longer get a list of links/actions, you have to guess whether making a request against a URL is safe or not. On the other hand, in my experience it's easier to build code around a fixed URL structure, and if you only imagine having one (or few) clients to a single service, hypermedia might be overkill. Assaf > > > I'm not trying to be argumentative here... the hypermedia constraint of > REST is something that I'm finding the most difficult of all the core > principles to understand in practical terms. I'm very interested in your > take on this topic. > > Thanks, > scott > > > >
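To make the link-following idea in this exchange concrete, here is a minimal, hypothetical Python sketch of a client that navigates by link relation rather than by constructing URIs. The representation shape (a "links" list of "rel"/"href" pairs) is my own invention for illustration, not a standard format:

```python
def find_link(representation, rel):
    """Return the href of the first link with the given relation, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None

# A made-up server response; the client never builds or parses these URIs,
# it only follows the "account" relation wherever it happens to point.
person = {
    "firstName": "TONINHO",
    "lastName": "METRALHA",
    "links": [
        {"rel": "self",
         "href": "http://localhost:8080/rest/data/person/101"},
        {"rel": "account",
         "href": "http://localhost:8080/rest/data/bank/accounts/010123101"},
    ],
}

account_uri = find_link(person, "account")
```

If the server later reorganizes its URI space, only the href values change; the client keeps working because it depends on the "account" relation name, not on any URI structure.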
Understanding the hypermedia aspects of a RESTful design can be
challenging, especially when talking about practical specifics. I
thought it might be useful for people new to REST (like me) to have some
concrete examples to look at for ideas about the "right" way to do
things.
I'm most interested in the use of hypermedia to drive program-to-program
interactions (as opposed to human-to-program, like a website).
What are examples of hypermedia technologies that you've used
successfully or that you've seen used successfully? Some of the
commonly cited examples I've seen are:
* Atom and APP
* OpenSearch
* XML with custom XML Schema
* XForms
* XLink
* HTML microformats
Have you had success with any of these? Are there some there that you
don't think should be there? Others that are missing?
Second, what are the existing, publicly available applications or
frameworks currently in use that you feel are examples of really good
RESTful design, especially in terms of their use of hypermedia to drive
application state?
Thanks,
scott
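Since Atom/APP tops most such lists, here is a small sketch (standard library only) of reading Atom link elements as the hypermedia that drives a client; the feed content below is invented for the example:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A made-up feed; "self" and "next" are standard Atom link relations.
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <link rel="self" href="http://example.org/feed"/>
  <link rel="next" href="http://example.org/feed?page=2"/>
</feed>"""

def links_by_rel(xml_text):
    """Map each top-level link's rel attribute to its href."""
    root = ET.fromstring(xml_text)
    return {link.get("rel"): link.get("href")
            for link in root.findall(ATOM + "link")}
```

A paging client would then GET whatever links_by_rel(...)["next"] points at, never assembling a ?page=N URI itself.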
I don't know, but it seems to me that the discussion has gone into HATEOAS instead of talking about the issues Nick has raised (seeing Nick's points, I think he understands HATEOAS pretty well) -- I will try and address them in what little way I can, and hopefully it will also bring back the discussion to the issues Nick raised (which I think are really interesting). 2009/2/2 Nick Gall <nick.gall@...>: > On Mon, Feb 2, 2009 at 8:42 AM, Aristotle Pagaltzis <pagaltzis@...> > wrote: >> If anything is important in that sense, then it's the design of >> your resource representations: where do hyperlinks go? What does >> a particular form of link mean? Ie. what does it imply about the >> operations you can expect to be able to perform on the target of >> the link? These are the things that a client has to know in order >> to operate a REST service. >> >> The structure of URIs, in contrast, is a server implementation >> detail that the client neither need nor should care about. > > (I know I'm going to regret asking this, but it's been bothering me for so > long that I have to ask it.) > > I think the concept of (promiscuous) bookmarking means that the client > effectively does care about (or at least becomes dependent upon) the structure of > URIs. Why? The client becomes dependent on the URI ... but not the structure ... I bookmark the URI; I don't assume a template from it. The only case of promiscuous bookmarking of a template I can think of is the use of search engines in browsers. > What's been bothering me for a long time is that I think there is a > fundamental tension between REST's concept of "bookmarks as limited set of > entry points" and the Web's concept of "URIs for everything". > When I read some of the posts in this list, including those by Roy Fielding > himself, I sometimes get the sense that "entry point" URIs should be kept to > a bare minimum in order to minimize the "dependency surface area" between > clients and servers. 
For example, Roy made this comment about bookmarks: > "REST is limited to the client being told what to do next by the current > state of where they are now, aside from the entry point(s) we call a > bookmark." My view is that bookmarking templates is fine as long as it is limited to a few important starting points (like a search engine). > In other words, bookmarks are an aside -- only a relative handful of URIs > should be "bookmarked" by clients. But isn't one of the core principles of And this handful could contain a template too ... Bottom line: bookmarking should be limited - whether that's the template or the URI itself. > the Web the idea that any URI should be bookmarkable and that bookmarking is > to be encouraged? And remember, bookmarking doesn't just mean putting the > URI into a list of favorites in a browser. It also means the client > embedding that URI into a representation that it passes along to some other > client. Sometimes the REST talk of URI "entry points" seems like an implicit > rejection of deep linking, ie forbidding linking to any site page other than > a site's main or home page. And a rejection of deep linking is fundamentally > at odds with the core principles of the Web. Yeah .. I have noticed this too .. I don't have an answer to this -- it does look like it is against the concept of deep linking. I think (in a wild guess) that you can deep link inside a website, to the start of an application/transaction - but not deep link somewhere inside it (as I said, this is very crude and I don't know the answer). > If promiscuous bookmarking is indeed encouraged by the Web (and by REST), > then a well designed system must assume that any URI that appears in any > representation it returns could someday be used as an "entry point". In > which case the system must assume that the world of clients is potentially > dependent on the entire structure of its initial "network" of URIs -- not > just a handful of designated "entry point" URIs. 
In other words the ratio of > bookmarked (entry point) URIs to all the URIs returned in representations > isn't extremely small, it's potentially one-to-one. > To put it another way, what's the difference in the degree or nature of the > dependency, from the server's POV, between: > > 1. The entire set of URIs it has ever returned in representations being > bookmarked by clients and then used later as entry points; and > 2. Clients generating such entry-point URIs via URI templates and a scripting > language > > Conceptually the difference is that (2) can generate novel URIs that were > never returned in any representation. For example, a geospatial system could > have returned millions of lat/long URIs in various representations over the > years, but never returned one with the specific lat/long that a client > script generates, eg no one had ever asked about THAT part of the Pacific > yet. I think the server should encourage people to bookmark only a particular set of URIs. A (not good) example is how blogs require you to permalink instead of the page you are viewing. If the user doesn't follow what the server asked it to do, then he/she would end up in a mess (which won't be the server's fault). > But pragmatically there seems to me to be no difference at all. 
FWIW, my views are as follows: First, I know of nothing "un-REST-ful" about bookmarking. I see no reasoning that "deep-linking" is "un-REST-ful", or evidence that it is discouraged by the REST style. In fact, the one use of the term in Chapter 5 of Fielding's dissertation speaks rather positively of the use of bookmarks: "The application state is controlled and stored by the user agent and can be composed of representations from multiple servers. In addition to freeing the server from the scalability problems of storing state, this allows the user to directly manipulate the state (e.g., a Web browser's history), anticipate changes to that state (e.g., link maps and prefetching of representations), and jump from one application to another (e.g., bookmarks and URI-entry dialogs)."[1] Second, IMO, bookmarks are orthogonal to URI design. Once an item is bookmarked, the user agent need not care about the actual composition of the URI - as long as it continues to resolve properly. Third, URI templating is also orthogonal to URI design. Sure, a sane design makes automating the generation of URIs easier to deal with, but that's as far as I see it going. A server-side URI generator that produces random numbers is no better/worse than one that goes to great pains to build a more complex URI that contains separators and the like - as long as it continues to resolve properly. Fourth, resolving URIs properly is the work of the server and, while "Cool URIs don't change" [2], implementing a design that *requires* URIs never change is a (possibly fatal) self-inflicted wound. If the server finds a need to start generating different URIs for existing resources, it is the server's responsibility to deal with the 'old' URIs that might still be out in the wild. URI rewriters are a wonderful thing. Also, there are many cases where bookmarked URIs should return 410 Gone. This is not a bad thing and should not be discouraged. 
That said, things can be thought of differently if you assume the *user agent* is in charge of building the URIs in order to advance the state of the application. In that case, the server must send to the user agent all the necessary information to allow the user agent to construct the proper URIs for use. This might be in the form of a script, a set of templates and rules, or, if the user agent doing the URI constructing is a human, it might take the form of an 'easily grok-able' pattern in the existing URIs that make things more 'hackable.' In that case it seems desirable to have a URI design that 'makes sense' (to humans, usually) in order to ease the construction. This desirability, however, has nothing to do with REST. Finally, IMO, the more one relies on the user agent to construct URIs, the less REST-ful the application. This is especially true of the emerging set of 'Data API' implementations. And that is the point of the "limited entry URIs" notion. If you want your data to be useful to a wide range of non-human user agents, don't require these non-human user agents to have a great deal of fore-knowledge about the URIs in your application. Instead, *tell* the user agent what those possible URIs are by sending links with every response. 
mca http://amundsen.com/blog/ [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3 [2] http://www.w3.org/Provider/Style/URI On Thu, Feb 5, 2009 at 12:26, Devdatta <dev.akhawe@...> wrote: > <snip>
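A rough sketch of that last suggestion: the server *tells* the user agent what the possible URIs are by embedding links in every representation. The resource names and URI shapes below are invented purely for illustration:

```python
def represent_account(account_id, balance):
    """Build a representation that carries its own navigation links."""
    base = "http://bank.example.org"  # hypothetical host
    return {
        "balance": balance,
        "links": [
            {"rel": "self", "href": f"{base}/accounts/{account_id}"},
            {"rel": "deposits", "href": f"{base}/accounts/{account_id}/deposits"},
            {"rel": "owner", "href": f"{base}/accounts/{account_id}/owner"},
        ],
    }
```

Because every response carries the follow-up links, a non-human user agent needs fore-knowledge only of the relation names, not of any URI layout, and the server remains free to rewrite its URIs at any time.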
> Second, IMO, bookmarks are orthogonal to URI design. Once an item is > bookmarked the user agent need not care about the actual composition > of the URI - as long as it continues to resolve properly. +1. It is about being able to safely replay a given GET. > Third, URI templating is also orthogonal to URI design. Sure, a sane > design makes automating the generation of URIs easier to deal with, > but that's as far as I see it going. A server-side URI generator that > produces random numbers is no better/worse than one that goes to great > pains to build a more complex URI that contains separators and the > like - as long as it continues to resolve properly. Right. Bookmarking applies to URIs and not templates. Bookmarkable URIs include root-level URIs applications may publish, and URIs found in links and Location headers. In a pure app-to-app scenario, these are the URIs that client apps "remember" so that they can get back to a given application state at a later time. > Fourth, resolving URIs properly is the work of the server and, while > "Cool URIs don't change" [2], implementing a design that *requires* > URIs never change is a (possibly fatal) self-inflicted wound. If the > server finds a need to start generating different URIs for existing > resources, it is the server's responsibility to deal with the 'old' > URIs that might still be out in the wild. URI rewriters are a > wonderful thing. Also, there are many cases where bookmarked URIs > should return 410 Gone. This is not a bad thing and should not be > discouraged. Well said. Subbu --- http://subbu.org
mike amundsen wrote: > <snip> > Generally I am finding that intermediaries tend to be the tie-breaker > in many design choices which are orthogonal to REST. > </snip> > > +1 +1 Bill
Assaf Arkin wrote: > On the other hand, in my experience it's easier to build code around a > fixed URL structure, and if you only imagine having one (or few) clients > to a single service, hypermedia might be an overkill. Also, hypertext for the simple case tends to need two calls: one, to the bootstrap document to find the link, and two, to the actual link you care about. Whereas a fixed URL scheme on the client means one call. Bill
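Bill's two-call cost can be shown in a few lines. `fetch` below is a stand-in for an HTTP GET against an in-memory table; the documents and URIs are made up:

```python
# Fake "server": an entry-point document that links to an orders resource.
DOCS = {
    "http://api.example.org/": {
        "links": {"orders": "http://api.example.org/orders"}},
    "http://api.example.org/orders": {"orders": [1, 2, 3]},
}

calls = []

def fetch(uri):
    """Stand-in for an HTTP GET; records each request made."""
    calls.append(uri)
    return DOCS[uri]

# Hypermedia style: discover the orders URI from the entry point (2 calls).
entry = fetch("http://api.example.org/")
orders = fetch(entry["links"]["orders"])

# Fixed-URL style would be fetch("http://api.example.org/orders") -- one
# call, at the cost of baking the URI structure into every client.
```

Caching the bootstrap document amortizes the extra request across many calls, which is one common answer to this overhead.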
On Feb 5, 2009, at 5:45 PM, Bill de hOra wrote: > Assaf Arkin wrote: > >> On the other hand, in my experience it's easier to build code >> around a >> fixed URL structure, and if you only imagine having one (or few) >> clients >> to a single service, hypermedia might be an overkill. > > Also, hypertext for simple case tends to need two calls, one the > bootstrap document to find the link, and two, to the actual link you > care about. Whereas a fixed url scheme on the client means one call. Yes, a RESTful system is at least one level of indirection away from a strongly coupled system. A fixed URL scheme is essentially the same as baking the first representation into each client. Is that surprising? Likewise, a RESTful system is always going to be less efficient than a system designed for a static set of clients accessing a small set of services that never change. A lot of people think of systems as static things. Dead things. REST is not going to appeal to those people. All of its constraints are designed to keep systems living longer than we are willing or able to anticipate. ....Roy
Yes. You are missing the whole of the Linked Data movement, which is all about that. http://linkeddata.org/ http://esw.w3.org/topic/LinkedData One of the pieces that is best known is FOAF, the friend-of-a-friend ontology. This is all about RESTful hypermedia. Henry On 5 Feb 2009, at 17:46, Cameron, Scott wrote: > <snip>
On Fri, Feb 6, 2009 at 12:54 AM, Roy T. Fielding <fielding@...> wrote: > <snip> Roy, thanks for responding. I completely agree that the way REST dynamically provides the next relevant set of URIs via HATEOAS is highly dynamic, and dynamic systems live longer than non-dynamic ones. But you didn't address the (apparent) tension between bookmarking/deep-linking and REST, which was my original question. Bookmarking takes a dynamically generated bookmark and effectively makes it static -- at least it's static (persisted) on the client side. As more and more clients bookmark more and more of the URIs they receive, the more they are incrementally "fixing" the original URI space. Hence my point: encouraging promiscuous bookmarking of (deep) links seems to be at odds with REST's desire to minimize "entry point" URIs. 
Do you see the tension as well, or am I missing something that resolves the tension? Devdatta makes the excellent point earlier in the thread about "how blogs require you to permalink instead of the page you are viewing." It seems that perhaps there is an implicit REST constraint that is beginning to become more explicit. Roughly, REST distinguishes two types of URIs: 1. "entry point" type URIs, which may be bookmarked indefinitely. These are Cool URIs. 2. "transitional" type URIs, which may not be bookmarked indefinitely. These are unCool URIs. I call them "transitional" given that their role is typically to enable transition to the next state. I'm not wedded to the names (you could call them internal/external), but I do think this distinction between types of URIs is an important aspect of REST that, so far, has not been clearly outlined. It certainly seems to be in play in the permathread debates regarding whether "URIs should be RESTful or not" (and whether that designation is even a meaningful one). I also think the distinction is a bit at odds with the common understanding of URIs on the Web that ANY URI should be a bookmarkable URI, ie that ALL URIs should strive to be Cool. I think it's a pretty major change in Web architectural thinking (or at least emphasis) to now say (effectively) that a significant class of URIs should NOT be cool, ie one should NOT expect them to be usable indefinitely. In a way, this harks all the way back to Parnas's admonition that we hide information that is likely to change. Using modern web-speak, REST seems to admonish us to "hide" the URIs that are likely to change (unCool URIs) and "expose" only those URIs that are likely to stay the same (Cool URIs). But nothing I've seen in the descriptions of Web Architecture (eg AWWWv1) remotely suggests that some URIs should effectively be hidden, or per my previous point, not bookmarked. 
If anything, Web Architecture descriptions at least imply that all URIs SHOULD be "exposed" and bookmarkable. Any light you can shed on this issue would be sincerely appreciated. -- Nick
www.blog.com www.blog.com/post/1234 Usually the second would be a permalink - to a particular post. But the first can be bookmarked too - the resource in question is the latest blog post (which is not what the user wants - that's why they click on the permalink). As I warned earlier: my examples were a little contrived! :) Now I agree, there is, I think, a difference between bookmarkable URIs and non-bookmarkable URIs - as Mike noted from the REST bible: "The application state is controlled and stored by the user agent and can be composed of representations from multiple servers. In addition to freeing the server from the scalability problems of storing state, this allows the user to directly manipulate the state (e.g., a Web browser's history), anticipate changes to that state (e.g., link maps and prefetching of representations), and jump from one application to another (e.g., bookmarks and URI-entry dialogs)." So you should bookmark to jump from one application to another, but not to land up somewhere in a transient state of an application via a bookmark. > I think it's a pretty major change in Web architectural thinking (or at > least emphasis) to now say (effectively) that a significant class of URIs > should NOT be cool, ie one should NOT expect them to be useable > indefinitely. This is an inference that I don't like, and I think it is wrong. Let's say: hugeretailer.com/creditcardtransaction/enterdetails is not something bookmarkable, as it is an intermediate step through the application (which is "credit card transaction"). But if someone does bookmark it and go there directly, then the server shouldn't return a 404 but return something stating that you haven't followed the right path. I think I am arguing for both sides of the debate - but that's mostly because I am really confused and not sure what is correct ... so forgive me! Cheers Devdatta
1. Bookmarkability is one of the strengths of REST and the Web. 2. I don't see how you can prevent people from bookmarking anything they want to bookmark. 3. Sometimes serendipitous bookmarks are useful, as with searches and complex queries. 4. The server needs to act defensively if people bookmark intermediate states that will lead to trouble. E.g. redirect to somewhere safe like a starting point.
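Point 4 might look something like this on the server; the paths and the session check below are invented for illustration, not taken from any real application:

```python
# Intermediate steps of a transaction that make no sense out of sequence.
INTERMEDIATE = {"/creditcardtransaction/enterdetails"}

def handle(path, session):
    """Redirect bookmarked mid-transaction requests to a safe start."""
    if path in INTERMEDIATE and "transaction_id" not in session:
        # No transaction in progress: send the client to the starting
        # point instead of failing with a confusing error.
        return 303, {"Location": "/creditcardtransaction/start"}
    return 200, {}
```

A client that bookmarked the intermediate step is thus guided back onto the intended path rather than being left with a 404.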
I think Josh is in the right direction in terms of resolving the tension
between deep bookmarking and HATEOAS.
Any set of bookmarks from a website may partially break at any point as the
server is not committed to maintaining those URIs. As I understand it, this
is the loose-coupling property of REST. If I recall correctly, tolerance for
broken links was one of the major insights of the Web.
Where HATEOAS comes in, is that it allows a client (machine or human) to
rediscover a resource, the way it was found initially. However the concept
of 'entry' URIs seems suspicious. Any resource that links to other resources
may serve as an 'entry point' in the rediscovery of the URIs for the other
resources. The way I see it, HATEOAS provides a mechanism for healing a set
of bookmarked URIs in case of partial, even significant, breakage. However,
the entry point may be any of the 'surviving' URIs, not a specific,
pre-ordained one.
Where Nick may be coming from is that in practice, if the API of a service
changes completely, this would mean that the resources would have to be
rediscovered from the 'top'. This can only happen if the entire network of
resources is under the control of a single entity and can therefore change
instantly, and also is isolated from the rest of the Web. If part of a
multi-owner network of resources 'breaks', as the resources are rediscovered
and relinked through repair of the broken links from other domains, it
becomes easier for the rest of the network to rediscover the new URIs,
through the repaired links. So while there is a situation where an entire
owner-domain of resources disappears and can only be rediscovered from the
'main' URI, it is a limited case. What it implies for the responsibilities
of a server, I am not sure.
Alexandros Marinos
On Fri, Feb 6, 2009 at 2:10 PM, Josh Sled <jsled@...> wrote:
> Nick Gall <nick.gall@...> writes:
> > Hence my point: encouraging promiscuous bookmarking of (deep) links seems
> to be at odds with REST's desire to minimize "entry point" URIs. Do you see
> the tension as well, or am I missing something
> > that resolves the tension?
>
> 404 and redirection responses, combined with further use of hypermedia,
> help resolve the tension, I think. As well, conditional-GET can
> optimize some of those initial requests; if the client can simply verify
> the root/"service-locator" document is the same as before, it can
> continue to use the service as it knows it existed before (at least at
> that level).
>
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
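Josh's conditional-GET suggestion might look like this in a client. This is an editor's sketch; `refreshServiceMap`, the cache shape, and the `httpGet` transport are hypothetical.

```javascript
// Sketch of revalidating the cached root "service locator" document before
// trusting any deep links derived from it. `httpGet` is a hypothetical
// transport returning { status, etag, body }.

function refreshServiceMap(cache, httpGet) {
  var res = httpGet(cache.rootUri, { "If-None-Match": cache.etag });
  if (res.status === 304) {
    return cache;                       // unchanged: prior links still valid
  }
  return {                              // changed: rebuild from the new links
    rootUri: cache.rootUri,
    etag: res.etag,
    links: res.body.links
  };
}
```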
Nick Gall wrote:
> But you didn't address the (apparent) tension between
> bookmarking/deep-linking and REST,

"Bookmarking" means keeping a place. "Deep-linking" is a legal concept ruled non-existent in many jurisdictions, held to exist in some contexts in others (there are limits on how far a company can deep-link into a site owned by a competitor in Denmark), and not yet decided upon in the rest. Personally, the concept makes me want to side with Shakespeare's Dick the Butcher on the matter of lawyers, and how best to treat them.

> which was my original question.
> Bookmarking takes a dynamically generated bookmark and effectively makes
> it static -- at least it's static (persisted) on the client side. As
> more and more clients bookmark more and more of the URIs they receive,
> the more they are incrementally "fixing" the original URI space.
>
> Hence my point: encouraging promiscuous bookmarking of (deep) links
> seems to be at odds with REST's desire to minimize "entry point" URIs.
> Do you see the tension as well, or am I missing something that resolves
> the tension?

On a subsequent operation, a bookmarked link may produce the following responses, in order of desirability (eliding issues such as authentication requests for the sake of simplicity):

1. 2xx because it has worked, or a 3xx other than 301 because it has worked indirectly, by directing one elsewhere to fulfil or complete the request.
2. 301 because it has been replaced, and the URI given should be used from now on (update the bookmark).
3. 410 because it's gone, and the bookmark should be removed.
4. 5xx because nobody's perfect.
5. 404 -- maybe it's gone, maybe things are temporarily awry, we don't know.
6. 2xx or 3xx because it has worked, but alas "worked" now means something completely different to what it meant before.

Of all of these possible responses, only the last may be a disaster, and then only if the client cannot identify that the "success" was not what was desired.
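Jon's ordered list of outcomes maps naturally onto a small client-side dispatch. A hypothetical sketch (the action names are illustrative, not from the thread):

```javascript
// Sketch of a client deciding what to do with a bookmarked URI based on the
// response status, following the ordering in the post above.

function bookmarkAction(status) {
  if (status >= 200 && status < 300) return "use";             // it worked
  if (status === 301) return "update-bookmark";                // replaced: use new URI
  if (status >= 300 && status < 400) return "follow";          // worked indirectly
  if (status === 410) return "delete-bookmark";                // gone for good
  if (status === 404) return "retry-later";                    // maybe gone, maybe not
  if (status >= 500) return "retry-later";                     // nobody's perfect
  return "rediscover";                                         // fall back to hypermedia
}
```

The one case code cannot catch is the last in Jon's list: a 2xx whose meaning has silently changed, which is why only that case may be a disaster.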
The failure of a bookmark to work as we intend means that we must repeat the operation that gave us that bookmark to begin with. As long as we are capable of doing so, and as long as we have no false successes from resources being replaced by new and incompatible ones we can't recognise as such, then bookmarking poses no difficulty. As long as the instances of bookmarks failing to persist in this manner are fewer than the instances of them working, bookmarking makes gains in efficiency that make it worthwhile.

> Devdatta makes the excellent point earlier in the thread about "how
> blogs require you to permalink instead of the page you are viewing." It
> seems that perhaps there is an implicit REST constraint that is
> beginning to become more explicit. Roughly, REST distinguishes two types
> of URIs:
>
> 1. "entry point" type URIs, which may be bookmarked indefinitely.
>    These are Cool URIs.
> 2. "transitional" type URIs, which may not be bookmarked
>    indefinitely. These are unCool URIs. I call them "transitional"
>    given that their role is typically to enable transition to the
>    next state.

Actually, the entry points most often used for blogs are not those designated "permalinks". The difference is not between permanence and transience so much as between one URI that permanently means "page X of my posts in descending order of date of posting" and one that permanently means "what I posted about X on date Y". If you want to maintain a record of how to obtain the former, then that is the one you should bookmark, even though it isn't the one with "perma" in its name.

Bookmarking is fine, as long as you handle bookmarks breaking. Breaking bookmarks is fine, as long as you make it clear you broke them.
Hi,

If I am not mistaken, Jon, Josh and Alexandros are talking about how to resolve the tension that Nick raises. This assumes that a tension exists and that the way to resolve it is, say, via appropriate status codes in responses. But I am not sure whether the tension actually exists -- so are the people above actually agreeing that this tension does exist? (I agree with your steps on solving the problem.)

Anyone dissenting?

Cheers
Devdatta
Devdatta wrote:
> But I am not sure whether the tension actually exists -- so
> are the people above actually agreeing that this tension does exist?
> (I agree with your steps on solving the problem)

No tension to my mind. Bookmark, try the bookmark, and if it fails go back and repeat the HATEOAS means by which you got the bookmark.
On Feb 6, 2009, at 4:03 AM, Nick Gall wrote:
> But you didn't address the (apparent) tension between bookmarking/
> deep-linking and REST, which was my original question. Bookmarking
> takes a dynamically generated bookmark and effectively makes it
> static -- at least it's static (persisted) on the client side. As
> more and more clients bookmark more and more of the URIs they
> receive, the more they are incrementally "fixing" the original URI
> space.
>
> Hence my point: encouraging promiscuous bookmarking of (deep) links
> seems to be at odds with REST's desire to minimize "entry point"
> URIs. Do you see the tension as well, or am I missing something
> that resolves the tension?
Why do you think that there is a desire in REST to minimize the
number of entry points?
> Devdatta makes the excellent point earlier in the thread about "how
> blogs require you to permalink instead of the page you are
> viewing." It seems that perhaps there is an implicit REST
> constraint that is beginning to become more explicit. Roughly, REST
> distinguishes two types of URIs:
>
> "entry point" type URIs, which may be bookmarked indefinitely.
> These are Cool URIs.
> "transitional" type URIs, which may not be bookmarked indefinitely.
> These are unCool URIs. I call them "transitional" given that their
> role is typically to enable transition to the next state.
No. There are some crappy web applications that expose what you would
call a transitional (partial web state) URI to the client because
they are badly written to expose an old terminal screen interface
as intermediate HTML forms, but I would not call such a system RESTful.
Those URIs are not identifying resources.
Permalinks provide a URI to the article resource that is independent
of its current representation (usually, the representation of some feed,
which is not the same resource and hence requires the provision of a
permalink URI for people to bookmark). It is just hypertext and has
the same role in REST as the list of summaries that Google returns
in response to a search.
> I'm not wedded to the names (you could call them internal/
> external), but I do think this distinction between types of URIs is
> an important aspect of REST that, so far, has not been clearly
> outlined. It certainly seems to be in play in the permathread
> debates regarding whether "URIs should be RESTful or not" (and
> whether that designation is even a meaningful one). I also think
> the distinction is a bit at odds with the common understanding of
> URIs on the Web that ANY URI should be a bookmarkable URI, ie that
> ALL URIs should strive to be Cool.
I won't repeat my opinion about opaque URIs. If there is a
tension between the desire to bookmark and the fact that REST
encourages folks to break up an application into a state
machine of reusable resource states, then I would consider it to be
more like sexual tension. Just because you have it doesn't mean
it is bad, and one way to improve things is to make the more
important resource links look sexier than the less important ones.
http://www.bootstrap.org/augdocs/augment-132082.htm#11J
> I think it's a pretty major change in Web architectural thinking
> (or at least emphasis) to now say (effectively) that a significant
> class of URIs should NOT be cool, ie one should NOT expect them to
> be useable indefinitely. In a way, this harks all the way back to
> Parnas's admonition that we hide information that is likely to
> change. Using modern web-speak, REST seems to admonish us to "hide"
> the URIs that are likely to change (unCool URIs) and "expose" only
> those URIs that are likely to stay the same (Cool URIs). But
> nothing I've seen in the descriptions of Web Architecture (eg
> AWWWv1) remotely suggests that some URIs should effectively be
> hidden, or per my previous point, not bookmarked. If anything, Web
> Architecture descriptions at least imply that all URIs SHOULD be
> "exposed" and bookmarkable.
"All important resources should be identifiable by URI."
I think you should look at each of those words in turn and
consider why they were chosen. That particular quote is from
http://www.w3.org/2001/tag/2002/01-uriMediaType-9
and
http://www.w3.org/2002/04/22-tag-summary
but it was also in the first drafts of the TAG's webarch. That
principle was not new -- I remember TimBL mentioning it during his
keynote in Geneva, May 1994, and it dates from Engelbart's work:
http://www.bootstrap.org/augdocs/augment-132082.htm#11K
which in turn influenced my design when HTTP/1.0 needed finishing.
By definition, working on improving the Web Project meant increasing
the number of Web-accessible resources.
Here is a more recent variation on the same theme that I just ran
across while doing a search:
http://derivadow.com/2007/12/28/web-design-20-its-all-about-the-resource-and-its-url/
Another thing that might be worth keeping in mind is that REST is
designed for reuse, not just use. The notion that anyone has control
over a successful application's reuse is pure fantasy, as described in
<http://www.w3.org/mid/E6416F61-E40C-4DE6-8B7A-D8A94EE8537B@...>
....Roy
Hi,

When the server determines that it cannot return a requested resource (4xx/5xx), is it OK/sufficient with regard to RESTful webapps to just send the status code and some appropriate headers (i.e. no content) and rely on the client to interpret it?

thanks,
-Rob
The pattern I use to report any 4xx or greater is as follows:

- set the status code
- set the status message with either a custom message (if available) or the default text associated with the status code
- return a body in the requested/default representation (use the "Accept" header to determine the Internet media type to use)

For example:

REQUEST:
**************
GET /sds-proxy/mamund/ HTTP/1.1
Host: amundsen.com
Accept: application/x-ssds+xml
Authorization: Basic Og==

RESPONSE:
**************
HTTP/1.1 401 Unauthorized
Content-Type: application/x-ssds+xml; charset=utf-8
Content-Length: 628

<s:Error xmlns:s='http://schemas.microsoft.com/sitka/2008/03/'>
  <s:Code>401</s:Code>
  <s:Message>Invalid credentials. Please try again</s:Message>
  <s:Link rel="re-try">http://amundsen.com/sds-proxy/mamund/</s:Link>
  <s:Link rel="login">http://amundsen.com/sds-proxy/login</s:Link>
  <s:Link rel="help">http://amundsen.com/sds-proxy/help/</s:Link>
</s:Error>

I leave it up to the user agent to decide how to handle the details. For example, the user agent can:

- use the body to render a friendly UI and prompt the user for details
- display a detailed error message using the data returned in the headers
- echo the HTTP status code and stop

The key, IMO, is to return the appropriate code and message *plus* a helpful body in the appropriate content type.

mca
http://amundsen.com/blog/

On Fri, Feb 6, 2009 at 19:06, Robert Koberg <rob@...> wrote:
> Hi,
>
> When the server determines that it cannot return a requested resource
> (4xx/5xx), is it OK/sufficient with regard to RESTful webapps to just
> send the status code, some appropriate headers (i.e. no content) and
> rely on the client to interpret it?
>
> thanks,
> -Rob
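The same pattern (status code plus a body carrying actionable links) can be sketched in JSON rather than the XML shown above. This is an editor's illustration; `buildErrorResponse` and its field names are assumptions, not part of mike's actual service.

```javascript
// Sketch of the error-reporting pattern: set the status, set a message, and
// return a body whose links tell the client what it can do next. JSON is used
// here for brevity; the original example used an XML media type.

function buildErrorResponse(status, message, links) {
  var body = { code: status, message: message, links: links };
  return {
    status: status,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body)   // the helpful body, not just a bare code
  };
}
```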
Roy T. Fielding wrote:
> On Feb 5, 2009, at 5:45 PM, Bill de hOra wrote:
> > Also, hypertext for the simple case tends to need two calls: one to the
> > bootstrap document to find the link, and two, to the actual link you
> > care about. Whereas a fixed url scheme on the client means one call.
>
> Yes, a RESTful system is at least one level of indirection away
> from a strongly coupled system. A fixed URL scheme is essentially
> the same as baking the first representation into each client.

Right. I think some people, when thinking about bootstrap problems (which is what we're talking about here), end up in logical knots and fallacies, or worse, invent pointless discovery technologies to solve a non-problem. The first link is always out of band; get over it.

> Is that surprising?

Obviously not, but it seemed fair to point it out.

Bill
One of the things I often confront is the discomfort folks experience when they identify the "inefficiency of abstraction" that results from the loose coupling in the REST style. I usually address this discomfort the same way I address the "premature optimization" issue.

mca
http://amundsen.com/blog/

On Sat, Feb 7, 2009 at 07:54, Bill de hOra <bill@...> wrote:
> Roy T. Fielding wrote:
>> Yes, a RESTful system is at least one level of indirection away
>> from a strongly coupled system. A fixed URL scheme is essentially
>> the same as baking the first representation into each client.
>
> Right. I think some people, when thinking about bootstrap problems
> (which is what we're talking about here), end up in logical knots and
> fallacies, or worse, invent pointless discovery technologies to solve a
> non-problem. The first link is always out of band, get over it.
> [...]
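The "one extra call" cost Bill mentions can be made concrete with a pair of hypothetical clients. The URIs, `fetchJson`, and the `links` field are all illustrative, not from the thread.

```javascript
// Sketch contrasting the two approaches: the fixed-URI client bakes the URI
// structure in (one call, tight coupling); the hypermedia client pays one
// extra request to a bootstrap document but survives URI reorganisation.
// `fetchJson` stands in for an HTTP GET returning a parsed body.

function getAccountFixed(fetchJson, id) {
  return fetchJson("/rest/data/bank/accounts/" + id);  // coupled to URI layout
}

function getAccountHypermedia(fetchJson, id) {
  var root = fetchJson("/");                           // call 1: bootstrap document
  return fetchJson(root.links["accounts"] + id);       // call 2: the followed link
}
```

Only the first link ("/") is out of band for the hypermedia client; everything else is discovered.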
So, it is not a requirement to return content in the response, right? In other words, if my framework/app were to go live and I claimed it was RESTful, I have a guarantee I wouldn't get flamed for this particular aspect :)

Basically, I would expect the client to have the relevant links present/displayed and display a message (and possibly additional relevant links) based on the status code.

best,
-Rob

On Feb 6, 2009, at 8:35 PM, mike amundsen wrote:
> The pattern I use is to report any 4xx or greater as follows:
>
> - set status code
> - set status message with either a custom message (if available) or
> the default text associated w/ the status code
> - return a body in the requested/default representation (use the
> "Accept" header to determine the Internet media-type to use)
> [...]
>
> The key, IMO, is to both return the appropriate code and message
> *plus* a helpful body in the appropriate content-type.
I find nothing in the REST style that addresses status codes. However, there are some detailed rules in the HTTP 1.1 docs (RFC 2616) addressing the use of the message body:

Method Definitions: http://tools.ietf.org/html/rfc2616#section-9
Status Code Definitions: http://tools.ietf.org/html/rfc2616#section-10

mca
http://amundsen.com/blog/

On Sat, Feb 7, 2009 at 16:43, Robert Koberg <rob@...> wrote:
> So, it is not a requirement to return content in the response, right?
> In other words, if my framework/app were to go live and I claimed it
> was RESTful, I have a guarantee I wouldn't get flamed for this
> particular aspect :)
>
> Basically, I would expect the client to have the relevant links
> present/displayed and display a message (and possibly additional
> relevant links) based on the status code.
> [...]
Robert Koberg wrote:
> So, it is not a requirement to return content in the response, right?
> In other words, if my framework/app were to go live and I claimed it
> was RESTful, I have a guarantee I wouldn't get flamed for this
> particular aspect :)

It's not a matter of it being RESTful or not, it's a matter of it fulfilling the HTTP spec or not. The HTTP spec says that you SHOULD include an entity body, in the RFC 2119 sense of "SHOULD": "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course."

So really, best to send the proper entities.
Don't send some ridiculous XML format. Send HTML or nothing.

It's amusing that XML somehow claimed the throne of generalization by standardizing tokenization. XML is a shitty error format unless you provide an XSLT to transform it to HTML.

- Rob

On Feb 10, 2009 12:59 PM, "Jon Hanna" <jon@...> wrote:
> It's not a matter of it being RESTful or not, it's a matter of it
> fulfilling the HTTP spec or not.
> [...]
> So really, best to send the proper entities.
+1

Error and redirect responses should preferably be in HTML.

Subbu

On Feb 10, 2009, at 7:28 PM, Robert Sayre wrote:
> Don't send some ridiculous XML format. Send HTML or nothing.
>
> It's amusing that XML somehow claimed the throne of generalization by
> standardizing tokenization. XML is a shitty error format unless you
> provide an XSLT to transform it to HTML.
> [...]

---
http://subbu.org
How about text/plain? I've found many instances in which I need a readable error message whether I am using a browser, Firebug, curl, etc. Since I do not expect the need for links, I have formatted (i.e., with line breaks) error messages that look OK in all of those. I had run into too many cases in which the HTML tags just obscured things (in non-browser situations).

--peter keane

On Tue, Feb 10, 2009 at 10:10 PM, Subbu Allamaraju <subbu@...> wrote:
> +1
>
> Error and redirect responses should preferably be in HTML.
> [...]
What if your target client is a JavaScript interpreter? Also, why send
the response content if the status code is well known?
Here is what I am currently doing:

// inside an error callback
var infoTxt = "";
var anim;
if (status === 401) {
    var loginTriesLeft = req.getResponseHeader("LoginTriesLeft");
    if (loginTriesLeft) {
        infoTxt = '<p style="color:red">Either your email address or password was incorrect. You have ' + loginTriesLeft + ' tries left.</p>';
    } else {
        infoTxt = '<p>Please login.</p>';
    }
    FSR.loadComponent(Y, "view/login/", mainNode);
} else if (status === 403) {
    infoTxt = '<p style="color:red">You are not authenticated or authorized to access the requested resource.</p>';
    anim = new Y.Anim({
        node: mainNode,
        to: { opacity: 0 }
    });
    anim.run();
} else if (status === 404) {
    infoTxt = '<p style="color:red">The requested resource (' + reqObj.uri + ') was not found.</p>';
} else {
    var json = eval('(' + req.responseText + ')');
    infoTxt = '<div style="color:red"><p>An error occurred:</p><p>' + json.msg + '</p></div>';
}
best,
-Rob
On Feb 10, 2009, at 11:23 PM, Peter Keane wrote:
> How about text/plain? I've found many instances in which I need a
> readable error message whether I am using a browser, firebug, curl,
> etc. Since I do not expect the need for links, I have formatted (i.e.,
> w/ line breaks) error messages that looks OK in all of those. I had
> run into too many case in which the html tags just obscured things (in
> non-browser situations).
>
> --peter keane
>
> On Tue, Feb 10, 2009 at 10:10 PM, Subbu Allamaraju <subbu@...>
> wrote:
> > +1
> >
> > Error and redirect responses should preferably be in HTML.
> >
> > Subbu
> >
> > On Feb 10, 2009, at 7:28 PM, Robert Sayre wrote:
> >
> >> Don't send some ridiculous XML format. Send HTML or nothing.
> >>
> >> It's amusing that XML somehow claimed the throne of
> generalization by
> >> standardizing tokenization. XML is a shitty error format unless you
> >> provide
> >> an XSLT to transform it to HTML.
> >>
> >> - Rob
> >>
> >> On Feb 10, 2009 12:59 PM, "Jon Hanna" <jon@...> wrote:
> >>
> >> Robert Koberg wrote: > So, it is not a requirement to return
> >> content in
> >> the response, right? > In o...
> >> It's not a matter of it being RESTful or not, it's a matter of it
> >> fulfiling the HTTP spec or not.
> >>
> >> The HTTP spec says that you SHOULD include an entity body, in the
> RFC
> >> 2119 sense of "SHOULD":
> >>
> >> "there may exist valid reasons in particular circumstances to
> ignore a
> >> particular item, but the full implications must be understood and
> >> carefully weighed before choosing a different course."
> >>
> >> So really, best to send the proper entities.
> >>
> >>
> >
> > ---
> > http://subbu.org
> >
> >
>
>
On 11.02.2009, at 05:23, Peter Keane wrote:
> How about text/plain?

+1 -- my preferred format for anything that's unlikely to be viewed from a browser.

Stefan
Hi,

What method should be used if the request is for an email to be sent?

For example, you have a 'forgot password' view. The user enters their email address, submits the form, and an email is sent with instructions on how to reset their password.

It seems like this is a GET. Before REST, I would have probably used a POST.

What method should it be?

thanks,
-Rob
A GET needs to be safe and idempotent. If every time I dereference a URI it sends another email out to someone, it is neither safe nor idempotent, so I say you should use POST.

StanD.

Robert Koberg wrote:
> What method should be used if the request is for an email to be sent?
>
> For example, you have a 'forgot password' view. The user enters their
> email address, submits the form and an email is sent with instructions
> on how to reset their password.
>
> It seems like this is a GET. Before REST, I would have probably used a
> POST.
> [...]
Robert Koberg wrote:
> What method should be used if the request is for an email to be sent?
>
> For example, you have a 'forgot password' view. The user enters their
> email address, submits the form and an email is sent with instructions
> on how to reset their password.
>
> It seems like this is a GET. Before REST, I would have probably used a
> POST.

May I ask, why this seemed like a GET to you?
On Feb 12, 2009, at 8:32 PM, Jon Hanna wrote:
> May I ask, why this seemed like a GET to you?

(I keep forgetting to hit reply all.)

Because I am dense :) I understand now that sending the email is a side effect outside of the request/response cycle and so not safe.

I was thinking that since the actual sending of the email was not the responsibility of the app server(s), the request was safe and idempotent. It falls to the mail server to handle it, so the app server can wash its hands of the situation.

But, say a GET request is cached (on the originating server or at any hop along the way). Is that a side effect? If not, why not?

-Rob
I guess I am still confused. Is the distinction between the GET and POST in this instance that the requested resource is viewed through a different client than the one requesting it? The user just clicks a button to get a view of a resource, but instead of returning to the browser the view is returned to an email client. Is that the distinction?

I accept that it should be a POST, but if side effects like caching and request logging are OK for GET, why not sending an email?

Apologies for my newbie ignorance,
-Rob

On Feb 12, 2009, at 8:55 PM, Robert Koberg wrote:
> Because I am dense :) I understand now that sending the email is a
> side effect outside of the request/response cycle and so not safe.
> [...]
> But, say a GET request is cached (on the originating server or at
> any hop along the way). Is that a side effect? If not, why not?
Check out this section of the spec for details on each method:

http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html

The first sentence in each section should give you a clear idea of when each method should be used.

mca
http://amundsen.com/blog/

On Thu, Feb 12, 2009 at 22:49, Robert Koberg <rob@...> wrote:
> I guess I am still confused. Is the distinction between the GET and
> POST in this instance that the requested resource is viewed through a
> different client than the one requesting it?
>
> I accept that it should be a POST, but, if side effects like caching
> and request logging are OK for GET, why not sending an email?
> [...]
Robert,
Simply consider this:
1. Sending the email is an expected and intended behaviour.
2. Responses to GET are cacheable.
3. Cached responses mean that the request never reaches the originating
server.
4. Ergo, there will be apparently successful responses that do not exhibit
the intended behaviour of sending an email.
Theoretically, POST responses are also cacheable and are therefore subject
to the same problem.
On the other hand, responses to PUT are not cacheable and therefore cannot be
intermediated, which sounds like what you want in order to ensure emails are
sent. A simple example might be (text/plain used for ease of reading):
-->
PUT /email/{unique-identifier}
Content-Type: text/plain
john.doe@...
<--
201 Created
-->
GET /email/{unique-identifier}
<--
200 OK
Content-Type: text/plain
Expires: {Now +1 year}
This email was sent to john.doe@... at {date}.
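In code, the idempotence of that PUT flow can be sketched like this (all names are hypothetical, a plain dict stands in for storage, and the actual SMTP hand-off is stubbed out):

```python
# Sketch of the PUT-based flow above: PUT to a client-chosen unique
# URI is idempotent, so a retried request cannot trigger a duplicate
# send. Names are illustrative, not from any real framework.

sent = {}  # token -> recipient address (stands in for a datastore)

def send_mail(address):
    """Stub: a real implementation would hand the message to an MTA."""
    pass

def put_email(token, address):
    """Handle PUT /email/{token}; returns an HTTP-like status code."""
    if token in sent:
        return 200  # already sent; replaying the PUT changes nothing
    send_mail(address)
    sent[token] = address
    return 201  # Created

def get_email(token):
    """Handle GET /email/{token}."""
    if token not in sent:
        return 404, None
    return 200, "This email was sent to %s." % sent[token]
```

The first PUT returns 201 and sends the mail; any replay of the same PUT returns 200 without a second send, which is exactly the property the trace above relies on.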
Regards,
Alan Dean
http://twitter.com/adean
On Fri, Feb 13, 2009 at 3:49 AM, Robert Koberg <rob@...> wrote:
> I guess I am still confused. Is the distinction between the GET and
> POST in this instance that requested resource is viewed through a
> different client than the one requesting it? The user just clicks a
> button to get a view of a resource, but instead of returning to the
> browser the view is returned to an email client. Is that the
> distinction?
>
> I accept that it should be a POST, but, if side effects like caching
> and request logging are OK for GET, why not sending an email?
>
> Apologies for my newbie ignorance,
> -Rob
>
>
> On Feb 12, 2009, at 8:55 PM, Robert Koberg wrote:
>
> >
> > On Feb 12, 2009, at 8:32 PM, Jon Hanna wrote:
> >
> >> Robert Koberg wrote:
> >> > Hi,
> >> >
> >> > What method should be used if the request is for an email to be
> >> sent?
> >> >
> >> > For example, you have a 'forgot password' view. The user enters
> >> their
> >> > email address, submits the form and an email is sent with
> >> instructions
> >> > on how to reset their password.
> >> >
> >> > It seems like this is a GET. Before REST, I would have probably
> >> used a
> >> > POST.
> >>
> >> May I ask, why this seemed like a GET to you?
> >>
> >
> > (I keep forgetting to hit reply all)
> >
> > Because I am dense :) I understand now that sending the email is a
> > side effect outside of the request/response cycle and so not safe.
> >
> > I was thinking that since the actual send of the email was not the
> > responsibility the app server(s) that it is a safe and idempotent.
> > It falls to the mail server to handle it so the app server can wash
> > its hands of the situation.
> >
> > But, say a GET request is cached (on the originating server or at
> > any hop along the way). Is that a side effect? If not, why not?
> >
> > -Rob
>
>
>
--
Regards,
Alan Dean
Sent from: Woking Surrey United Kingdom.
* Alan Dean <alan.dean@...> [2009-02-13 07:45]: > Theoretically, POST responses are also cacheable … … wiggy-wiggy-what!? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Robert Koberg <rob@...> [2009-02-13 02:55]: > Because I am dense :) I understand now that sending the email > is a side effect outside of the request/response cycle and so > not safe. > > I was thinking that since the actual send of the email was not > the responsibility the app server(s) that it is a safe and > idempotent. It falls to the mail server to handle it so the > app server can wash its hands of the situation. > > But, say a GET request is cached (on the originating server or > at any hop along the way). Is that a side effect? If not, why > not? That’s not the right way to think about it. Pretty much every web server keeps an access log, and will make multiple entries in it if you make multiple identical GET requests. This is a side effect outside the req/resp cycle, but that doesn’t make logging and GET requests antithetical. The question is who assumes responsibility for the side effect. GET requests are supposed to mean that the client is not asking for any side effects whatsoever and cannot be held liable for any such side effects that the server decides to perform. In case of keeping log files, the server doesn’t even *want* the client to be responsible, so keeping logs of GET accesses is perfectly fine. For an email being sent, OTOH, you want the client to bear full responsibility, and that means you want anything but GET. But if the server sent an email to the sysadmin every time another 10,000 lines of access log records pile up, that would fall under “safe”, since the server takes responsibility; it does not expect the client to do so. With that out of the way, we can get to the verbs: sending an email on the client’s responsibility is certainly not an idempotent side effect if it happens as many times as the request is repeated. Therefore as a first approximation you want POST.
However, you could do something like mint a password request token that can be used only once, in which case the password request action becomes idempotent, and so you could then use PUT. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
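A sketch of that single-use token idea (hypothetical names; a dict stands in for persistent storage): the consume step becomes idempotent because the token can only make the unused-to-used transition once, so a repeated request has no further effect.

```python
import secrets

# Sketch of a one-time password-reset token. Minting is the
# non-idempotent step; consuming is idempotent because a token can
# only go from "unused" to "used" once. Names are illustrative.

tokens = {}  # token -> "unused" or "used"

def mint_reset_token():
    """Mint a fresh single-use token (the POST-like step)."""
    token = secrets.token_hex(8)
    tokens[token] = "unused"
    return token

def consume_reset_token(token):
    """Idempotent (PUT-like) step: replays cannot reset twice."""
    if token not in tokens:
        return 404
    if tokens[token] == "unused":
        tokens[token] = "used"
        # a real service would perform the reset action exactly here
    return 200
```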
"Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource." [1] Typically, responses to POST are transient in nature and are not crafted to be cached. However, they are 'theoretically' cacheable (see above), whereas responses to PUT are specified as not cacheable. [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 Alan On Sat, Feb 14, 2009 at 7:17 AM, Aristotle Pagaltzis <pagaltzis@...> wrote: > > * Alan Dean <alan.dean@...> [2009-02-13 07:45]: > > > Theoretically, POST responses are also cacheable > > … > > … wiggy-wiggy-what!? > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/>
* Alan Dean <alan.dean@...> [2009-02-14 11:00]: > "Responses to this method are not cacheable, unless the > response includes appropriate Cache-Control or Expires header > fields. However, the 303 (See Other) response can be used to > direct the user agent to retrieve a cacheable resource." [1] > > Typically, responses to POST are transient in nature and are > not crafted to be cached. However, they are 'theoretically' > cacheable (see above), Right: the server is fully in control. So it makes no difference in whether you should choose PUT or POST. If you don’t want your responses to be cached, you don’t put in headers to declare it cacheable. Done. > whereas responses to PUT are specified as not cacheable. I wonder why the RFC states that they are categorically uncacheable. Seems to me that there wouldn’t have been any harm in letting the origin server decide whether it wants to declare the response cacheable or not. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Peter Keane <pkeane@...> [2009-02-11 05:25]: > How about text/plain? Yup. Or in some specialised cases, text/uri-list. <tangent type="unimportant"> One thing I *don’t* do, btw, is put the numeric status code in the entity body. There are a bunch of services that follow a ludicrous pattern of sending 200 and then putting a different status number in the entity body. Twitter did this until at least recently, where API requests past the API throttle limit would come back with 200 but would contain some 5xx status message in the text/plain body (I don’t remember which one). There are also many sites that will handle 404s by sending a redirect to a four-oh-four page… that is served with 200, d’oh. So I put a human-friendly message in the body, but no status number, as a statement that programmatic clients are to kindly pay attention to the status in the envelope. Admittedly, that is motivated out of self-righteousness more than practical concerns… :-) It does no practical harm, though. Theoretically it might keep server developers honest if adopted as a guideline for server implementation; not that it will happen in practice. </tangent> Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> One thing I *don’t* do, btw, is put the numeric status code in > the entity body. There are a bunch of services that follow a[...] Sadly that's where dealing with some clients becomes difficult. In the past I've had to add additional metadata in the entity body only because one of the clients (Flash) couldn't retrieve which status code was returned. Seb
Hi All, I am happy to announce the availability of 2.3 version of WizTools.org RESTClient: http://code.google.com/p/rest-client/ Download from: http://code.google.com/p/rest-client/downloads/list Cookbook on usage and extending: http://code.google.com/p/rest-client/wiki/Cookbook Changelog: http://code.google.com/p/rest-client/wiki/ReleaseNotes Thanks for all the people who reported bugs, and helped in development. Encouraging all to submit feature requests and bugs: http://code.google.com/p/rest-client/issues/list -- Regards, Subhash Chandran S http://indiwiz.com/
Looking for a good testing tool for testing (both automated functionality and stress) our Web Services REST APIs. Looked at Google rest client tool (http://code.google.com/p/rest-client/), but that seems to be Java-based. Our QA folks prefer scripting languages like python. Googled a bit and found the following: http://restclient.org/ http://pycurl.sourceforge.net/ http://pypi.python.org/pypi/benri/0.0.3 Do you have any recommendation? Thanks, -rama
I'm building a REST based site in Django. They have a nice WebClient that you can make requests / responses in your app and verify certain things in the response: http://docs.djangoproject.com/en/dev/topics/testing/ Griffin Caprio - Founder & President, 1530 Technologies, Inc. gcaprio@1530technologies.com On Feb 15, 2009, at 4:52 PM, ramsub4 wrote: > Looking for a good testing tool for testing (both automated > functionality and stress) our Web Services REST APIs. Looked at Google > rest client tool (http://code.google.com/p/rest-client/), but that > seems to be Java-based. Our QA folks prefer scripting languages like > python. Googled a bit and found the following: > > http://restclient.org/ > http://pycurl.sourceforge.net/ > http://pypi.python.org/pypi/benri/0.0.3 > > Do you have any recommendation? > > Thanks, > > -rama > > >
Hello, I am building a RESTful web service to serve as a web-based API for reporting customer statistics. The goal is to allow all of our customers to access our API so they can get information regarding their account/transactions. Creating the resources seems fairly straightforward. For example, each customer can access a list of their transactions via a URL such as: http://webapi.ourdomain.com/customer/123/transactions The question I have is how do I secure the application such that customer 123 only has access to /customer/123/* ?? I am developing the application using the RESTEasy framework running in Tomcat. I am familiar with basic servlet authentication. However, we are likely to have more and more customers and do not want to modify the web.xml and redeploy for every new customer. Is there an approach to configuring multiple client authentication without having to redeploy each time? Thanks, Jeff
On Mon, Feb 16, 2009 at 10:11 AM, thornj1 <jeff@...> wrote: > Hello, > I am building a RESTful web service to serve as an web based API for > reporting customer statistics. The goal is to allow all of our > customers to access our api so they can get information regarding > their account/transactions. > > Creating the resources seem fairly straight forward. For example, each > customer can access a list of their transactions via a URL such as: > http://webapi.ourdomain.com/customer/123/transactions > > The question I have is how to I secure the application such that > customer 123 only has access to /customer/123/* ?? > > I am developing the application using the RESTEasy framework running > in Tomcat. I am familiar with basic servlet authentication. However, > we are likely to have more and more customers and do not want to > modify the web.xml and redeploy for every new customer. > > Is there an approach to configuring multiple client authentication > without having to redeploy each time? > > Thanks, > Jeff You are likely to get the best help by asking this question on the RESTeasy user mailing list or forum, where everyone involved will be more likely to be familiar with that particular technology. I can tell you how to do it with Jersey (Java) or Rails (Ruby), but not that particular framework :-). Craig McClanahan
Since you are using Tomcat, this may be helpful: http://tomcat.apache.org/tomcat-5.5-doc/realm-howto.html Regards, Subhash. On Mon, Feb 16, 2009 at 11:41 PM, thornj1 <jeff@...> wrote: > Hello, > I am building a RESTful web service to serve as an web based API for > reporting customer statistics. The goal is to allow all of our > customers to access our api so they can get information regarding > their account/transactions. > > Creating the resources seem fairly straight forward. For example, each > customer can access a list of their transactions via a URL such as: > http://webapi.ourdomain.com/customer/123/transactions > > The question I have is how to I secure the application such that > customer 123 only has access to /customer/123/* ?? > > I am developing the application using the RESTEasy framework running > in Tomcat. I am familiar with basic servlet authentication. However, > we are likely to have more and more customers and do not want to > modify the web.xml and redeploy for every new customer. > > Is there an approach to configuring multiple client authentication > without having to redeploy each time? > > Thanks, > Jeff > > -- Regards, Subhash Chandran S http://indiwiz.com/
Hi Craig, Thanks for the response. I haven't committed 100% to a particular framework yet. Out of curiosity, how would you implement it in Jersery? Thanks, Jeff On Mon, Feb 16, 2009 at 9:05 PM, Craig McClanahan <craigmcc@...>wrote: > On Mon, Feb 16, 2009 at 10:11 AM, thornj1 <jeff@...> > wrote: > > Hello, > > I am building a RESTful web service to serve as an web based API for > > reporting customer statistics. The goal is to allow all of our > > customers to access our api so they can get information regarding > > their account/transactions. > > > > Creating the resources seem fairly straight forward. For example, each > > customer can access a list of their transactions via a URL such as: > > http://webapi.ourdomain.com/customer/123/transactions > > > > The question I have is how to I secure the application such that > > customer 123 only has access to /customer/123/* ?? > > > > I am developing the application using the RESTEasy framework running > > in Tomcat. I am familiar with basic servlet authentication. However, > > we are likely to have more and more customers and do not want to > > modify the web.xml and redeploy for every new customer. > > > > Is there an approach to configuring multiple client authentication > > without having to redeploy each time? > > > > Thanks, > > Jeff > You are likely to get the best help by asking this question on the > RESTeasy user mailing list or forum, where everyone involved will be > more likely to be familiar with that particular technology. I can > tell you how to do it with Jersey (Java) or Rails (Ruby), but not that > particular framework :-). > > Craig McClanahan > -- Jeff Thorn Thorn Technologies, LLC (443) 255-2803 jeff@...
Jeff: Not sure if this is what you are looking for a common pattern that works for me when I implement security for HTTP is to map URI + HTTP Method to an authenticated user. user=mca /users/mca = GET,HEAD,OPTIONS,POST,PUT,DELETE user=anonymous /users/mca = GET,HEAD,OPTIONS There are lots of ways to implement this sort of thing including using RegExp or URI templates to evaluate the requested URI at runtime. mca http://amundsen.com/blog/ On Tue, Feb 17, 2009 at 07:50, Jeff Thorn <jeff@...> wrote: > Hi Craig, > Thanks for the response. I haven't committed 100% to a particular framework > yet. Out of curiosity, how would you implement it in Jersery? > > Thanks, > Jeff > > On Mon, Feb 16, 2009 at 9:05 PM, Craig McClanahan <craigmcc@...> > wrote: >> >> On Mon, Feb 16, 2009 at 10:11 AM, thornj1 <jeff@...> >> wrote: >> > Hello, >> > I am building a RESTful web service to serve as an web based API for >> > reporting customer statistics. The goal is to allow all of our >> > customers to access our api so they can get information regarding >> > their account/transactions. >> > >> > Creating the resources seem fairly straight forward. For example, each >> > customer can access a list of their transactions via a URL such as: >> > http://webapi.ourdomain.com/customer/123/transactions >> > >> > The question I have is how to I secure the application such that >> > customer 123 only has access to /customer/123/* ?? >> > >> > I am developing the application using the RESTEasy framework running >> > in Tomcat. I am familiar with basic servlet authentication. However, >> > we are likely to have more and more customers and do not want to >> > modify the web.xml and redeploy for every new customer. >> > >> > Is there an approach to configuring multiple client authentication >> > without having to redeploy each time? 
>> > >> > Thanks, >> > Jeff >> You are likely to get the best help by asking this question on the >> RESTeasy user mailing list or forum, where everyone involved will be >> more likely to be familiar with that particular technology. I can >> tell you how to do it with Jersey (Java) or Rails (Ruby), but not that >> particular framework :-). >> >> Craig McClanahan > > > > -- > > Jeff Thorn > Thorn Technologies, LLC > (443) 255-2803 > jeff@... > > > >
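A sketch of the lookup Mike describes, with his two example rules expressed as regular expressions over the requested URI (the rule table and function names are illustrative, not from any particular framework):

```python
import re

# Sketch: map (user, URI pattern) -> the set of HTTP methods that
# user may invoke on matching URIs. First matching rule wins.

RULES = [
    ("mca", r"^/users/mca$",
     {"GET", "HEAD", "OPTIONS", "POST", "PUT", "DELETE"}),
    ("anonymous", r"^/users/[^/]+$",
     {"GET", "HEAD", "OPTIONS"}),
]

def is_allowed(user, method, uri):
    """Grant access if a rule for this user matches the URI."""
    for rule_user, pattern, methods in RULES:
        if rule_user == user and re.match(pattern, uri):
            return method in methods
    return False  # deny by default
```

Evaluated at runtime per request, this keeps authorization out of web.xml, so adding a customer means adding a rule (or deriving rules from the URI, e.g. `/customer/{id}/*` matched against the authenticated customer id) rather than redeploying.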
All, FYI: I just got approval from Wiley for my book proposal "Delivering RESTful solutions from Microsoft Azure" which will be part of the Wrox "Problem - Design - Solution" series. Regards, Alan Dean http://twitter.com/adean
On Tue, Feb 17, 2009 at 4:50 AM, Jeff Thorn <jeff@...> wrote: > Hi Craig, > Thanks for the response. I haven't committed 100% to a particular framework > yet. Out of curiosity, how would you implement it in Jersery? > Jersey 1.0.2 (recently released) includes a mechanism to provide filters that are invoked either globally, or on particular resource URIs. In addition, you can use a filter to inject a security context that includes logic to perform role based authorization. To see an example of this in action, check out the "atompub-contacts-server" example in the "samples" directory. In particular, look at class "com.sun.jersey.samples.contacts.auth.SecurityFilter". Craig > Thanks, > Jeff
Could HTTP DELETE carry a body? I didn't see a mention of this in RFC2616. If it's not allowed, how would you perform DELETE on multiple resources? As ";"-separated params? If the DELETE request can carry a body, we can specify the individual resources in the body itself. Thanks, -rama
Rama, "The DELETE method requests that the origin server delete the resource identified by the Request-URI." [1] [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7 If multiple resources are deleted in the manner you propose, you are not compliant with RFC2616 and therefore you are in breach of the uniform interface constraint of REST. RFC2616 does not provide a mechanism to carry out multiple deletions with a single request. Like all architectural styles, REST is a selection of trade-offs. One of these is that the uniform interface degrades efficiency in order to improve visibility, which results in a more verbose exchange between client and server. Regards, Alan Dean http://twitter.com/adean On Sat, Feb 21, 2009 at 7:59 AM, ramsub4 <ramsub4@...> wrote: > > Could HTTP DELETE carry a body? I didn't see a mention of this in > RFC2616. If it's not allowed, how would you perform DELETE on multiple > resources?As ";" separated params? If the DELETE request can carry a > body, we can specify the individual resources in the body itself. > > Thanks, > > -rama
+1 to Alan's points. But, you can POST multipart/mixed with message/ http (as suggested by Aristotle here [1]) and send multiple DELETE or other requests this way. [1] http://dehora.net/journal/2008/02/10/batch-http10/ Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On 21.02.2009, at 09:19, Alan Dean wrote: > Rama, > > "The DELETE method requests that the origin server delete the resource > identified by the Request-URI." [1] > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7 > > If multiple resources are deleted in the manner you propose, you are > not compliant with RFC2616 and therefore you are in breach of the > uniform interface constraint of REST. > > RFC2616 does not provide a mechanism to carry out multiple deletions > with a single request. Like all architectural styles, REST is a > selection of trade-offs. One of these is that the uniform interface > degrades effeciency in order to improve visibility, which results in a > more verbose exchange between client and server. > > Regards, > Alan Dean > http://twitter.com/adean > > On Sat, Feb 21, 2009 at 7:59 AM, ramsub4 <ramsub4@...> wrote: > > > > Could HTTP DELETE carry a body? I didn't see a mention of this in > > RFC2616. If it's not allowed, how would you perform DELETE on > multiple > > resources?As ";" separated params? If the DELETE request can carry a > > body, we can specify the individual resources in the body itself. > > > > Thanks, > > > > -rama > >
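Python's standard email package can assemble such a multipart/mixed batch of message/http parts. This is only a sketch of the payload construction (the host and URIs are made up, and a real batch endpoint to POST it to is assumed), not a complete client:

```python
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart

# Sketch: wrap several serialized HTTP requests as message/http parts
# inside one multipart/mixed body, suitable for POSTing to a batch
# endpoint. Host and URIs below are illustrative.

def build_batch_delete(uris, host="example.org"):
    batch = MIMEMultipart("mixed")
    for uri in uris:
        part = MIMEBase("message", "http")  # one embedded HTTP request
        part.set_payload(
            "DELETE %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (uri, host))
        batch.attach(part)
    return batch.as_string()

body = build_batch_delete(["/resources/1", "/resources/2"])
```

Each part carries its own `Content-Type: message/http` header, so an intermediary or server that understands the batch format can split it back into individual DELETE requests.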
The POST definition says:
The POST method is used to request that the origin server accept the entity
enclosed in the request as a new subordinate of the resource identified by
the Request-URI in the Request-Line. POST is designed to allow a uniform
method to cover the following functions:
(...)
- Providing a block of data, such as the result of submitting a
form, to a data-handling process;
The actual function performed by the POST method is determined by the
server and is usually dependent on the Request-URI.
So, a data-handling process can perfectly well be a resource-delete process, so
you can send the ids of the resources to be deleted in the body of a POST,
provided that the Request-URI identifies a resource that somehow encloses those
resources. If I were to do that, I'd probably do it like this:
POST /myapplication/resources/deleteFactory
resource1;resource2;resource3
that should be equivalent to
DELETE /myapplication/resources/resource1
DELETE /myapplication/resources/resource2
DELETE /myapplication/resources/resource3
That way the constraints of both HTTP and REST (which are not the same
thing) are followed.
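A minimal sketch of such a delete factory handler (names taken from the example above; the dict stands in for real storage, and the all-or-nothing error handling is deliberately crude):

```python
# Sketch of a "delete factory": POST a ";"-separated list of ids to a
# collection-level resource, which performs the individual deletes
# server-side. Everything here is illustrative.

resources = {"resource1": "a", "resource2": "b", "resource3": "c"}

def post_delete_factory(body):
    """Handle POST /myapplication/resources/deleteFactory."""
    ids = [r for r in body.split(";") if r]
    missing = [r for r in ids if r not in resources]
    if missing:
        return 404  # reject the whole batch if any id is unknown
    for r in ids:
        del resources[r]
    return 200
```

A richer design might report per-id outcomes in the response body instead of failing the whole batch, but that is a separate trade-off.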
This is just my 2 cents, as I'm not an expert in either HTTP or REST...
_______________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
_______________________________________________
2009/2/21 Stefan Tilkov <stefan.tilkov@...>
> +1 to Alan's points. But, you can POST multipart/mixed with message/
> http (as suggested by Aristotle here [1]) and send multiple DELETE or
> other requests this way.
>
> [1] http://dehora.net/journal/2008/02/10/batch-http10/
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
> On 21.02.2009, at 09:19, Alan Dean wrote:
>
> > Rama,
> >
> > "The DELETE method requests that the origin server delete the resource
> > identified by the Request-URI." [1]
> >
> > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7
> >
> > If multiple resources are deleted in the manner you propose, you are
> > not compliant with RFC2616 and therefore you are in breach of the
> > uniform interface constraint of REST.
> >
> > RFC2616 does not provide a mechanism to carry out multiple deletions
> > with a single request. Like all architectural styles, REST is a
> > selection of trade-offs. One of these is that the uniform interface
> > degrades effeciency in order to improve visibility, which results in a
> > more verbose exchange between client and server.
> >
> > Regards,
> > Alan Dean
> > http://twitter.com/adean
> >
> > On Sat, Feb 21, 2009 at 7:59 AM, ramsub4 <ramsub4@...> wrote:
> > >
> > > Could HTTP DELETE carry a body? I didn't see a mention of this in
> > > RFC2616. If it's not allowed, how would you perform DELETE on
> > multiple
> > > resources?As ";" separated params? If the DELETE request can carry a
> > > body, we can specify the individual resources in the body itself.
> > >
> > > Thanks,
> > >
> > > -rama
> >
> >
>
>
>
OK, I'm abusing the term "asynchronous" here but it seems I need to in order to describe my problem :).
We have a set of Java applications that take anywhere from 5 min to 60 min to execute.
One of the solutions we implemented had the server write the request to a database, send a 202 to the client and work the request later on a different thread.
The constraint we have is that we are using Jersey and we need to take advantage of the load balancing features provided by Apache and Tomcat/Jetty, so we end up with a large number of long-lasting simultaneous connections.
The "asynchronous" approaches I have found are trying to solve a different problem. Since a servlet is blocking, a request makes the thread wait. The proposed solutions aim to release threads back into the pool, making them available to other requests, rather than keeping a thread occupied for a long time.
How do you, in a RESTful way and using standard load balancing techniques (i.e. those provided by Apache + Tomcat/Jetty), work requests "asynchronously" so that you can release client connections?
+Adolfo
Well, I am not that into Java terms -- but forget what all is happening at the server, and just think about what you want the interface to the client to be. REST is an interface model, so please describe exactly what you want as an interface to the client. Why is the send-202 solution a problem? Cheers Devdatta
I'm not entirely sure I understand what you're asking for, but you might want to check out Grizzly: http://weblogs.java.net/blog/jfarcand/archive/2006/02/grizzly_part_ii.html Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On 21.02.2009, at 18:23, Adolfo Perez wrote: > OK, I'm abusing the term "asynchronous" here but it seems I need to > in order to describe my problem :). > > We have a set of Java applications that take anywhere from 5 min to > 60 min to execute. > > One of the solutions we implemented had the server write the request > to a database, send a 202 to the client and work the request later > on a different thread. > > The constraint we have is that we are using Jersey and we need to > take advantage of the load balancing features provided by Apache and > Tomcat/Jetty, so we end up with a large number of long lasting > simultaneous connections. > > "Asynchronous" approaches I have found are trying to solve a > different problem. Since a servlet is blocking, a request makes the > thread wait. The proposed solutions look to release threads back > into the pool and be made available for other requests not to keep > it running for a long time. > > How do you, in a RESTful way and using standard load balancing > techniques (i.e. those provided by Apache + Tomcat/Jetty), work > requests "asynchronously" so that you can release client connections? > > +Adolfo > > >
Hi Adolfo, I am also not entirely sure I completely understand what you ask, but I think you are asking whether the client-server HTTP connection persists after a 202 Response? The HTTP interaction regarding 202 would be something like this:
-> POST /orders
<order><item id="10"/></order>
<- 202 Accepted
Location: /pending-orders/19
<order processingStatus="pending"/>
Client then periodically polls /pending-orders/19:
-> GET /pending-orders/19
<- 200 Ok
<order processingStatus="pending"/>
[wait]
-> GET /pending-orders/19
<- 200 Ok
<order processingStatus="pending"/>
[wait]
-> GET /pending-orders/19
<- 301 Moved Permanently
Location: /orders/188
-> GET /orders/188
<- 200 Ok
<order id="188"><item id="10"/></order>
The client or server can close the TCP connection between any of these requests. Does that help? Jan On Monday, February 23, 2009, at 11:39AM, "Stefan Tilkov" <stefan.tilkov@...> wrote: >I'm not entirely sure I understand what you're asking for, but you >might want to check out Grizzly: > >http://weblogs.java.net/blog/jfarcand/archive/2006/02/grizzly_part_ii.html > >Stefan >-- >Stefan Tilkov, http://www.innoq.com/blog/st/ > > >On 21.02.2009, at 18:23, Adolfo Perez wrote: > >> OK, I'm abusing the term "asynchronous" here but it seems I need to >> in order to describe my problem :). >> >> We have a set of Java applications that take anywhere from 5 min to >> 60 min to execute. >> >> One of the solutions we implemented had the server write the request >> to a database, send a 202 to the client and work the request later >> on a different thread. >> >> The constraint we have is that we are using Jersey and we need to >> take advantage of the load balancing features provided by Apache and >> Tomcat/Jetty, so we end up with a large number of long lasting >> simultaneous connections. >> >> "Asynchronous" approaches I have found are trying to solve a >> different problem. Since a servlet is blocking, a request makes the >> thread wait. 
The proposed solutions look to release threads back >> into the pool and be made available for other requests not to keep >> it running for a long time. >> >> How do you, in a RESTful way and using standard load balancing >> techniques (i.e. those provided by Apache + Tomcat/Jetty), work >> requests "asynchronously" so that you can release client connections? >> >> +Adolfo >> >> >> > > > >------------------------------------ > >Yahoo! Groups Links > > > > >
As Jan said, the connection can be closed at any time between request-reply pairs in a conversation. This can be done by specifying the Connection: close header on the client request, or through server-specific configuration. From the message exchange pattern point of view, what you mean is an async conversation. See http://www.enterpriseintegrationpatterns.com/ramblings/09_correlation.html by Gregor for details. In implementation, if you do not need a connection for maintaining conversation state, then the connection can be closed. However, if you want to keep the connection open to save the connection-setup time of later messages, then you generally need to use Java NIO to handle the large number of concurrent connections. Grizzly, mentioned by Stefan, is one option, and Jetty is also a good candidate. You can also tune the system's max idle time for TCP connections to limit the number of open connections. Nice to see someone else doing similar stuff to what I am currently working on. Cheers, Dong On Mon, Feb 23, 2009 at 6:57 AM, Jan Algermissen <algermissen1971@...> wrote: > > Hi Adolfo, > > I am also not entirely sure I completely understand what you ask, but I think you are asking whether the client-server HTTP connection persists after a 202 Response? > > The HTTP Interaction regarding 202 would be something like this: > > -> POST /orders > <order><item id="10"/></order> > > <- 202 Accepted > Location /pending-orders/19 > <order processingStatus="pending"/> > > Client then periodically polls /pending-orders/19: > > -> GET /pending-orders/19 > > <- 200 Ok > <order processingStatus="pending"/> > > [wait] > > -> GET /pending-orders/19 > > <- 200 Ok > <order processingStatus="pending"/> > > [wait] > > > GET /pending-orders/19 > > <- 301 Moved Permanently > Location: /orders/188 > > > -> GET /orders/188 > > <- 200 Ok > <order id="188"><item id="10"/></order> > > The client or server can close the TCP connection between any of these requests. > > Does that help? 
> > Jan > > On Monday, February 23, 2009, at 11:39AM, "Stefan Tilkov" <stefan.tilkov@...> wrote: > >I'm not entirely sure I understand what you're asking for, but you > >might want to check out Grizzly: > > > >http://weblogs.java.net/blog/jfarcand/archive/2006/02/grizzly_part_ii.html > > > >Stefan > >-- > >Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > > >On 21.02.2009, at 18:23, Adolfo Perez wrote: > > > >> OK, I'm abusing the term "asynchronous" here but it seems I need to > >> in order to describe my problem :). > >> > >> We have a set of Java applications that take anywhere from 5 min to > >> 60 min to execute. > >> > >> One of the solutions we implemented had the server write the request > >> to a database, send a 202 to the client and work the request later > >> on a different thread. > >> > >> The constraint we have is that we are using Jersey and we need to > >> take advantage of the load balancing features provided by Apache and > >> Tomcat/Jetty, so we end up with a large number of long lasting > >> simultaneous connections. > >> > >> "Asynchronous" approaches I have found are trying to solve a > >> different problem. Since a servlet is blocking, a request makes the > >> thread wait. The proposed solutions look to release threads back > >> into the pool and be made available for other requests not to keep > >> it running for a long time. > >> > >> How do you, in a RESTful way and using standard load balancing > >> techniques (i.e. those provided by Apache + Tomcat/Jetty), work > >> requests "asynchronously" so that you can release client connections? > >> > >> +Adolfo > >> > >> > >> > > > > > > > >------------------------------------ > > > >Yahoo! Groups Links > > > > > > > > > > >
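The 202-then-poll conversation Jan sketched can be exercised end to end with a small client loop. The server below is a fake that completes the order after two polls; every URI, class, and function name is illustrative:

```python
# Client-side sketch of the 202 Accepted + polling pattern. FakeServer
# stands in for the real service: it accepts an order, reports
# "pending" for two polls, then redirects to the finished resource.

class FakeServer:
    def __init__(self):
        self.polls_remaining = 2

    def post_order(self):
        # 202 Accepted plus a Location header to poll
        return 202, "/pending-orders/19"

    def get(self, uri):
        if self.polls_remaining > 0:
            self.polls_remaining -= 1
            return 200, "pending", None   # still processing
        return 301, None, "/orders/188"   # Moved Permanently to result

def submit_and_wait(server):
    """POST the order, then poll the pending URI until redirected."""
    status, pending_uri = server.post_order()
    assert status == 202
    while True:
        code, _state, location = server.get(pending_uri)
        if code == 301:
            return location  # URI of the completed order
        # a real client would sleep between polls here
```

Because the client holds no connection open between requests, this works unchanged behind an Apache/Tomcat load balancer: each poll is an independent request that any worker can answer, which is the point of the pattern.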
Dong,
Your suggestion regarding integration patterns makes a lot of sense and it confirms my original intention of checking Apache Camel + Jetty endpoints.
I will also explore Stefan's suggestion (Grizzly).
Thank you all very much,
+Adolfo
--- On Mon, 2/23/09, Dong Liu <edongliu@...> wrote:
> From: Dong Liu <edongliu@...>
> Subject: Re: [rest-discuss] "Asynchronous" RESTful application
> To: "Rest List" <rest-discuss@yahoogroups.com>
> Cc: apd486@..., "Jan Algermissen" <algermissen1971@...>
> Date: Monday, February 23, 2009, 11:27 AM
* Alan Dean <alan.dean@...> [2009-02-21 09:20]: > If multiple resources are deleted in the manner you propose, > you are not compliant with RFC2616 and therefore you are in > breach of the uniform interface constraint of REST. > > RFC2616 does not provide a mechanism to carry out multiple > deletions with a single request. I have to object here somewhat. This much is true: DELETE does not provide a way for a client to specify multiple resources for deletion in a single request. However, the server *is* free to respond to the deletion of one resource by also deleting other related resources. F.ex. if an AtomPub client deletes the last entry in a collection, the server may delete the collection along with the entry. (It could be a collection that’s maintained automatically, based on some property of entries added by clients to other collections.) This is perfectly legitimate as long as the server takes responsibility for these extra deletions, since the client did not ask for them and therefore cannot be held liable. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
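Aristotle's point, that a server may cascade deletions it takes responsibility for, can be sketched with a toy store. The AtomPub-style collection layout (collection URI mapped to a list of entry URIs) is hypothetical:

```python
# Sketch of a server that cascades a DELETE: removing the last entry of
# an automatically maintained collection also removes the collection.
# The store layout (collection URI -> list of entry URIs) is invented.

def delete_entry(store, collection_uri, entry_uri):
    """Handle DELETE on one entry; drop the collection if it empties."""
    store[collection_uri].remove(entry_uri)
    if not store[collection_uri]:
        del store[collection_uri]   # server-initiated extra deletion
    return 204                      # No Content either way

store = {"/feeds/news": ["/feeds/news/1", "/feeds/news/2"]}
delete_entry(store, "/feeds/news", "/feeds/news/1")  # collection survives
delete_entry(store, "/feeds/news", "/feeds/news/2")  # collection goes too
```

The client only ever asked to delete entries; the server owns the second-order effect, which is the responsibility split Aristotle describes.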
I can't form a clear picture of the semantics of having multiple HTTP requests in one request. As António described above, if we want to do this via POST, there will be a "super resource" in the system. To me, this resource lives in a different world from the resources to be deleted. Another concern is the result of such a request: should the delete operations be transactional, or should the server delete as much as it can? Cheers, Dong -- http://dongnotes.blogspot.com/
> Another concern is the result of such a request. Should the delete > operations be transactional or the server delete as much as it can? > Totally up to you, your application and your server, imho. Regards devdatta > Cheers, > > Dong > -- > http://dongnotes.blogspot.com/ > >
Rama didn't ask if DELETE was allowed to have side-effects which are not the responsibility of the client. Instead he asked if a single DELETE could carry a list of URIs in the body, to which I answered as clearly as I could. Your answer, whilst technically correct of course, is to some other question than that asked by Rama. Regards, Alan On Mon, Feb 23, 2009 at 10:58 PM, Aristotle Pagaltzis <pagaltzis@...> wrote: -- Regards, Alan Dean Sent from: Woking Surrey United Kingdom.
* Alan Dean <alan.dean@...> [2009-02-24 07:05]: > Your answer, whilst technically correct of course, is to some > other question than that asked by Rama. Indeed, I was not answering Rama’s question, and your answer was correct. I simply felt that it was phrased such that it could be read as having broader applicability than Rama’s question alone. So I opted to clarify (rather than contradict) so as to avert any such misreading. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi all, I've been toying with an idea lately. Could it be possible to carry the idea of batching (taking all the requests as one batch, and failing or succeeding them as a block) by sending the requests using HTTP pipelining? I see three scenarios: in the first, the client supports pipelining, knows that it wants to group requests together, and uses pipelining to "batch" the requests together. In the second, the client doesn't support pipelining and simply sends its requests sequentially. In the third and last scenario, the client has no idea that pipelining is used by the server to process several requests as a unit, and still pipelines them. I fail to see how the client would be impacted by the decision made by the server to process all those requests as a batch, provided it is simply expecting individual responses "as usual". I see HTTP says a client SHOULD NOT use pipelining for non-idempotent requests, but PUT and DELETE can be made idempotent. The only contentious point I see is network failure, but in the case where the client is in the middle of a batch of requests without pipelining, it hasn't received responses and as such hasn't had confirmation of any action being taken. Provided the server waits until all the responses have been sent to "commit" those batches, the client would be in a safe place at any time. Any comments / rebuttal would be greatly appreciated. Seb
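Seb's "batch as a unit" semantics can be sketched as server-side logic that stages a pipeline of idempotent requests and commits them all or not at all. The request tuples, store shape, and status codes here are invented for illustration:

```python
# Sketch of treating a pipeline of idempotent requests as one unit, as
# proposed above. Requests are modelled as (method, uri, body) tuples
# against a plain dict store; both shapes are invented for illustration.

def process_pipeline(store, requests):
    """Apply a batch of PUT/DELETE requests atomically against `store`."""
    staged = dict(store)            # the "transaction": work on a copy
    responses = []
    for method, uri, body in requests:
        if method == "PUT":
            staged[uri] = body
            responses.append((200, uri))
        elif method == "DELETE":
            if uri not in staged:
                # Abort the whole unit: nothing is committed
                return [(404, uri)] * len(requests)
            del staged[uri]
            responses.append((204, uri))
    store.clear()
    store.update(staged)            # commit only after every request worked
    return responses
```

Because the requests are idempotent, a client that never saw the responses can resend the same pipeline and get the same outcome, which is the retry property Seb relies on.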
Hi all, How about doing this in the following way: 1. use PUT to create a composite resource that contains all the resources that are going to be deleted at the "same" time. Of course, the server side should know the purpose of this PUT, and return the URI of the created composite resource. 2. use DELETE to delete the composite resource. In this way, both the client side and the server side have a clear understanding of what each operation and each URI mean. I feel it is more explicit and clearer than sending a POST with many "delete"s. Cheers, Dong -- http://dongnotes.blogspot.com/
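Dong's two-step protocol can be sketched as follows; the BatchDeleteServer class, the URIs, and the status-code handling are all hypothetical:

```python
# Sketch of the two-step composite-delete protocol: PUT a composite
# resource listing the members, then DELETE the composite to remove
# them all. Class, URIs and return shapes are invented for illustration.
import itertools

class BatchDeleteServer:
    def __init__(self, resources):
        self.resources = resources          # uri -> representation
        self.composites = {}
        self._ids = itertools.count(1)

    def put_composite(self, member_uris):
        """Step 1: create the composite; reply with its URI."""
        uri = "/composites/%d" % next(self._ids)
        self.composites[uri] = list(member_uris)
        return 201, uri                     # Created

    def delete(self, uri):
        """Step 2: DELETE on the composite deletes every member."""
        members = self.composites.pop(uri, None)
        if members is None:
            return 404
        for m in members:
            self.resources.pop(m, None)
        return 204
```

The composite's URI makes the batch itself addressable, which is what gives both sides the shared understanding Dong is after.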
Hi,
Many RESTful web applications involve a lot of resource reads, but far
fewer resource updates. When updating is discussed (at least in the
tutorials I've seen), it tends to be around fairly simple cases where
the changes are either very straightforward (e.g. a Customer entity
address change from simplevalue1 to simplevalue2) or are fairly
coarse-grained (e.g. adding a resource to, say, a collection resource).
But what about situations where the state changes are more complicated,
such as a case where the resource state must be changed according to
some business logic? Or where the values of two elements are related
such that they must be updated together in a consistent way?
Here is a contrived example to illustrate what I mean (although you can
imagine extending this to much more complex scenarios in real life):
PropertyA contains a value that can be updated by incrementing
the current value by 1 if PropertyB contains a "true" or 5 if PropertyB
contains a "false".
In a local API library, this would normally be implemented behind a
procedure call interface. In REST there is no such client-side library
available. Does this mean that each client needs to know this logic and
implement it themselves? What if the '5' changes to a '10' someday in
the rule?
How do people handle this kind of thing without resorting to client-side
libraries full of business logic in REST?
Thanks,
scott
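Scott's contrived rule, kept entirely on the server so the constants never leak to clients. The property names come from his example; the apply_increment entry point and the trigger that would invoke it (e.g. a POST handler) are invented:

```python
# Sketch of keeping the increment rule server-side. PropertyA and
# PropertyB are from the example above; apply_increment and whatever
# would call it (a POST handler, say) are invented for illustration.

TRUE_STEP, FALSE_STEP = 1, 5   # change server-side; clients never see these

def apply_increment(resource):
    """Update PropertyA per the business rule, consulting PropertyB."""
    step = TRUE_STEP if resource["PropertyB"] else FALSE_STEP
    resource["PropertyA"] += step
    return resource
```

If the '5' becomes a '10' someday, only this function changes and no client is affected.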
If the client was a browser, I'd say Javascript :). REST does allow for downloading logic from the server, in the form of Code-on-Demand ( http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7). In other words, "in REST there IS such client-side library available." With that said, I haven't come across a decent way to do that in a non-browser environment. If there are any examples out there, I'm all ears. -Solomon On Tue, Feb 24, 2009 at 2:19 PM, Cameron, Scott <scott.cameron@...> wrote:
Sebastien Lambla <seb@...> writes: > Hi all, Hi, > Could it be possible to carry the idea of batching (and taking all > the requests as one batch, and fail them or succeed them as a block) > by sending the requests using HTTP pipelining? That sounds more like an atomic transaction than batching. In batching, each command is or can be independent of the others. As such, they fail or succeed individually. > I see three scenarios: first scenario, the client supports pipelining > and knows that it wants to group requests together, and uses > pipelining to "batch" the requests together. Second scenario, the > client doesn't support pipelining and simply sends its requests > sequentially. Third and last scenario, the client has no idea that > pipelining is used by the server to process several requests as a > unit, and still pipelines them. HTTP pipelining uses the same connection to service multiple requests. It takes both ends to do pipelining. If the client has no idea that the server supports pipelining, then there is no pipelining. So the third scenario above is not plausible. > I fail to see how the client would be impacted by the decision made by the server to process all those requests as a batch, provided it is simply expecting individual responses "as usual". I see HTTP says a client SHOULD NOT use this for non-idempotent requests, but PUT and DELETE can be made idempotent. > The only contentious point I see is network failure, but in the case where the client is in the middle of a batch of requests without pipelining, it hasn't received responses and hasn't as such had confirmation of any action being taken. Provided the server waits until all the responses have been sent to "commit" those batches, the client would be in a safe place at any time. > > Any comments / rebuttal would be greatly appreciated. Depending on HTTP pipelining to define atomic transaction boundaries is custom behaviour. The client has to know about that a priori.
The server can't conclusively know within a reasonable time if all the responses have been received by the client. The server, of course, can define a timeout limit so it can discard the transaction, but does the data requirement allow for holding dirty data that long? YS. > > > > Seb > > _________________________________________________________________ > Love Hotmail? Check out the new services from Windows Live! > http://clk.atdmt.com/UKM/go/132630768/direct/01/
Just designate a sidekick/transaction resource that encapsulates all that logic to do updates on those resources. If you treat every resource as a silo, the server will end up leaking a lot of application rules to the client. Subbu On Feb 24, 2009, at 11:19 AM, Cameron, Scott wrote: > Hi, > > Many RESTful web applications involve lot of resource reads, but much > fewer resource updates. When updating is discussed (at least in the > tutorials I've seen), it tends to be around fairly simple cases where > the changes are either very straight forward (e.g. a Customer entity > address change from simplevalue1 to simplevalue2) or are fairly > course-grained (e.g. adding a resource to, say, a collection > resource). > > But what about situations where the state changes are more > complicated, > such as a case where the resource state must be changed according to > some business logic? Or where the values of two elements are related > such that they must be updated together in a consistent way? > > Here is a contrived example to illustrate what I mean (although you > can > imagine extending this to much more complex scenarios in real life): > > PropertyA contains a value that can be updated by incrementing > the current value by 1 if PropertyB contains a "true" or 5 if > PropertyB > contains a "false". > > In a local API library, this would normally be implemented behind a > procedure call interface. In REST there is no such client-side > library > available. Does this mean that each client needs to know this logic > and > implement it themselves? What if the '5' changes to a '10' someday in > the rule? > > How do people handle this kind of thing without resorting to client- > side > libraries full of business logic in REST? > > Thanks, > scott > > --- http://subbu.org
On 24.02.2009, at 20:19, Cameron, Scott wrote: > PropertyA contains a value that can be updated by > incrementing the current value by 1 if PropertyB contains a “true” or > 5 if PropertyB contains a “false”. > > > In a local API library, this would normally be implemented behind a > procedure call interface. > If that's the way you want it, you can do the same thing with the client library for your REST API. Or you might want to implement this on the server side so that the client doesn't have to care (which seems more reasonable to me). So this all seems very orthogonal to REST in my view. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Solomon Duskis wrote: > If the client was a browser, I'd say Javascript :). REST does allow > for downloading logic from the server, in the form of Code-on-Demand > (http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7). > In other words, "in REST there IS such client-side library available . " Also, if the representations are XML, XSLT is a form of Code-on-Demand. There are a number of RESTful systems we developed in my workplace that do just that. K.
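Keith's XSLT transforms are one form of code-on-demand; for a non-browser client the same idea can be sketched by shipping the update rule itself as a script the client executes. This is purely illustrative (the rule text and run_rule helper are invented, and running downloaded code for real requires sandboxing and a trust decision):

```python
# Sketch of code-on-demand in a non-browser client: the server ships the
# business rule as source text and the client executes it, instead of
# hard-coding the logic. Rule text and run_rule are invented; a real
# deployment would need sandboxing and a trust decision before exec'ing
# anything received over the wire.

DOWNLOADED_RULE = """
def increment(resource):
    step = 1 if resource["PropertyB"] else 5
    resource["PropertyA"] += step
"""

def run_rule(source, resource):
    """Load the downloaded rule and apply it to a resource dict."""
    namespace = {}
    exec(source, namespace)         # "install" the server-supplied rule
    namespace["increment"](resource)
    return resource
```

When the server changes the rule, every client picks up the new behaviour on its next download, with no client release.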
That's an interesting idea. But isn't a sidekick resource that encapsulates the logic for operating on other resources basically just a remote procedure call? I'm not totally clear on why that would be a bad thing in this context, though (other than it being chattier, which could affect performance). Is there anything saying that the representation sent to the server for an update (using POST, not PUT) needs to be the same as the one returned from the server in a subsequent GET? In other words, if you have a representation that has a bunch of properties with most being very straight-forward value changes but one being my original example, could you embed something in the representation that would tell the server to update PropertyA according to the correct rules? Maybe this could even be just <PropertyA></PropertyA>. The thing is, though, that a subsequent GET might return <PropertyA>5</PropertyA> even though that's not what you sent to the server. I guess an addressable sidekick resource is more explicit that there is an algorithm here. But embedding it in the POST representation has the advantage of not requiring a separate call to the server for every property update (of which there may be many). scott -----Original Message----- From: Subbu Allamaraju [mailto:subbu@...] Sent: February-24-09 12:50 PM To: Cameron, Scott Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Complex representation updates Just designate a sidekick/transaction resource that encapsulates all that logic to do updates on those resources. If you treat every resource as a silo, the server will end up leaking a lot of application rules to the client. Subbu On Feb 24, 2009, at 11:19 AM, Cameron, Scott wrote: > Hi, > > Many RESTful web applications involve lot of resource reads, but much > fewer resource updates. When updating is discussed (at least in the > tutorials I've seen), it tends to be around fairly simple cases where > the changes are either very straight forward (e.g. 
a Customer entity > address change from simplevalue1 to simplevalue2) or are fairly > course-grained (e.g. adding a resource to, say, a collection > resource). > > But what about situations where the state changes are more > complicated, > such as a case where the resource state must be changed according to > some business logic? Or where the values of two elements are related > such that they must be updated together in a consistent way? > > Here is a contrived example to illustrate what I mean (although you > can > imagine extending this to much more complex scenarios in real life): > > PropertyA contains a value that can be updated by incrementing > the current value by 1 if PropertyB contains a "true" or 5 if > PropertyB > contains a "false". > > In a local API library, this would normally be implemented behind a > procedure call interface. In REST there is no such client-side > library > available. Does this mean that each client needs to know this logic > and > implement it themselves? What if the '5' changes to a '10' someday in > the rule? > > How do people handle this kind of thing without resorting to client- > side > libraries full of business logic in REST? > > Thanks, > scott > > --- http://subbu.org
No, I definitely don't want a language-specific client library. Part of
the reason we're considering a RESTful approach in the first place is
the promise of a language-agnostic service.
Implementing it on the server side is definitely an option, but I do
have performance concerns with cases that require a large number of
property changes per update.
I'm not sure it's purely orthogonal to REST. It definitely isn't
central to the style but it feels like it's difficult to work around the
performance concerns without breaking the RESTful design constraints.
Pushing everything to the server on a property-by-property basis doesn't
seem feasible, although it is obvious that this would keep the design
RESTful.
Feels like there is tension here in applications with more complicated
state modification requirements...
scott
From: Stefan Tilkov [mailto:stefan.tilkov@...]
Sent: February-24-09 1:15 PM
To: Cameron, Scott
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Complex representation updates
On 24.02.2009, at 20:19, Cameron, Scott wrote:
PropertyA contains a value that can be updated by incrementing
the current value by 1 if PropertyB contains a "true" or 5 if PropertyB
contains a "false".
In a local API library, this would normally be implemented behind a
procedure call interface.
If that's the way you want it, you can do the same thing with the client
library for your REST API. Or you might want to implement this on the
server side so that the client doesn't have to care (which seems more
reasonable to me) .
So this all seems very orthogonal to REST in my view.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
I'm just not sure if REST has the premise of language-agnostic services. Sure, it's a worthwhile goal, but IMHO, it's not a REST requirement. REST principles come down to: client-server interaction, statelessness, caching, uniform interfaces, layered systems and code-on-demand. That's straight from the dissertation. There's no discussion of language agnosticism in the REST bible. You CAN implement logic on the client side, and the constraints are that the client has to download the code through a uniform interface. Browsers implement this via JavaScript, and Keith Guaghan does it via XSLT (as you can see earlier in this thread). Theoretically, you can send a script written in Ruby (or Perl, or Python) from the server and execute it on the client side (Java, Microsoft CLR, browser with plugin or some other platform). That would still be RESTful, and could work for you as long as all of the clients understand how to invoke the downloaded scripts. Code-on-demand is not orthogonal to REST (in the Roy Fielding sense). It's how a whole bunch of RESTful Web 2.0 applications effectively balance the tension between communication performance and product management. Roy Fielding promises that good things like implicit version management and new feature roll out will happen when you follow REST and specifically uses code-on-demand as a means to that end. "the client’s knowledge of media types and resource communication mechanisms, <snip> may be improved on-the-fly (e.g., code-on-demand)" -- http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven. With all of that said, I've been looking for ways to RESTfully solve the same tension myself. I haven't implemented a solution like this. I just wrote a client that hard codes the type of logic that you describe for the sake of performance, and therefore fundamentally made my application non-RESTful :).
-Solomon On Tue, Feb 24, 2009 at 5:03 PM, Cameron, Scott <scott.cameron@...> wrote:
> That sounds more like doing atomic transaction than batching. In > batching, each command is or can be independent of the other. As such, > they fail or succeed individually. I don't see much in the HTTP spec that would imply the necessity for requests in a pipeline to fail or succeed individually. The constraints are that the server needs to respond to the client in the same order it received the requests, and the client needs to process the responses in the manner it usually would. > HTTP pipelining uses the same connection to service multiple > requests. It takes both ends to do pipeline. If the client has no idea > that the server supports pipeline, then there is no pipelining. So the > third scenario above is not plausible. You may have misread the scenario. The scenario is where the client pipelines the requests, unaware that the server would treat pipelined requests as a unit, aka a sort of transaction. > Depending on HTTP pipelining to define atomic transaction boundary is > a custom behaviour. The client has to know about that a priori. My question wasn't one of *if* the client needs to understand the additional semantics of batching as a unit using pipelining; it is an accepted factor that this exists as an app protocol that defines how pipelining is supported. My question was, are there any side-effects for clients not knowing about that app behaviour. I treated the 3 scenarios: the client knows about pipeline-as-batch and all is good; the client doesn't know pipelining at all (and the server will just not provide the "all as one" unit); the client knows about pipelining but not about the batching semantics on top of it. I am suggesting that none of those scenarios would break with the additional behaviour, and it doesn't break HTTP (as in MUST and MUST NOT). > The server can't conclusively know within a reasonable time if all the > responses have been received by the client.
The server, of course, can > define a timeout limit so it can discard the transaction, but does the > data requirement allow for holding dirty data that long? The server cannot do it either with individual requests, so why would it gain that functionality in this scenario? Whenever I update things within a single operation, I close my transaction before I return the data to the client, not after they've received it. It is assumed that, for the idempotent requests I'm talking about, if the client assumes failure because of network errors, it will retry, which will have no effect beyond giving the client a full and identical set of responses as the previous time. The only change in behaviour between the request/response model and using pipelining for batching is that the server, whenever it has finished receiving the requests (and therein lies the complexity), will process them as a whole and come up with an outcome it will communicate to the client. If the client fails to receive them, it'll retry and be given the exact same answer, until it modifies the request. Seb
That's a good point. I suppose I shouldn't phrase language-agnosticism
as a REST requirement, but rather a requirement of the system I'm
working on. It is, however, something that feels much more naturally
achievable with REST (or maybe more precisely, with ROA as defined by
the RESTful Web Service book) than with an alternative approach (such as
WSDL/SOAP) because of the tendency to use very well-understood
lower-level standards like HTTP, URI, XML and so on. This seems to
increase the chances of interoperating across languages significantly as
compared with higher-level standards that rely on toolkits to work with.
Keith's comment about XSLT sounded like a cool idea but I wanted to read
a little bit more about it before responding. However, almost
everything I found about using XSLT with REST talks about transforming
GET results, not about returning code-on-demand for assisting with
update logic. Are there any public examples of this that you know of?
Our clients will definitely not be running in a browser - most likely in
a server-side web application or a thick client - so I eliminated
something like JavaScript right off the bat. Your comments about
running Ruby scripts inside Java or CLR clients are interesting. I
didn't know you could do that. I'll have to read up on how many of the
languages we care about support that kind of thing. That sounds like it
might be cleaner for clients to interact with than XSLT.
Anyway, thanks for the great information. It definitely sparked some
new things for me to look into.
From: Solomon Duskis [mailto:sduskis@...]
Sent: February-24-09 5:48 PM
To: Cameron, Scott
Cc: Stefan Tilkov; rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Complex representation updates
I'm just not sure if REST has the premise of language-agnostic services.
Sure, it's a worthwhile goal, but IMHO, it's not a REST requirement.
REST principles come down to: client-server interaction, statelessness,
caching, uniform interfaces, layered systems and code-on-demand. That's
straight from the dissertation. There's no discussion of language
agnosticism in the REST bible.
You CAN implement logic on the client side, and the constraints are that
the client has to download the code through a uniform interface.
Browsers implement this via JavaScript, and Keith Gaughan does it via
XSLT (as you can see earlier in this thread).
Theoretically, you can send a script written in Ruby (or Perl, or
Python) from the server and execute it on the client side (Java,
Microsoft CLR, browser with plugin or some other platform). That would
still be RESTful, and could work for you as long as all of the clients
understand how to invoke the downloaded scripts.
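As a rough illustration of that code-on-demand idea: the "server" hands the client a small script through the uniform interface, and the client executes it locally against a representation. Everything here is hypothetical, including the contract that the script defines a function named `update`; a real system would also need to sandbox untrusted code.

```python
# Code-on-demand sketch: the script body would arrive as the entity of a
# GET response; here it is inlined. It implements the rule discussed
# earlier in the thread (increment PropertyA by 1 if PropertyB is true,
# else by 5), so the client need not hard-code that logic.

downloaded_script = """
def update(resource):
    step = 1 if resource["PropertyB"] else 5
    resource["PropertyA"] += step
    return resource
"""

namespace = {}
exec(downloaded_script, namespace)   # client-side execution of server code

resource = {"PropertyA": 10, "PropertyB": False}
updated = namespace["update"](resource)
assert updated["PropertyA"] == 15
```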
Code-on-demand is not orthogonal to REST (in the Roy Fielding sense).
It's how a whole bunch of RESTful Web 2.0 applications effectively
balance the tension between communication performance and product
management. Roy
Fielding promises that good things like implicit version management and
new feature roll out will happen when you follow REST and specifically
uses code-on-demand as a means to that end. "the client's knowledge of
media types and resource communication mechanisms, <snip> may be
improved on-the-fly (e.g., code-on-demand)" --
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven.
With all of that said, I've been looking for ways to RESTfully solve the
same tension myself. I haven't implemented a solution like this. I
just wrote a client that hard codes the type of logic that you describe
for the sake of performance, and therefore fundamentally made my
application non-RESTful :).
-Solomon
On Tue, Feb 24, 2009 at 5:03 PM, Cameron, Scott <scott.cameron@...>
wrote:
No, I definitely don't want a language-specific client library. Part of
the reason we're considering a RESTful approach in the first place is
the promise of a language-agnostic service.
Implementing it on the server side is definitely an option, but I do
have performance concerns with cases that require a large number of
property changes per update.
I'm not sure it's purely orthogonal to REST. It definitely isn't
central to the style but it feels like it's difficult to work around the
performance concerns without breaking the RESTful design constraints.
Pushing everything to the server on a property-by-property basis doesn't
seem feasible, although it is obvious that this would keep the design
RESTful.
Feels like there is tension here in applications with more complicated
state modification requirements...
scott
From: Stefan Tilkov [mailto:stefan.tilkov@...]
Sent: February-24-09 1:15 PM
To: Cameron, Scott
Cc: rest-discuss@...m
Subject: Re: [rest-discuss] Complex representation updates
On 24.02.2009, at 20:19, Cameron, Scott wrote:
> PropertyA contains a value that can be updated by incrementing
> the current value by 1 if PropertyB contains a "true" or by 5 if
> PropertyB contains a "false".
>
> In a local API library, this would normally be implemented behind a
> procedure call interface.
If that's the way you want it, you can do the same thing with the client
library for your REST API. Or you might want to implement this on the
server side so that the client doesn't have to care (which seems more
reasonable to me).
So this all seems very orthogonal to REST in my view.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On Feb 25, 2009, at 12:46 PM, Cameron, Scott wrote:
>
> That’s a good point. I suppose I shouldn’t phrase language-
> agnosticism as a REST requirement, but rather a requirement of the
> system I’m working on. It is, however, something that feels much
> more naturally achievable with REST (or maybe more precisely, with
> ROA as defined by the RESTful Web Service book) than with an
> alternative approach (such as WSDL/SOAP) because of the tendency to
> use very well-understood lower-level standards like HTTP, URI, XML
> and so on. This seems to increase the chances of interoperating
> across languages significantly as compared with higher-level
> standards that rely on toolkits to work with.
>
>
>
> Keith’s comment about XSLT sounded like a cool idea but I wanted to
> read a little bit more about it before responding. However, almost
> everything I found about using XSLT with REST talks about
> transforming GET results, not about returning code-on-demand for
> assisting with update logic. Are there any public examples of this
> that you know of?
>
>
>
> Our clients will definitely not be running in a browser – most
> likely in a server-side web application or a thick client – so I
> eliminated something like JavaScript right off the bat.
>
Why? This makes no sense.
As for some XSL, here is one that takes a content piece/component,
converts it to a JavaScript version, and inlines referenced and
already-inline JS. This way you limit the number of requests, and it
can be used cross-domain. It is somewhat specific to my needs but
could easily be used.
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="2.0" xmlns:x="http://www.w3.org/1999/xhtml"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xsl:strip-space elements="*"/>
<xsl:output encoding="UTF-16" method="text"/>
<xsl:template match="/*">
<xsl:text>
FSR.content = "</xsl:text>
<xsl:apply-templates select="* except script, x:script"/>
<xsl:text>";</xsl:text>
<xsl:if test="script[@rel='ui'] | x:script[@rel='ui']">
<xsl:text>
</xsl:text>
<xsl:apply-templates select="script[@rel='ui'] |
x:script[@rel='ui']" mode="include-script"/>
</xsl:if>
<xsl:if test="script[@rel='rules'] | x:script[@rel='rules']">
<xsl:text>
</xsl:text>
<xsl:apply-templates select="script[@rel='rules'] |
x:script[@rel='rules']" mode="include-script"/>
</xsl:if>
</xsl:template>
<xsl:template match="script | x:script"/>
<xsl:template match="*">
<xsl:param name="xml_id"/>
<xsl:text>&lt;</xsl:text>
<xsl:value-of select="local-name()"/>
<xsl:apply-templates select="@*"/>
<xsl:text>&gt;</xsl:text>
<xsl:apply-templates/>
<xsl:text>&lt;/</xsl:text>
<xsl:value-of select="local-name()"/>
<xsl:text>&gt;</xsl:text>
</xsl:template>
<xsl:template match="@*">
<xsl:text> </xsl:text>
<xsl:value-of select="local-name()"/>
<xsl:text>=\"</xsl:text>
<xsl:sequence select="replace(., '"', '\\"')"/>
<xsl:text>\"</xsl:text>
</xsl:template>
<xsl:template match="text()">
<xsl:sequence select="replace(., '"', '\\"')"/>
</xsl:template>
<xsl:template match="script | x:script" mode="include-script">
<xsl:choose>
<xsl:when test="@src">
<xsl:copy-of
select="unparsed-text(resolve-uri(@src, base-uri(.)))"/>
</xsl:when>
<xsl:otherwise>
<xsl:copy-of select="text()"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
> >
> > Our clients will definitely not be running in a browser - most
> > likely in a server-side web application or a thick client - so I
> > eliminated something like JavaScript right off the bat.
> >
>
> Why? This makes no sense.
>
Well, it makes complete sense if you consider the scope of my ignorance
about JavaScript. :) I was just about to reply to my own post taking
this particular comment back, but you beat me to it. A few seconds of
searching
<http://en.wikipedia.org/wiki/JavaScript#Uses_outside_web_pages>
glaringly pointed out to me that JavaScript is by no means exclusive to
browsers. I actually knew this to some extent, but wasn't aware of
quite how ubiquitous these embedded interpreters had become.
-----Original Message-----
From: Robert Koberg [mailto:rob@...]
Sent: February-25-09 10:02 AM
To: Cameron, Scott
Cc: Solomon Duskis; Stefan Tilkov; rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Complex representation updates
On Feb 25, 2009, at 1:21 PM, Cameron, Scott wrote:
> > >
> > > Our clients will definitely not be running in a browser – most
> > > likely in a server-side web application or a thick client – so I
> > > eliminated something like JavaScript right off the bat.
> > >
> >
> > Why? This makes no sense.
> >
>
> Well, it makes complete sense if you consider the scope of my
> ignorance about JavaScript. :) I was just about to reply to my own
> past taking this particular comment back, but you beat me to it. A
> few seconds of searching glaringly pointed out to me that JavaScript
> is by no means exclusive to browsers. I actually knew this to some
> extent, but wasn’t aware of quite how ubiquitous these embedded
> interpreters had become.
>
I actually read it like you 'will definitely be running it in a
browser,' so.... :)
But what I am doing is having a router servlet pass requests off to
JS files on the server to handle specific requests. So when a request
comes in for /projects/, I will have a script directory that contains
all the request OPTIONS available (HEAD and OPTIONS are defaulted),
like:
path/to/runtime/scripts/
|- projects
|- GET.js
|- PUT.js
|- DELETE.js
|- .restrictions
|- GET.json
|- PUT.json
|- DELETE.json
best,
-Rob
* Sebastien Lambla <seb@...> [2009-02-24 09:30]:
> Third and last scenario, the client has no idea that pipelining
> is used by the server to process several requests as a unit,
> and still pipelines them.
>
> I fail to see how the client would be impacted by the decision
> made by the server to process all those requests as a batch,
> provided it is simply expecting individual responses "as
> usual".

A client that is unaware of your overloading of the meaning of pipelining might decide to send relatively unrelated requests in huge pipelined sequences, driving the asymptotic probability that one of them will fail and cause the entire sequence to fail toward 1. Such a client would then be unable to productively communicate with your server.

Your call as to how much of a drawback that might be. But even if you decide it isn't, why not require aware clients to send along a header that requests this pipeline-means-transaction semantic? What would it cost you?

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Are there any multi-function, data-oriented APIs out there that fully embrace Roy Fielding's hypertext constraints - http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven?

I'm still a bit fuzzy on the last point he makes:

A REST API should be entered with no prior knowledge beyond the initial URI (bookmark)... Failure here implies that out-of-band information is driving interaction instead of hypertext.

I have a few ideas, but no concrete examples of driving system-to-system data-oriented "services" through in-band information. That's a level of discoverability that I've only heard discussed in WS-* systems, but have never seen in practice. Web sites and other human-consumable applications can fulfill this requirement because they have the ultimate "discovery engine" we know of as an implicit part of the "system," namely the human brain (or human mind, if you'd like to be philosophical about it).

The system-to-system interactions that I know about generally require some level of coupling that makes discoverability a much more complex issue. The techniques with which I am familiar almost seem inadequate to the task. Roy recently said on rest-discuss:

A lot of people think of systems as static things. Dead things. REST is not going to appeal to those people. All of its constraints are designed to keep systems living longer than we are willing or able to anticipate.

Perhaps I'm simply not familiar with the fitting techniques and technologies to create more life-like systems. Do you know of any APIs that implement a few features, are used system-to-system, and fully embrace the REST hypertext constraints? Any insight would be greatly appreciated.

Thanks,

Solomon Duskis
Hi Solomon,
Check out http://developer.netflix.com/docs/REST_API_Conventions -
IMO, the most RESTful popular API out there.
Another good example is Atom/AtomPub, but I guess you knew about that.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Subbu Allamaraju recently published an article at InfoQ about
describing REST based applications that obey this constraint:
http://www.infoq.com/articles/subbu-allamaraju-rest
which points out that, if you want to go whole hog in this direction,
you stop describing the URI structure of your application (like we see
in most REST API descriptions), and start talking about the "rel"
(relationship) values that can be used to identify semantically
interesting hyperlinks that the client might want to follow. His
examples use a <link> element modeled after the way that Atom and HTML
define it, which seems to be a popular trend for REST APIs that use
XML.
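A rough sketch of what "talking about rel values" buys the client: it scans the representation for a link carrying a known rel and follows that href, instead of assembling URIs from a documented structure. The service document and helper below are hypothetical illustrations, not taken from the article.

```python
# Client-side rel lookup over an Atom/HTML-style <link> convention.

import xml.etree.ElementTree as ET

service_root = """
<service>
  <link rel="self"  href="http://api.example.com/"/>
  <link rel="tanks" href="http://api.example.com/tanks"/>
</service>
"""

def href_for(doc, rel):
    # Return the target of the first link carrying the given rel, or
    # None when the server chose not to advertise that transition.
    for link in ET.fromstring(doc).findall("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

assert href_for(service_root, "tanks") == "http://api.example.com/tanks"
```

The point is that only the rel vocabulary is coupling; the server is free to restructure its URI space without breaking such a client.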
After a day of working with a colleague designing some new REST based
APIs, I was musing about this while watching one of the video blogs I
enjoy (Hak5 from revision3.com), where they have occasional stories
about remotely controlled tanks that can fire nerf missiles on
command. How to model the control of such a thing with a REST API?
The basic CRUD type operations map pretty cleanly. Presumably, the
well-known URI of the service will offer me a link I can use for
creating a new tank in the first place. And, the representation I get
back can include a "self" link so I can reference it with a GET
(retrieve an updated representation), a PUT (update properties), or a
DELETE (remove this tank from my collection). But how does one model
actually firing the missile? One idea that seems plausible is to
include a link element with a "fire" relationship, and document (in
your API spec) that a POST to this URI will cause the missile to be
launched.
<tank>
<name>My First Tank</name>
<missile-state>LOADED</missile-state>
<link rel="self" href="http://tanks-r-us.example.com/tanks/0123"/>
<!-- POST to this link to prime the spring or whatever
actually launches the missile -->
<link rel="ready" href="http://tanks-r-us.example.com/..."
title="Ready Launcher"/>
<!-- POST to this link to aim the launcher at the specified
horizontal and vertical coordinates -->
<link rel="aim" href="http://tanks-r-us.example.com/..."
title="Aim Tank"/>
<!-- POST to this link to fire the missile -->
<link rel="fire" href="http://tanks-r-us.example.com/..."
title="Fire Missile"/>
...
</tank>
Presumably, the "fire" link would only be presented by the server when
the tank was in a state where this operation makes sense (i.e. a
"fire" link is included only when the missile is currently LOADED).
But the server should be prepared to handle the case where a client
tried to fire the missile after someone else had already fired it,
because they had retrieved their representation earlier.
A POST makes the most sense, because firing the missile is definitely
not idempotent :-). Among other things, it will have side effects
that change the missile-state of my tank (probably first to FIRING,
then to EMPTY) which I can monitor by doing polled GETs, or being
notified by some out-of-band event mechanism. And the usual semantics
for error responses seem to fit pretty well, too:
* 202 -- if it takes a non-trivial amount of time to fire the missile,
the server might accept the request
and return a URI to monitor for completion
* 401 -- who the heck are you
* 403 -- sorry, you're not allowed to fire my missile
* 409 -- nobody can fire a missile when the launcher is empty (presumably
someone else beat you to the punch, so your reference to this URI is stale)
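The state transitions and status codes above can be condensed into a toy sketch (no real HTTP; the `Tank` class stands in for the server, and the method names are made up):

```python
# Toy state machine for the fire/poll interaction: LOADED -> FIRING ->
# EMPTY, with 202 for an accepted launch and 409 for a stale reference.

class Tank:
    def __init__(self):
        self.missile_state = "LOADED"

    def post_fire(self):
        # POST to the "fire" URI. Non-idempotent: repeating it does not
        # repeat the effect; it fails with 409 on a stale reference.
        if self.missile_state != "LOADED":
            return 409
        self.missile_state = "FIRING"
        return 202  # firing takes time; the client polls for completion

    def get(self):
        # A polled GET; for brevity the launch completes between polls.
        state = self.missile_state
        if state == "FIRING":
            self.missile_state = "EMPTY"
        return state

tank = Tank()
assert tank.post_fire() == 202
assert tank.get() == "FIRING"
assert tank.get() == "EMPTY"      # polling observes the side effect
assert tank.post_fire() == 409    # someone already fired this missile
```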
In the particular use case of firing, there isn't much need for a
request entity (although, if a tank had more than one missile
launcher, you might model things by including a field in the request
entity to select which launcher to fire). But, in principle, you
could include an encapsulation of whatever information is needed for
the server to do what you want it to do.
Earlier threads have discussed firing off (possibly transactional)
business logic. The same sort of approach would work there.
Craig McClanahan
Craig:

love the example.

other things to model:
- selecting available ammo (possibly request available ammo first, etc.)
- changing locations (set co-ords, change gears, directions, etc.)
- checking fuel, onboarding additional fuel
- tracking targets
- dealing w/ incoming rounds (condition of the tank, etc.)

gets you thinkin', eh?

mca
http://amundsen.com/blog/

On Mon, Mar 2, 2009 at 19:48, Craig McClanahan <craigmcc@...> wrote:
> - Show quoted text -
On Mar 2, 2009, at 8:08 PM, mike amundsen wrote:

> Craig:
>
> love the example.
>
> other things to model:
> - selecting available ammo (possibly request available ammo first, etc.)
> - changing locations (set co-ords, change gears, directions, etc.)
> - checking fuel, onboarding additional fuel
> - tracking targets
> - dealing w/ incoming rounds (condition of the tank, etc.)
>
> gets you thinkin', eh?

yea -- thinkin' that you want to be on the other side with continuous, open
communication :)

-Rob

> - Show quoted text -
> yea -- thinkin' that you want to be on the other side with continuous, open
> communication :)

LOL! since REST does not require HTTP[1], this same stateless pattern
will work over any "continuous, open" application protocols.

mca
http://amundsen.com/blog/

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/8343

On Mon, Mar 2, 2009 at 20:15, Robert Koberg <rob@...> wrote:
> - Show quoted text -
On Mon, Mar 2, 2009 at 5:15 PM, Robert Koberg <rob@...> wrote:
>
> On Mar 2, 2009, at 8:08 PM, mike amundsen wrote:
>
>> Craig:
>>
>> love the example.
>>
>> other things to model:
>> - selecting available ammo (possibly request available ammo first,
>> etc.)
>> - changing locations (set co-ords, change gears, directions, etc.)
>> - checking fuel, onboarding additional fuel
>> - tracking targets
>> - dealing w/ incoming rounds (condition of the tank, etc.)
>>
>> gets you thinkin', eh?
>>
>
>
> yea -- thinkin' that you want to be on the other side with continuous,
> open communication :)
>
> -Rob
Or, if things go poorly, there's always the Star Trek solution:
<link rel="start-self-destruct-sequence" href="..."/>
Just hope the network doesn't go down first if you want to turn it off :-).
Craig
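[Editor's note: the state-dependent link idea running through this thread -- the server advertises "fire" only while the missile is LOADED -- can be sketched from the server side as well. A hypothetical, illustration-only sketch; the `render_tank` function and launcher URI are invented:]

```python
# Conditional link advertisement: the representation only carries links
# for transitions that are currently legal. All names/URIs are invented.
BASE = "http://tanks-r-us.example.com/tanks"

def render_tank(tank_id, missile_state):
    """Build the XML representation, advertising 'fire' only when LOADED."""
    links = [("self", f"{BASE}/{tank_id}")]
    if missile_state == "LOADED":
        links.append(("fire", f"{BASE}/{tank_id}/launcher"))
    link_xml = "\n".join(
        f'  <link rel="{rel}" href="{href}"/>' for rel, href in links
    )
    return (f"<tank>\n  <missile-state>{missile_state}</missile-state>\n"
            f"{link_xml}\n</tank>")

loaded = render_tank("0123", "LOADED")   # includes the "fire" link
empty = render_tank("0123", "EMPTY")     # omits it
# A client holding a stale LOADED representation that POSTs to the old
# "fire" URI anyway should get back a 409, per Craig's status-code list.
```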
>
> - Show quoted text -
Sorry Stefan, I did mean to reply all :)
Perhaps the Netflix API would be considered RESTful, but should we be able
to discover ALL of the links like Roy Fielding suggests? He did say
straight out that "there can be only one" when it comes to bookmarks. Is he
putting forth an impossible constraint on data-driven APIs with that rule,
or is there more that can be done to implement that constraint?
Craig and Subbu's use of "rel" seems like a good start and I think that
building on that idea can lead to a "there can be only one" compliant
RESTful application. I think it's a problem that could be solved with an
"As Simple As Possible," well-known media type plus a guide on how to
develop "rel" dictionaries.
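[Editor's note: Solomon's "there can be only one" bookmark discipline can be made concrete with a toy sketch. Everything here is hypothetical -- an in-memory stand-in for a server, made-up URIs and rel names -- but it shows a client that knows exactly one URI in advance and reaches everything else by following rels:]

```python
import xml.etree.ElementTree as ET

# Toy in-memory "server": one bookmark URI; every other resource is
# reachable only through <link rel="..."> elements in representations.
PAGES = {
    "http://api.example.com/": """\
<service>
  <link rel="tanks" href="http://api.example.com/tanks"/>
</service>""",
    "http://api.example.com/tanks": """\
<tanks>
  <link rel="item" href="http://api.example.com/tanks/0123"/>
</tanks>""",
}

def follow(start_uri, rels, fetch=PAGES.get):
    """From the single bookmark URI, follow one rel per hop."""
    uri = start_uri
    for rel in rels:
        doc = ET.fromstring(fetch(uri))
        link = next(l for l in doc.findall("link") if l.get("rel") == rel)
        uri = link.get("href")  # the client never constructs a URI itself
    return uri

print(follow("http://api.example.com/", ["tanks", "item"]))
```

Only the bookmark and the rel vocabulary (the "rel dictionary") are known out of band; the URI structure is free to change.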
I do want to develop this idea a bit further and get your collective take on
the subject, but I think I'll start another thread. Continuing on this
thread seems like a risky proposition given the emphasis on the definition
of tank behavior.
-Solomon
On Tue, Mar 3, 2009 at 2:01 AM, Stefan Tilkov <stefan.tilkov@...> wrote:
> On 02.03.2009, at 23:40, Solomon Duskis wrote:
>
> The Netflix API seems like a great starting point, but isn't "out-of-band
> information" driving the interaction? Don't you have to "bookmark" quite a
> few URLs in order to use the API? The out-of-band information lives at
> http://developer.netflix.com/docs/REST_API_Conventions
>
>
> I don't think so; most of the URIs are discovered via <link> elements.
>
> Stefan
>
> P.S. Did you intentionally reply to me only instead of to the list?
>
>
> -Solomon
>
> On Mon, Mar 2, 2009 at 5:19 PM, Stefan Tilkov <stefan.tilkov@...> wrote:
>
>> Hi Solomon,
>>
>> Check out http://developer.netflix.com/docs/REST_API_Conventions -
>> IMO, the most RESTful popular API out there.
>>
>> Another good example is Atom/AtomPub, but I guess you knew about that.
>>
>> Stefan
>> --
>> Stefan Tilkov, http://www.innoq.com/blog/st/
>>
>> On 02.03.2009, at 23:09, Solomon Duskis wrote:
>>
>> > Are there any multi function, data oriented APIs out there that
>> > fully embrace Roy Fielding hypertext constraints -
>> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven?
>> >
>> > I'm still a bit fuzzy on the last point he makes:
>> >
>> >
>> > A REST API should be entered with no prior knowledge beyond the
>> > initial URI (bookmark)... Failure here implies that out-of-band
>> > information is driving interaction instead of hypertext.
>> >
>> > I have a few ideas, but no concrete examples of driving system-to-
>> > system data-oriented "services" through in-band information. That's
>> > level of discoverability that I've only heard about discussed in WS-
>> > * systems, but have never seen in practice. Web sites and other
>> > human consumable applications can fulfill this requirement because
>> > they have the ultimate "discovery engine" we know of as an implicit
>> > part of the "system," namely the human brain (or human mind if you'd
>> > like to be philosophical about it).
>> >
>> > The system-to-system interaction that I know about generally require
>> > a some level of coupling that make discoverability a much more
>> > complex issue. The techniques with which I am familiar almost seem
>> > inadequate to the task. Roy recently said on rest-discuss:
>> >
>> > A lot of people think of systems as static things. Dead things.
>> > REST is not going to appeal to those people. All of its constraints
>> > are designed to keep systems living longer than we are willing or
>> > able to anticipate.
>> >
>> > Perhaps I'm simply not familiar with the fitting techniques and
>> > technologies to create more life-like systems. Do you know of any
>> > APIs that implements a few features and are used in system-to-system
>> > that fully embrace the REST hypertext constraints? Any insight
>> > would be greatly appreciated.
>> >
>> > Thanks,
>> >
>> > Solomon Duskis
>> >
>> >
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
Solomon Duskis wrote:
> Craig and Subbu's use of "rel" seems like a good start and I think that
> building on that idea can lead to a "there can be only one" compliant
> RESTful application. I think it may be a problem that may be solved
> with an "As Simple As Possible," well known media-type plus a guide on
> how to develop "rel" dictionaries.

This is basically what RDF provides: a common data model (and a bunch of defined formats for it) and a simple, distributed way for defining your "rel" values, encouraging you to re-use the ones defined by others so that a Web service client needs less built-in knowledge about your service.

And Linked Data [1] is all about taking one entry point and traversing the data Web from there dynamically by following the links provided (choosing those which have a "rel" type that matches your intentions). Linked Data is mainly about linking up open data of course, but its principles can easily be applied to closed applications which, I think, would benefit from that.

Just one way of doing it, but I more and more think that the ideas behind Linked Data and REST overlap largely. :-)

Regards,
Simon

[1] http://linkeddata.org/
I was afraid someone was going to bring up RDF :). Thanks for the advice, though. I'll take a look at linkeddata.

-Solomon

On Tue, Mar 3, 2009 at 10:58 AM, Simon Reinhardt <simon.reinhardt@...> wrote:
> [...]
On Mar 3, 2009, at 6:11 AM, Solomon Duskis wrote:
> Sorry Stefan, I did mean to reply all :)
>
> Perhaps the Netflix API would be considered RESTful, but should we
> be able to discover ALL of the links like Roy Fielding suggests?
> He did say straight out that "there can be only one" when it comes
> to bookmarks. Is he putting forth an impossible constraint on data-
> driven APIs with that rule, or is there more that can be done to
> implement that constraint?

I think you misread that. There might be many different entry points to an application, each of which is bookmarkable. The point was that the client only needs to know one of them, not that there is only one of them to know.

....Roy
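Roy's distinction can be sketched in a few lines: many resources may be bookmarkable entry points, but a client only has to hard-code one of them and can reach everything else by following link relations. Everything below (the in-memory "service", the URIs, the rel names) is a hypothetical illustration, not any real API.

```python
# A toy hypermedia service: each resource exposes typed links ("rel" -> URI).
# Several resources could serve as bookmarks; the client only needs one.
SERVICE = {
    "/": {"links": {"accounts": "/accounts", "people": "/people"}},
    "/accounts": {"links": {"home": "/"}},
    "/people": {"links": {"home": "/"}},
}

def follow(entry, *rels):
    """Start at one bookmarked URI and navigate purely by link relations,
    never by constructing URIs from out-of-band knowledge."""
    uri = entry
    for rel in rels:
        uri = SERVICE[uri]["links"][rel]
    return uri

# Any of these URIs would work as the single bookmark the client knows.
assert follow("/", "accounts") == "/accounts"
assert follow("/accounts", "home", "people") == "/people"
```

The point mirrored here is that the client's only baked-in knowledge is one entry URI plus the meaning of the rel names; the URI space itself stays opaque and free to change.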
The relevance of the REST architectural style to Linked Data and OWL/SKOS/etc has been nagging away in the back of my mind for the last few months.

The idea of defining (and even considering the maintenance overhead of) an OWL snapshot of a knowledge domain and binding to it via restricted vocabulary metadata fills me with fear. It seems fraught with all the same contract versioning issues as WSDL, DCOM, etc. Knowledge and meaning are continually evolving products of the dynamic social context within which they exist, and that fact surely needs to be addressed in any workable content binding approach...

In REST terms, concepts can obviously be modelled as URI-addressable resources and published in different representation formats. For example, the location concept "London, UK" can be modelled as a resource with the address http://dbpedia.org/resource/London and then published in representation formats including RDF, N3, KML, GeoRSS, etc. However, I am unsure how the self-describing message and HATEOAS constraints translate in this context? The best I can come up with is that the DBpedia, Freebase, and OpenCyc ontologies should be viewed as the knowledge representation equivalents of standard MIME types (or metamedia types?), and that hypermedia should be used for runtime binding to the immediate sibling nodes of a concept within those standard ontologies (thereby avoiding the versioned contract binding nightmare). An approach like that would make me feel a lot more comfortable than just tagging stuff with DBpedia URIs, which seems to violate a whole raft of basic architectural principles, e.g. encapsulation and separation of interface/implementation.

To my (clearly limited) mind, these seem like really important questions, and I don't think they are receiving enough consideration in linked data/semantic web circles at present?
regards

Julian

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of Simon Reinhardt
Sent: 03 March 2009 15:58
Cc: Rest List
Subject: Re: [rest-discuss] This is REST

> [...]

Please note that the BBC monitors e-mails sent or received.
Further communication will signify your consent to this.

This e-mail has been sent by one of the following wholly-owned subsidiaries of the BBC:

BBC Worldwide Limited, Registration Number: 1420028 England, Registered Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ
BBC World News Limited, Registration Number: 04514407 England, Registered Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ
BBC World Distribution Limited, Registration Number: 04514408, Registered Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ
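Julian's "London, UK" example above amounts to standard HTTP content negotiation: one resource URI, several representation formats chosen via the Accept header. The sketch below is a hypothetical stand-in for such a server; the media types named are real registered types, but the handler, the stub representation bodies, and the fallback logic are illustrative assumptions only.

```python
# Minimal content-negotiation sketch: one concept resource, many formats.
# Stub bodies stand in for real RDF/N3/KML representations of the resource.
REPRESENTATIONS = {
    "application/rdf+xml": "<rdf:RDF><!-- ...London... --></rdf:RDF>",
    "text/n3": "dbpedia:London a dbpedia-owl:City .",
    "application/vnd.google-earth.kml+xml": "<kml><!-- ...London... --></kml>",
}

def get_concept(accept_header):
    """Return (media type, body) for the first acceptable representation,
    falling back to RDF/XML when nothing in the Accept header matches."""
    for media_type in accept_header.split(","):
        media_type = media_type.strip().split(";")[0]  # drop q-values etc.
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type]
    return "application/rdf+xml", REPRESENTATIONS["application/rdf+xml"]

content_type, body = get_concept("text/n3, application/rdf+xml")
assert content_type == "text/n3"
```

The resource identity (the URI) never changes; only the representation bound to it at request time does, which is exactly the late binding discussed in the replies below.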
[to the list now...]

I agree that these concerns are perhaps not being adequately addressed in the SemWeb/Linked Data world. Roy F. talks about late binding in 5.2.1.1 "Resources and Resource Identifiers", and I've often thought there was an essential tension there with (some approaches to) the Semantic Web -- almost as if the idea was to skirt around the fact that representations are bound to a resource as late in the process as possible -- this is what makes the "follow your nose" idea of traversing links in hypermedia so important. It's important that we distinguish resources from representations here -- if the SemWeb describes/trades in resources it'll work out (although that is, I suspect, difficult in practice). Unfortunately, there is a temptation to equate a URI with a representation -- it dereferences to a representation, but *not until* runtime. And making assertions about that representation may be taking "time" out of the equation.

As you say, "hypermedia should be used for runtime binding to the immediate sibling nodes of a concept within those standard ontologies" -- that seems like an excellent point. I wonder what implications that might have -- or maybe this is already the approach being taken?

--peter keane

On Tue, Mar 3, 2009 at 11:53 AM, Julian Everett <julian.everett@...> wrote:
> [...]
The URI is the thing. The author of a resource knows what exact semantics an identifier intends to refer to. Semantics is always application-specific. The reader should always understand the author in an application. Am I missing something?

Cheers,
Dong

On Tue, Mar 3, 2009 at 1:29 PM, Peter Keane <pkeane@...> wrote:
> [...]
Julian,
I think I get what the concern is, but I may not be reading you correctly. Are you concerned that a user agent, when interpreting linked data, would be tasked with rigidly conforming to a particular version(s) of OWL ontologies and not have the ability to adapt to evolving meaning?
Or are you trying to find a way to eliminate the need for contract versioning altogether?
If it's the former, there was a good paper a few years back that may provide insights into this challenge:
Named Graphs, Provenance and Trust
http://www.www2005.org/cdrom/docs/p613.pdf
Though it doesn't answer everything. (I agree there's not enough discussion of the implications of Linked Data for provenance or versioning, or of how to denote the effects and semantics of updating linked data with POST or PUT.)
But the basic gist is this -- ontologies are similar to relational database schemas, and they need to be versioned like data models. But the benefit is that they're 'open world' and can thus be extended or made equivalent to concepts in other ontologies. Which extensions or equivalences you want to consume comes down to the level of trust you place in them -- especially if they're consumed dynamically over hypermedia.
The practice of using named graphs gives you a construct of creating multiple worlds of data that may have different interpretations associated with them, and a language like SPARQL gives you the ability to query across these named graphs with a 'dynamic closed world assumption'.
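Stu's named-graphs point can be made concrete with a toy quad store: every assertion carries the URI of the graph it belongs to, and queries are restricted to a chosen (trusted) subset of graphs, much like SPARQL's GRAPH clause over a dataset. All the graph URIs, terms, and trust sets below are hypothetical illustrations, not real data.

```python
from collections import namedtuple

# A quad = a triple plus the named graph ("world") it was asserted in.
Quad = namedtuple("Quad", ["graph", "s", "p", "o"])

store = [
    Quad("http://example.org/graphs/curated", "ex:London", "rdf:type", "ex:City"),
    Quad("http://example.org/graphs/curated", "ex:London", "ex:locatedIn", "ex:UK"),
    Quad("http://example.org/graphs/randomblog", "ex:London", "ex:population", "ex:Unknown"),
]

# Trust is a property of graphs, not of individual statements.
trusted = {"http://example.org/graphs/curated"}

def query(store, trusted_graphs, s=None, p=None):
    """Dynamic closed-world query: only assertions from the chosen
    named graphs participate, mimicking SPARQL's GRAPH restriction."""
    return [q for q in store
            if q.graph in trusted_graphs
            and (s is None or q.s == s)
            and (p is None or q.p == p)]

results = query(store, trusted, s="ex:London")
# Only the two assertions from the trusted graph come back; the
# untrusted population claim is excluded from this "world".
assert len(results) == 2
```

Widening the `trusted` set is then exactly the act of choosing which extensions or equivalences to consume, as described above.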
I suspect that, practically speaking, any agent is going to have to bind to _some_ version of the ontology that it understands, perhaps dynamically extending it as it goes along (based on hypermedia sensing of trusted links), but frankly we're (as an industry) far away from even that level of sophistication in our agents :-) There are some interesting papers out there on "knowledge programming with sensing" that I think would be quite inspirational to anyone looking at how a next-generation linked data agent might work.
Cheers
Stu
________________________________
From: Julian Everett <julian.everett@...>
To: Simon Reinhardt <simon.reinhardt@koeln.de>; Rest List <rest-discuss@yahoogroups.com>
Sent: Tuesday, March 3, 2009 9:53:51 AM
Subject: [rest-discuss] Linked Data and REST architectural style (was: This is REST)
[...]
In a recent blog post entitled "the foaf+ssl paradigm shift" I show how the foaf+ssl protocol helps create RESTful web identity for distributed, open, yet secure social networks:

http://blogs.sun.com/bblfish/entry/the_foaf_ssl_paradigm_shift

I hope this helps explain what this is all about in a fun way.

Henry

Blog: http://blogs.sun.com/bblfish
Does this reality imply that the entry barrier of semantic web approaches is higher than what industry, or developers, or normal web users can accept? I am always scared by the epigram: "So many good ideas are never heard from again once they embark on a voyage on the semantic gulf."

Cheers,
Dong

On Tue, Mar 3, 2009 at 3:46 PM, Stuart Charlton <stuartcharlton@yahoo.com> wrote:
> [...]
> I suspect that, practically speaking, any agent is going to have to bind to > _some_ version of the ontology that it understood, perhaps dynamically > extending it as it goes along (based on hypermedia sensing trusted links), > but frankly we're (as an industry) far away from even level of > sophistication in our agents :-) There are some interesting papers out > there on "knowledge programming with sensing" that I think would be quiet > inspirational to anyone looking at how a next-generation linked data agent > might work. > Cheers > Stu > > > > ________________________________ > From: Julian Everett <julian.everett@...> > To: Simon Reinhardt <simon.reinhardt@...>; Rest List > <rest-discuss@yahoogroups.com> > Sent: Tuesday, March 3, 2009 9:53:51 AM > Subject: [rest-discuss] Linked Data and REST architectural style (was: This > is REST) > > The relevance of the REST architectural style to Linked Data and > OWL/SKOS/etc has been nagging away in the back of my mind for the last > few months. > > The idea of defining (and even considering the maintenance overhead of) > an OWL snapshot of a knowledge domain and binding to it via restricted > vocabulary metadata fills me with fear. It seems fraught with all the > same contract versioning issues as WSDL, DCOM, etc. Knowledge and > meaning are continually evolving products of the dynamic social context > within which they exist, and that fact surely needs to be addressed in > any workable content binding approach... > > In REST terms, concepts can obviously be modelled as URI-addressable > resources and published in different representation formats. For > example, the location concept "London, UK" can be modelled as a resource > with the address http://dbpedia. org/resource/ London and then published > in representation formats including RDF, N3, KML, GeoRSS, etc. However I > am unsure how the self-describing message and HATEOS constraints > translate in this context? 
The best I can come up with is that the > DBpedia, Freebase, OpenCyc ontologies should be viewed as the knowledge > representation equivalents of standard MIME types (or metamedia types?), > and that hypermedia should be used for runtime binding to the immediate > sibling nodes of a concept within those standard ontologies (thereby > avoiding the versioned contract binding nightmare). Any approach like > that would make me feel a lot more comfortable that just tagging stuff > with DBpedia URIs, which seems to violate a whole raft of basic > architectural principles e.g. encapsulation and separate of > interface/implement ation. > > To my (clearly limited) mind, these seem like really important questions > and I don't think they are receiving enough consideration in linked > data/semantic web circles at present? > > regards > > Julian > > -----Original Message----- > From: rest-discuss@ yahoogroups. com [mailto:rest-discuss@ yahoogroups. com] > On Behalf Of Simon Reinhardt > Sent: 03 March 2009 15:58 > Cc: Rest List > Subject: Re: [rest-discuss] This is REST > > Solomon Duskis wrote: >> Craig and Subbu's use of "rel" seems like a good start and I think > that >> building on that idea can lead to a "there can be only one" compliant >> RESTful application. I think it may be a problem that may be solved >> with an "As Simple As Possible," well known media-type plus a guide on > >> how to develop "rel" dictionaries. > > This is basically what RDF provides: a common data model (and a bunch of > defined formats for it) and a simple, distributed way for defining your > "rel" values, encouraging you to re-use the ones defined by others so > that a Web service client needs less built-in knowledge about your > service. > And Linked Data [1] is all about taking one entry point and traversing > the data Web from there dynamically by following the links provided > (choosing those which have a "rel" type that matches your intentions). 
> Linked Data is mainly about linking up open data of course but its > principles can easily be applied to closed applications which, I think, > would benefit from that. > Just one way of doing it, but I more and more think that the ideas > behind Linked Data and REST overlap largely. :-) > > Regards, > Simon > > [1] http://linkeddata.org/ > > ------------------------------------ > > Yahoo! Groups Links > > Please note that the BBC monitors e-mails sent or received. Further > communication will signify your consent to this > > This e-mail has been sent by one of the following wholly-owned subsidiaries > of the BBC: > > BBC Worldwide Limited, Registration Number: 1420028 England, Registered > Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ > BBC World News Limited, Registration Number: 04514407 England, Registered > Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ > BBC World Distribution Limited, Registration Number: 04514408, Registered > Address: BBC Media Centre, 201 Wood Lane, London, W12 7TQ > > ________________________________ > Be smarter than spam. See how smart SpamGuard is at giving junk email the > boot with the All-new Yahoo! Mail >
Thank you very much for the clarification. If I understand correctly, the single bookmark constraint means that any initial access to the REST system must allow discoverability of every accessible part of the system through some degree of linkage separation. There is some path from every point A to point B and back again. There's some way to discover the entire system using any entry point. Websites do this all the time. The homepage can get you just about anywhere on the site through a degree of separation, and everything links back to the homepage. If you access the website via a bookmark anywhere on the site, you can still reach every other available resource with some set of clicks and form entries. All of the REST APIs that I've seen have: - resources that are commonly bookmarked but no other resources link back to them. - closed subsystems -- resources that interlink but don't link to the rest of the API. In other words, clients of those REST APIs *must* use some out-of-band information, such as human-readable API documentation, in order to invoke functionality. Perhaps it's a problem with the API's media types not interlinking. However, even if the media types did fully interlink, I simply haven't seen any client-side techniques to perform discovery of that interlinking in a clean systemic way. The problem here is either: A) I don't get it. I'm missing something fundamental. B) There's more work to be done here, but it's doable. This is a great opportunity, similar to what http://www.infoq.com/articles/subbu-allamaraju-rest discusses C) Website discovery works because there's a human driving the navigation. REST API discoverability requires a complex "discovery engine", making REST APIs too complex to discover effectively without that engine. The barrier to entry is too big. <http://tech.groups.yahoo.com/group/rest-discuss/message/12186> Given the discussion in the previous thread, it seems like B. 
Is my understanding of the theory and practices described here correct? -Solomon On Tue, Mar 3, 2009 at 12:05 PM, Roy T. Fielding <fielding@...> wrote: > On Mar 3, 2009, at 6:11 AM, Solomon Duskis wrote: > >> Sorry Stefan, I did mean to reply all :) >> >> Perhaps the Netflix API would be considered RESTful, but should we be able >> to discover ALL of the links like Roy Fielding suggests? He did say >> straight out that "there can be only one" when it comes to bookmarks. Is he >> putting forth an impossible constraint on data-driven APIs with that rule, >> or is there more that can be done to implement that constraint? >> >> > I think you misread that. There might be many different entry points > to an application, each of which is bookmarkable. The point was that > the client only needs to know one of them, not that there is only > one of them to know. > > ....Roy > >
Hi Stuart Thanks for the link - that's a very interesting paper. > Are you concerned that a user agent, when interpreting linked data, would be tasked with rigidly conforming to a particular version(s) of OWL ontologies and not have the ability to adapt to evolving meaning? Exactly. And I think the ability to adapt to evolving meaning will be dependent on both how the ontology is published and how the client consumes it. Going back to the location example, let's say I want to start adding structured location metadata to some content which I am publishing online: my motivation for doing so is to leverage the ontology to provide a richer set of related keywords in order to increase site visitors' content discovery options and drive cross-linking. So I start coding... Developer Groundhog Day #1 I decide to use GeoNames IDs for the location metadata, and hook a drop-down in my CMS into the GeoNames web service. When a user selects a country or town, I then make a second call back out to GeoNames to pull back the sibling IDs of that node and store both the selected ID and siblings. My content gets published enriched with a list of related GeoNames IDs, and I'm done! Then I discover a problem. Someone really needs a feed of my content, but everything on their site uses GeoRSS format for location. Then someone else asks me for a feed but they need the location data expressed as DBpedia resource URIs. I am "helpfully" just about to add new fields to the CMS for GeoRSS and DBpedia IDs and then update all the legacy tagged content, when thankfully someone catches me and forcibly restrains me. Developer Groundhog Day #2 So I think "must separate resource from representation, must separate resource from representation" and start coding again. I resolve to identify a true primary key as the internal implementation of my location resource, and settle on long/lat plus a context identifier (e.g. town, village, continent). 
I add an adapter layer to my web application that translates my internal implementation into a public standard external interface of GeoNames ID, KML, GeoRSS, DBpedia IDs. Ideally I would like the client to be able to specify their desired representation format content-negotiation style, but I'm not sure how that could work practically so opt for a simple configuration option for each client. I'm done! Then I discover a problem. Tibet is liberated; the Basque country declares independence; Croatia, Serbia and Slovenia merge into the United Slovak States; and a whole subset of my inferred location tags are now wrong. I am "helpfully" just about to update all the legacy tagged content, when thankfully someone catches me and forcibly restrains me. Developer Groundhog Day #3 I decide I should be late-binding to ontologies at content delivery time rather than early-binding at authoring time. I also realise that I should be storing as little metadata as possible: I just need my core internal primary key values. Instead, I add more functionality to my adapter layer and get it to request hypermedia links to sibling nodes in the relevant ontology as well as performing the format translation. In that way my application becomes properly decoupled from ontology versioning issues, and I am always able to publish the current and most relevant set of related links for any piece of content. Happy days. Which maps almost exactly to my experiences of people doing service development: Developer Groundhog Day #1 Naïve YAGNI: bleed the internals of my domain model into the outside world via auto-generated WSDL/XSDs. Clients are coupled to my implementation details, I can't change anything and end up in a world of pain. Developer Groundhog Day #2 Contract versioning: abstract domain model behind DTO/adapter layer which is then exposed via versioned WSDL/XSDs. With each new version, system complexity and costs spiral until ultimately no longer viable. 
Developer Groundhog Day #3 REST: one codebase, one version, properly decoupled clients. Happy days. Thoughts anyone? thanks a lot Julian From: Stuart Charlton [mailto:stuartcharlton@...] Sent: 03 March 2009 21:47 To: Julian Everett; Rest List Subject: Re: [rest-discuss] Linked Data and REST architectural style (was: This is REST) Julian, I think I get what the concern is, but I may not be reading you correctly. Are you concerned that a user agent, when interpreting linked data, would be tasked with rigidly conforming to a particular version(s) of OWL ontologies and not have the ability to adapt to evolving meaning? Or are you trying to find a way to eliminate the need for contract versioning altogether? If it's the former, there was a good paper a few years back that may provide insights to this challenge: Named Graphs, Provenance and Trust http://www.www2005.org/cdrom/docs/p613.pdf Though it doesn't answer everything. (I agree there's not enough discussion of the implications of Linked Data and provenance or versioning, or also, how to denote the effects and semantics of updating linked data with POST or PUT). But the basic gist is this -- ontologies are similar to relational database schemas, and they need to be versioned like data models. But the benefit is that they're 'open world' and can thus be extended or made equivalent to concepts in other ontologies. Which extensions or equivalences you want to consume comes down to the level of trust you place on them -- especially if they're consumed dynamically over hypermedia. The practice of using named graphs gives you a construct of creating multiple worlds of data that may have different interpretations associated with them, and a language like SPARQL gives you the ability to query across these named graphs with a 'dynamic closed world assumption'. 
I suspect that, practically speaking, any agent is going to have to bind to _some_ version of the ontology that it understood, perhaps dynamically extending it as it goes along (based on hypermedia sensing trusted links), but frankly we're (as an industry) far away from even that level of sophistication in our agents :-) There are some interesting papers out there on "knowledge programming with sensing" that I think would be quite inspirational to anyone looking at how a next-generation linked data agent might work. Cheers Stu ________________________________ From: Julian Everett <julian.everett@...> To: Simon Reinhardt <simon.reinhardt@...>; Rest List <rest-discuss@yahoogroups.com> Sent: Tuesday, March 3, 2009 9:53:51 AM Subject: [rest-discuss] Linked Data and REST architectural style (was: This is REST) The relevance of the REST architectural style to Linked Data and OWL/SKOS/etc has been nagging away in the back of my mind for the last few months. The idea of defining (and even considering the maintenance overhead of) an OWL snapshot of a knowledge domain and binding to it via restricted vocabulary metadata fills me with fear. It seems fraught with all the same contract versioning issues as WSDL, DCOM, etc. Knowledge and meaning are continually evolving products of the dynamic social context within which they exist, and that fact surely needs to be addressed in any workable content binding approach... In REST terms, concepts can obviously be modelled as URI-addressable resources and published in different representation formats. For example, the location concept "London, UK" can be modelled as a resource with the address http://dbpedia.org/resource/London and then published in representation formats including RDF, N3, KML, GeoRSS, etc. However I am unsure how the self-describing message and HATEOAS constraints translate in this context? 
The best I can come up with is that the DBpedia, Freebase, OpenCyc ontologies should be viewed as the knowledge representation equivalents of standard MIME types (or metamedia types?), and that hypermedia should be used for runtime binding to the immediate sibling nodes of a concept within those standard ontologies (thereby avoiding the versioned contract binding nightmare). Any approach like that would make me feel a lot more comfortable than just tagging stuff with DBpedia URIs, which seems to violate a whole raft of basic architectural principles, e.g. encapsulation and separation of interface/implementation. To my (clearly limited) mind, these seem like really important questions and I don't think they are receiving enough consideration in linked data/semantic web circles at present? regards Julian -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Simon Reinhardt Sent: 03 March 2009 15:58 Cc: Rest List Subject: Re: [rest-discuss] This is REST Solomon Duskis wrote: > Craig and Subbu's use of "rel" seems like a good start and I think that > building on that idea can lead to a "there can be only one" compliant > RESTful application. I think it may be a problem that may be solved > with an "As Simple As Possible," well known media-type plus a guide on > how to develop "rel" dictionaries. This is basically what RDF provides: a common data model (and a bunch of defined formats for it) and a simple, distributed way for defining your "rel" values, encouraging you to re-use the ones defined by others so that a Web service client needs less built-in knowledge about your service. And Linked Data [1] is all about taking one entry point and traversing the data Web from there dynamically by following the links provided (choosing those which have a "rel" type that matches your intentions). 
Linked Data is mainly about linking up open data of course but its principles can easily be applied to closed applications which, I think, would benefit from that. Just one way of doing it, but I more and more think that the ideas behind Linked Data and REST overlap largely. :-) Regards, Simon [1] http://linkeddata.org/
Solomon Duskis <sduskis@...> writes:
> All of the REST APIs that I've seen have:
> - resources that are commonly bookmarked but no other resources link back to them.
> - closed subsystems -- resources that interlink but don't link to the rest of the API.
>
> In other words, clients of those REST APIs must use some out-of-band information, such as human-readable API documentation, in order to invoke functionality. Perhaps it's a problem with the API's
> media types not interlinking. However, even if the media types did fully interlink, I simply haven't seen any client-side techniques to perform discovery of that interlinking in a clean
> systemic way.
>
> The problem here is either:
>
> A) I don't get it. I'm missing something fundamental.
> B) There's more work to be done here, but it's doable. This is a great opportunity, similar to what http://www.infoq.com/articles/subbu-allamaraju-rest discusses
> C) Website discovery works because there's a human driving the navigation. REST API discoverability requires a complex "discovery engine" making REST APIs too complex to discover effectively
> without that engine. The barrier to entry is too big,
>
> Given the discussion in the previous thread, it seems like B. Is my understanding of the theory and practices described here correct?
… or:
D) People are misusing the term REST in describing their APIs.
A good example is some new flickr API I saw last night
<http://code.flickr.com/blog/2009/03/03/panda-tuesday-the-history-of-the-panda-new-apis-explore-and-you/>,
which uses its own mechanism for cache/polling control, doesn't use
URLs/hypermedia, &c.
I think you're correct in your assessment of the sad state of support
for RESTful APIs. I'd imagine something that used both an out-of-band
description (to allow user-agents/clients to build reasonable stubs and
document the potential state space: URLs, media types, which values in
those media types correspond to links, with what semantics, &c.) and
run-time navigation of the representations to interact with the
service. I don't think I've seen anything strong along those lines.
Though another part of REST is the idea that the media types are widely
and well known, which is at odds with service-specific media types and
service-description documents. People do talk about using Atom/APP and
Microformats/HTML to describe their apps, though. RDF and RDF Forms are
another approach along these lines.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
It depends on how far you take it.
I think SPARQL provides a very practical, reasonable way to build applications with RDF data. Toolkits like Jena or Python's RDFlib are wonderful. There are clear wins over using plain XML parsing or XQuery, in my opinion.
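Stu's point about querying RDF data can even be sketched without any toolkit at all. The following is a minimal triple-pattern matcher in plain Python; the data and vocabulary are made up for illustration, and a real application would of course use RDFlib or Jena as he suggests:

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
# None in a query pattern acts like a SPARQL variable.
triples = [
    ("ex:London", "rdf:type", "ex:City"),
    ("ex:London", "ex:locatedIn", "ex:UK"),
    ("ex:Leeds", "rdf:type", "ex:City"),
    ("ex:Leeds", "ex:locatedIn", "ex:UK"),
]

def match(pattern, store):
    """Return all triples matching a (s, p, o) pattern; None is a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "SELECT ?s WHERE { ?s rdf:type ex:City }" in miniature:
cities = [s for s, _, _ in match((None, "rdf:type", "ex:City"), triples)]
print(cities)  # ['ex:London', 'ex:Leeds']
```

The win over plain XML parsing is visible even at this scale: the query is expressed against the data model (triples), not against any particular serialization of it.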
On the other hand, getting too wrapped up with OWL will be a problem. Most object-oriented programming languages have mucked up logical concepts like inheritance to mean something that doesn't always fit the mathematical term. And understanding the tradeoffs of which reasoner to use, which OWL flavour to use, etc. is still difficult for someone who hasn't come from a knowledge representation background. And finally, OWL doesn't really help you do data validation, which is very foreign to people used to building data models.
I've tried to avoid these pitfalls in our work at Elastra by focusing on vanilla OWL semantics (mostly OWL Lite) so we can have SOME structure in our ontology, but otherwise using SPARQL construct queries to do the heavy lifting for validation, inference, etc.
Cheers
Stu
________________________________
From: Dong Liu <edongliu@...>
To: Rest List <rest-discuss@yahoogroups.com>
Cc: Stuart Charlton <stuartcharlton@...>
Sent: Tuesday, March 3, 2009 2:50:04 PM
Subject: Re: [rest-discuss] Linked Data and REST architectural style (was: This is REST)
Does the reality imply that the entry-barrier of semantic web
approaches is higher than what industry, or developers, or normal web
users can accept?
I am always scared by the epigram, "So many good ideas are never
heard from again once they embark in a voyage on the semantic gulf."
Cheers,
Dong
On Tue, Mar 3, 2009 at 3:46 PM, Stuart Charlton
<stuartcharlton@yahoo.com> wrote:
>
> Julian,
> I think I get what the concern is, but I may not be reading you correctly.
> Are you concerned that a user agent, when interpreting linked data, would
> be tasked with rigidly conforming to a particular version(s) of OWL
> ontologies and not have the ability to adapt to evolving meaning?
> Or are you trying to find a way to eliminate the need for contract
> versioning altogether?
> If it's the former, there was a good paper a few years back that may provide
> insights to this challenge:
> Named Graphs, Provenance and Trust
> http://www.www2005.org/cdrom/docs/p613.pdf
> Though it doesn't answer everything. (I agree there's not enough
> discussion of the implications of Linked Data and provenance or versioning,
> or also, how to denote the effects and semantics of updating linked data
> with POST or PUT).
> But the basic gist is this -- ontologies are similar to relational database
> schemas, and they need to be versioned like data models. But the benefit is
> that they're 'open world' and can thus be extended or made equivalent to
> concepts in other ontologies. Which extensions or equivalences you want
> to consume comes down to the level of trust you place on them -- especially
> if they're consumed dynamically over hypermedia.
> The practice of using named graphs gives you a construct of creating
> multiple worlds of data that may have different interpretations associated
> with them, and a language like SPARQL gives you the ability to query across
> these named graphs with a 'dynamic closed world assumption'.
> I suspect that, practically speaking, any agent is going to have to bind to
> _some_ version of the ontology that it understood, perhaps dynamically
> extending it as it goes along (based on hypermedia sensing trusted links),
> but frankly we're (as an industry) far away from even that level of
> sophistication in our agents :-) There are some interesting papers out
> there on "knowledge programming with sensing" that I think would be quite
> inspirational to anyone looking at how a next-generation linked data agent
> might work.
> Cheers
> Stu
>
>
>
> ________________________________
> From: Julian Everett <julian.everett@bbc.com>
> To: Simon Reinhardt <simon.reinhardt@koeln.de>; Rest List
> <rest-discuss@yahoogroups.com>
> Sent: Tuesday, March 3, 2009 9:53:51 AM
> Subject: [rest-discuss] Linked Data and REST architectural style (was: This
> is REST)
>
> The relevance of the REST architectural style to Linked Data and
> OWL/SKOS/etc has been nagging away in the back of my mind for the last
> few months.
>
> The idea of defining (and even considering the maintenance overhead of)
> an OWL snapshot of a knowledge domain and binding to it via restricted
> vocabulary metadata fills me with fear. It seems fraught with all the
> same contract versioning issues as WSDL, DCOM, etc. Knowledge and
> meaning are continually evolving products of the dynamic social context
> within which they exist, and that fact surely needs to be addressed in
> any workable content binding approach...
>
> In REST terms, concepts can obviously be modelled as URI-addressable
> resources and published in different representation formats. For
> example, the location concept "London, UK" can be modelled as a resource
> with the address http://dbpedia.org/resource/London and then published
> in representation formats including RDF, N3, KML, GeoRSS, etc. However I
> am unsure how the self-describing message and HATEOAS constraints
> translate in this context? The best I can come up with is that the
> DBpedia, Freebase, OpenCyc ontologies should be viewed as the knowledge
> representation equivalents of standard MIME types (or metamedia types?),
> and that hypermedia should be used for runtime binding to the immediate
> sibling nodes of a concept within those standard ontologies (thereby
> avoiding the versioned contract binding nightmare). Any approach like
> that would make me feel a lot more comfortable than just tagging stuff
> with DBpedia URIs, which seems to violate a whole raft of basic
> architectural principles, e.g. encapsulation and separation of
> interface/implementation.
>
> To my (clearly limited) mind, these seem like really important questions
> and I don't think they are receiving enough consideration in linked
> data/semantic web circles at present?
>
> regards
>
> Julian
>
> -----Original Message-----
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
> On Behalf Of Simon Reinhardt
> Sent: 03 March 2009 15:58
> Cc: Rest List
> Subject: Re: [rest-discuss] This is REST
>
> Solomon Duskis wrote:
>> Craig and Subbu's use of "rel" seems like a good start and I think
> that
>> building on that idea can lead to a "there can be only one" compliant
>> RESTful application. I think it may be a problem that may be solved
>> with an "As Simple As Possible," well known media-type plus a guide on
>
>> how to develop "rel" dictionaries.
>
> This is basically what RDF provides: a common data model (and a bunch of
> defined formats for it) and a simple, distributed way for defining your
> "rel" values, encouraging you to re-use the ones defined by others so
> that a Web service client needs less built-in knowledge about your
> service.
> And Linked Data [1] is all about taking one entry point and traversing
> the data Web from there dynamically by following the links provided
> (choosing those which have a "rel" type that matches your intentions).
> Linked Data is mainly about linking up open data of course but its
> principles can easily be applied to closed applications which, I think,
> would benefit from that.
> Just one way of doing it, but I more and more think that the ideas
> behind Linked Data and REST overlap largely. :-)
>
> Regards,
> Simon
>
> [1] http://linkeddata.org/
>
On Mar 4, 2009, at 3:31 AM, Solomon Duskis wrote: > Thank you very much for the clarification. > > If I understand correctly, the single bookmark constraint means > that any initial access to the REST system must allow > discoverability to every accessible part of the system through some > degree of linkage separation. There is some path from every point A > to point B and back again. There's some way to discover the entire > system using any entry point. > No. This is an application of computers to do something useful, not a math problem or a reachability analysis. Any given client might only be able to reach 1% of a system and still be successful at doing what they wanted to do. The simplest example of that is a system using authentication and role-based access control. In any case, this line of thought is missing the point of REST. If the application state is entirely defined by the client's workspace (set of representations) and all possible transitions away from that state are presented in those representations, then the potential size of the overall system is irrelevant. Each state can be considered independently. That is fundamental to the use of state machines to simplify the understanding of complex systems. The client only needs to know what it has in that workspace, at that time, and so any pre-definition of URI layout or WSDL-like service semantics is an absolute waste of time as far as a RESTful architecture is concerned. External artifacts might help the developers communicate about or improve the design of their system, just as readable URIs will help a human user understand where they are in a hypertext user-interface, but those external artifacts must not have a role in the runtime architecture if the system is truly hypertext-driven. ....Roy
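Roy's picture of application state as a workspace of representations, with all possible transitions presented in those representations, can be sketched in a few lines of Python. The representations, URIs, and link relations below are invented for illustration; the point is only that the client binds to link relations present in its current state and never assumes a URI layout:

```python
# Each representation carries its own outbound transitions as rel -> URI links.
# The client holds a current representation and may only follow links that
# are actually presented in it -- no URI layout is known in advance.
representations = {
    "/": {"links": {"queue": "/myqueue", "movies": "/movies"}},
    "/myqueue": {"links": {"home": "/", "first-movie": "/movies/101"}},
    "/movies": {"links": {"home": "/"}},
    "/movies/101": {"links": {"home": "/"}},
}

def follow(state, rel):
    """Transition by link relation: only rels presented in the current
    representation are reachable; anything else is simply not a transition."""
    uri = state["links"].get(rel)
    if uri is None:
        raise KeyError(f"no '{rel}' transition presented in this state")
    return representations[uri]

state = representations["/"]          # the single known entry point
state = follow(state, "queue")        # -> /myqueue
state = follow(state, "first-movie")  # -> /movies/101
print(state["links"])  # {'home': '/'}
```

Note that the overall size of `representations` never matters to the client: each state is considered independently, exactly as Roy describes.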
Comments below
On Wed, Mar 4, 2009 at 9:18 PM, Darrel Miller <darrel.miller@...>wrote:
>
> On Wed, Mar 4, 2009 at 6:31 AM, Solomon Duskis <sduskis@...> wrote:
>
>> C) Website discover works because there's a human driving the
>> navigation. REST API discoverability requires a complex "discovery engine"
>> making REST APIs too complex to discover effectively without that engine.
>> The barrier to entry is too big,
>>
>
>
>
>> __,_._,_
>>
>
> I don't understand why you believe it is difficult to get a client
> application to discover links. Writing client side code to finding and
> follow links in an XML or HTML based document is quite trivial.
>
> Darrel
>
Good question Darrel; it's straight to the point :). I hope that this email
will answer that question. I also hope that this will either prove or
disprove that I understand what Roy Fielding said in the last email to this
group.
Yes, you can create a system that can follow *arbitrary* XML or HTML links.
However, REST APIs are being used to build other systems. Those systems
require *specific functionality* from the API at *specific* points in the
interaction (a.k.a. the client workspace). I haven't seen a working "REST
API" that provides a method to derive ALL of the URLs for that timely
specific functionality based on a single entry point.
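Darrel's observation that mechanically finding and following links is easy is true as far as it goes; extracting every anchor from an HTML representation takes only the Python standard library. The markup below is hypothetical:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> anchor in an HTML representation."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# A hypothetical Metflix-style representation:
doc = ('<html><body><a href="/api/myqueue">My queue</a>'
       '<a href="/api/movies">Latest movies</a></body></html>')

parser = LinkCollector()
parser.feed(doc)
print(parser.links)  # ['/api/myqueue', '/api/movies']
```

The hard part, as the rest of the message argues, is not collecting the links but knowing which of them serves which purpose at a given point in the interaction.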
I'm going to give an example of this with a non-existent, theoretical REST
API from a company called Metflix (not to be confused with any other APIs,
for legal reasons).
(Bear with me here, it's going to take some time to get to the punch line.)
Metflix is a website that has all movies that relate to the Mets football
franchise (which of course does not exist, and has no relation to any
company what-so-ever). You can:
- login (metflix.com/login),
- manage your Queue of Mets movies (/myqueue, as a starting point plus
management functionality),
- view a list of the latest movies (/movies/),
- view information about a Mets movie (/movies/{id}),
- search for Mets movies (/search?term={searchTerm}).
Of course there are plenty of other whiz-bang features, but we can limit our
discussion to those :).
The Metflix website got so successful that it created a RESTful API so that
other applications can be built around its services. The URLs of the
service just happen to be the same as the site's, but are prefixed by
'/api'.
I want to build a fancy Flash UI client for that service, that basically
exposes the same services, but just looks much nicer (Note: their affiliate
program is fantastic... I'll make a ton of money that way).
My Flash app requires the user to login, then he or she can manage the
queue, search for Mets movies, and consequently view details about movies
from either the queue or the search results.
My fancy Flash client will POST to 'metflix.com/api/login', and get a 200
status code and an auth token. Then it will show your queue, (which as you
remember, is found at '/api/myqueue' and of course the auth token has to be
used here) in one panel, which has links to the individual movies in the
queue, and another panel will show you latest movies (which as you remember
is found at '/api/movies') which also has links to movies. There's also a
search box that performs a GET to ('/search?term=' + searchbox.text).
(I almost got to the punchline... wait for it)
The process of linking to individual movies from the queue, the movie list and
search results is RESTful. You GET a list of movies (name + link) back, and
can click on those links to "transition state" to view the movie details.
The problem is that for the sake of my specific requirements, I hardcoded
the URLs for the queue, the current movie list and the search box. I didn't
"discover" those URLs based on the result of my login request (which happens
to be my bookmark and entry point to the system). Notice that I also
hard-coded the query parameter that needs to be used for my search term.
Based on my (clearly limited) understanding of REST, all of that hardcoding
of URLs in my Flash UI, which is a client of Metflix.com, is a violation
of the HATEOAS/hypertext constraint. Based on my (personal, limited)
observation, there are no RESTful APIs that implement the hypertext
constraint any better.
Even if I did get back URLs, my Flash client would still need to find the
*current value* of *a specific URL* for a specific task (like the URL for the
'list of current movies', the URL for the 'movies in my queue', and the URL
and the name of the query param of the search form). The Metflix API needs to
somehow provide my client with a set of URLs that I need for my next
potential logical tasks (my "workspace," if I understand Roy Fielding
correctly). It also needs to assign a means of identifying how those
URLs map to a specific functionality that's different from the URLs
themselves. For example, it's not proper to expect the client to know what
'/api/movies' is. The server needs to provide another piece of
well-understood information from which my Flash UI can *interpret* that the
meaning of the '/api/movies' link is 'the current list of movies.'
Websites, unlike REST APIs, do have a clever method of link
identification... It's the natural language found between and around the '<a
href="..">' and '</a>' tags. There's an "interpretation engine" that
can understand that natural language and discover the features of the
pages, with the help of a User Agent (such as a web browser). (BTW, the
more I think about it, the more I appreciate the thought put into HTML and
the rest of the REST ecosystem.)
Is there a clever universal method for identifying ALL appropriate links in
a client of a REST API such that you can start with a given URL/bookmark
(like login), and not have to hardcode any "workspace" URLs other than the
first? Is this identification process too complex without a human to
interpret the meaning of the links?
Take a look at Craig McClanahan's first message in this thread... IMHO, it
has some important clues to these answers :)
*Darrel*, does that answer the question satisfactorily?
*Roy*, if you're reading this, did I get it right this time?
-Solomon Duskis
--- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote:
> [...]
> Is there a clever universal method for identifying ALL appropriate links in
> a client of a REST API such that you can start with a given URL/bookmark
> (like login), and not have to hardcode any "workspace" URLs other than the
> first?
Are you expecting that a client should "guess" the next state transition out of the blue? I'm not sure that's possible. Some knowledge has to be provided out of band, I think, that explains "relationships" in some way, such that a client can be programmed to traverse the state transitions. The key (in my opinion) is that the out-of-band information is not URIs but rather "identifiers" for relationships; e.g., Atom uses "rel" to indicate relationships, and monikers like "self" have meaning within the protocol.
Eb
Solomon Duskis wrote:
> I haven't seen a working "REST API" that provides a method to derive ALL of the URLs
> for that timely specific functionality based on a single entry point.

Here is an experimental API that accesses the MSDN documentation and the
community generated content:

http://lab.msdn.microsoft.com/restapi/

It uses the XHTML media type. I don't believe you need any other
information to start using it.

Ebenezer Ikonne wrote:
> The key (in my opinion) is that the out of band information are not URIs but
> rather "identifiers" for relationships e.g. Atom uses "rel" to indicate relationships
> and basically monikers like "self" have meaning within the protocol.

Those rel values are not "out of band"; they are defined in the Atom spec,
which is related to the media type that is returned. To my knowledge, if
you want a client to interpret your own rel values you have two choices:
you can create your own media type and define the rel values, or you can
download code (a la AJAX) that knows what to do with those rel values.

Darrel
You are (almost) right, I think. The difference is: given that the
representation contains

<link rel="some-concept" href="/some-uri" />

you don't hardcode the string "/some-uri" into your client, but rather
the string "some-concept".
Of course your program can't interpret new values of rel on the fly
(unless it's some fancy AI, but let's not get there). You are of
course better off if you use values for rel that are widely understood
- this is the reason for efforts like this:
http://tools.ietf.org/html/draft-nottingham-http-link-header-04
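The "hardcode the rel, not the URI" idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual API: the element names, rel values, and URIs are all made up.

```python
import xml.etree.ElementTree as ET

# Hypothetical entry-point representation returned by a service.
doc = ET.fromstring(
    '<workspace>'
    '<link rel="current-movies" href="/api/movies" />'
    '<link rel="my-queue" href="/api/myqueue" />'
    '</workspace>'
)

def href_for(doc, rel):
    """Discover a link's URI by its rel value instead of hardcoding it."""
    for link in doc.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    raise LookupError("no link with rel=" + rel)

# The client ships knowing only the rel string; the URI comes from the server.
print(href_for(doc, "my-queue"))
```

If the server later moves the queue to a different URI, this client keeps working unchanged, which is the point of the rel indirection.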
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 05.03.2009, at 05:03, Solomon Duskis wrote:
> Based on my (clearly limited) understanding of REST, all of that
> hardcoding of URLs in my Flash UI, which is a client of the
> Metflix.com, is a violation of the HATEOAS/hypertext constraint.
*RE: XHTML*

It looks like a promising foundation for RESTful API media types. You have a
whole bunch of class="<some identifier>" attributes on your <a> tags (plus
other tags) that a programmatic client can use to identify the meaning of a
link (rather than relying on the href itself).

I do think that HTML is the media type to emulate. "class" is used
successfully to perform the kind of semantic markup that Atom's "rel" does.
"class" can also be multi-valued (which I don't know is possible with "rel")
and *is currently used* by programmatic constructs to infer meaning outside
of the href. "class" also has HTML siblings in the task of identification:
there's the "id" attribute on the "a" element, the body of the "a" element,
and "<label for=''>...</label>" for form elements. RESTful API media types
need a way to integrate these types of tactics in a consistent way.

Beyond the media type issues, another thing that needs to be explored is how
to use this information on the client side for non-browser clients. The
programmatic constructs that use "class" (such as JavaScript, and even CSS)
usually fall under the role of code-on-demand, but the general techniques can
be used for programmatic constructs that fill the role of remote client.
Once we have a consistent approach to media types, we can start constructing
"User Agent" APIs that ease the task of traversing specific RESTful API
media types that share common elements of semantic markup.

*RE: I don't believe you need any other information to start using it.*

Before a programmatic client can perform a specific task, it must either
know a single "class" value or a set of "class" values to follow. IMHO,
that's shared "out of band" information, but it's not driving the
interaction; those semantics are the crux of what you need to know to
interact, but they don't define lower-level URI construction semantics.
A bookmark + shared media types + shared semantics should provide a way for
the client to discover the specific resources it needs from the server.

> you can create your own media-type and define the rel values

I'm not going to put words in your mouth, but it sounds like there needs to
be a shared understanding of semantics. That shared understanding can come
from a community consensus, like Stefan Tilkov suggests:

> Of course your program can't interpret new values of rel on the fly
> (unless it's some fancy AI, but let's not get there). You are of
> course better off if you use values for rel that are widely understood -
> this is the reason for efforts like this:
>
> http://tools.ietf.org/html/draft-nottingham-http-link-header-04

It's also perfectly legitimate for a RESTful API to create its own
dictionary of values, along with its own media types. It may also provide a
few entry points for different client types. Those should likely be the only
items on a RESTful API documentation page :). (It's also likely that the API
will have a set of example URLs to show some specific functionality, but it
should discourage clients from using those URLs directly and encourage the
use of semantics.)

> or you can download code (a la AJAX) that knows what to do with those rel
> values

I agree that this is a great option and should be explored, especially
outside the generic web browser. Like I said earlier, I can see a great
place for "User Agent" platforms that know about common media type rules and
know how to run ubiquitous scripting languages... More on that later :)

Thanks for the discussion guys! This has been greatly enlightening.

-Solomon

On Thu, Mar 5, 2009 at 12:50 AM, Darrel Miller <darrel.miller@...> wrote:
> Solomon Duskis wrote:
> > I haven't seen a working "REST API" that provides a method to derive ALL of the URLs
> > for that timely specific functionality based on a single entry point.
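The class-based link identification sketched above can be made concrete. This is a hypothetical example, not the MSDN API: the markup, class tokens, and URIs are invented, and the only assumption is standard XHTML namespacing.

```python
import xml.etree.ElementTree as ET

XHTML_NS = "{http://www.w3.org/1999/xhtml}"

# A made-up XHTML response; the class tokens are the shared semantics.
page = ET.fromstring(
    '<html xmlns="http://www.w3.org/1999/xhtml"><body>'
    '<a class="nav movie-list" href="/api/movies">Current movies</a>'
    '<a class="nav my-queue" href="/api/myqueue">My queue</a>'
    '</body></html>'
)

def link_with_class(doc, token):
    """Find an <a> by one of its class tokens; class is multi-valued,
    so split on whitespace before matching."""
    for a in doc.iter(XHTML_NS + "a"):
        if token in (a.get("class") or "").split():
            return a.get("href")
    return None

print(link_with_class(page, "my-queue"))
```

Note that the lookup works on any token of a multi-valued class attribute, which is what makes "class" a plausible sibling to Atom's "rel" here.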
Darrel Miller wrote:
> Those rel values are not "out of band"; they are defined in the Atom spec,
> which is related to the media type that is returned.

Well, in a sense they are, because the protocol exists in text that is
outside of the client. The client is not determining what "rel" means
inline; it's programmed with a priori knowledge of the transitions provided
by all "rel" values. Having said that, I think we're splitting hairs
here. :)
Here is an interesting survey of rel values in use on the web:

http://blog.unto.net/web/a-survey-of-rel-values-on-the-web/

Excerpt:
> found a staggering 1.8M unique rel value strings in use, with many used only once or
> twice across all the web. In fact, the top 6 most-frequently-used rel values accounted
> for 80% of all usage, and the top 11 alone were responsible for 90% of all usage.

On Thu, Mar 5, 2009 at 7:39 AM, Ebenezer Ikonne <amaeze@...> wrote:
> Well, in a sense they are, because the protocol exists in text that is outside of the client.
> The client is not determining what "rel" means inline; it's programmed with
> a priori knowledge of the transitions provided by all "rel" values.

Yes, the client must understand the media type beforehand. However, the
significant difference is that when the client follows a link, the media
type is in the header of the response. The client knows how to parse the
message based only on the content of the message and its prior knowledge of
the media type. In so many so-called "RESTful" APIs that I see, the client
retrieves application/xml from endpoint http://site.org/xyz and it must
know that the application/xml at this endpoint contains a specific
vocabulary.

Darrel
> Yes, the client must understand the media type beforehand. However
> the significant difference is that when the client follows a link, the
> media-type is in the header of the response. The client knows how to
> parse the message based only on the content of the message and its
> prior knowledge of the media type.
>
> Darrel

I think we are in some agreement here. Now the debate over whether to use
generic media types versus specific media types is slightly different
(IMO). I believe in specific media types, but in both cases, a priori
knowledge is still required. The flexibility of the client is severely
hampered when using generic media types.

Eb
Hi,
I have a simple q: if TCP is a reliable protocol and HTTP uses TCP, how come HTTP is viewed as unreliable? A specific example that demonstrates this would be very welcome.
Also, which reliable HTTP solution is considered the best and why: POE, HTTPLR, SOA-Rity or Joe Gregorio's best practice in Restify Day Trader?
Thanks,
Sean.
--- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote:
> I have a simple q: if TCP is a reliable protocol and HTTP uses TCP, how
> come HTTP is viewed as unreliable?

HTTP itself isn't viewed as unreliable; the results of an operation done
over HTTP "could" be unreliable, e.g. the server could time out internally
after processing the request halfway. I believe the solutions you mention
address this type of unreliability, not transport unreliability per se. I
could be wrong, though. I don't have enough experience with any of the
solutions to really have an opinion.

Eb
Some personal opinions: reliability is a probability. Saying HTTP is
reliable or not reliable just means comparing whether its reliability in a
specific environment and usage is higher or lower than an expected value.
More effort (both programming and computation) is needed to make it more
reliable, and often this implies extra cost. Making an assumption of
unreliability is good for producing reliable solutions.

Cheers,
Dong

On Thu, Mar 5, 2009 at 9:22 AM, Sean Kennedy <seandkennedy@...> wrote:
> I have a simple q: if TCP is a reliable protocol and HTTP uses TCP, how
> come HTTP is viewed as unreliable?

--
http://dongnotes.blogspot.com/
* Sean Kennedy <seandkennedy@...> [2009-03-05 16:25]:
> I have a simple q: if TCP is a reliable protocol and HTTP uses
> TCP, how come HTTP is viewed as unreliable?

You’re getting your layers mixed up.

TCP is a reliable transport protocol; HTTP is an unreliable application
protocol. A reliable transport will do what’s necessary to ensure that all
the bytes sent by either side will reach the other side, and in the right
order. But how that stream of bytes is interpreted is a question of the
application protocol, which is the level at which HTTP resides, and HTTP
makes no guarantees about how HTTP requests will be processed.

F.ex. if you send a POST request, the server might close the connection
before sending you a result – no timeout or anything, the server just shuts
down the connection as soon as it receives your request. As far as the TCP
layer is concerned, everything is in perfect shape: all bytes sent by both
sides are received and the connection is closed properly. However, you
still have no idea whether the POST request was processed and what the
result was, because in terms of HTTP semantics, no response was sent.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
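The failure mode Aristotle describes is easy to reproduce. This sketch runs a throwaway server that reads a POST and closes the TCP connection cleanly without sending any HTTP response; the host, port, path, and body are all made up for the demonstration.

```python
import http.client
import socket
import threading

def flaky_server(listener):
    # Accept one connection, read the request, then close without
    # sending an HTTP response. TCP-wise this is a perfectly clean close.
    conn, _ = listener.accept()
    conn.recv(4096)
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=flaky_server, args=(listener,), daemon=True).start()

client = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
try:
    client.request("POST", "/orders", body="item=widget")
    client.getresponse()
    outcome = "response received"
except (http.client.HTTPException, ConnectionError):
    # TCP delivered every byte reliably, yet at the HTTP layer we have
    # no idea whether the POST was processed.
    outcome = "no response: was the POST processed?"
print(outcome)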
* Sean Kennedy <seandkennedy@...> [2009-03-06 15:40]:
> Aristotle,
>
> Thanks for that. Would it be fair to say that in your example
> scenario, the client (at the HTTP layer) does know that the request
> was successfully received by the server, but that's all - no idea as
> to whether the request was processed...

Yes.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
I am not sure if this scenario is proper for the reliability of HTTP. Both HTTP 1.0 and 1.1 have comments on such scenarios. 1.0: > Note: If the client is sending data, server implementations on TCP > should be careful to ensure that the client acknowledges receipt > of the packet(s) containing the response prior to closing the > input connection. > 1.1: > A client, server, or proxy MAY close the transport connection at any > time. For example, a client might have started to send a new request > at the same time that the server has decided to close the "idle" > connection. From the server's point of view, the connection is being > closed while it was idle, but from the client's point of view, a > request is in progress. > > This means that clients, servers, and proxies MUST be able to recover > from asynchronous close events. Client software SHOULD reopen the > transport connection and retransmit the aborted sequence of requests > without user interaction so long as the request sequence is > idempotent (see section 9.1.2). Non-idempotent methods or sequences > MUST NOT be automatically retried, although user agents MAY offer a > human operator the choice of retrying the request(s). Confirmation by > user-agent software with semantic understanding of the application > MAY substitute for user confirmation. The automatic retry SHOULD NOT > be repeated if the second sequence of requests fails. > > Servers SHOULD always respond to at least one request per connection, > if at all possible. Servers SHOULD NOT close a connection in the > middle of transmitting a response, unless a network or client failure > is suspected. > Cheers, Dong On Fri, Mar 6, 2009 at 6:06 AM, Aristotle Pagaltzis <pagaltzis@...> wrote: > * Sean Kennedy <seandkennedy@...> [2009-03-05 16:25]: > >> I have a simple q: if TCP is a reliable protocol and HTTP uses >> TCP, how come HTTP is viewed as unreliable? > > You’re getting your layers mixed up. 
> > TCP is a reliable transport protocol; HTTP is an unreliable > application protocol. A reliable transport will do what’s > necessary to ensure that all the bytes sent by either side will > reach the other side, and in the right order. But how that stream > of bytes is interpreted is a question of the application > protocol, which is the level at which HTTP resides, and HTTP > makes no guarantees about how HTTP requests will be processed. > > F.ex. if you send a POST request, the server might close the > connection before sending you a result – no timeout or anything, > the server just shuts down the connection as soon as it receives > your request. As far as the TCP layer is concerned, everything is > in perfect shape: all bytes sent by both sides are received and > the connection is closed properly. However, you still have no > idea whether the POST request was processed and what the result > was, because in terms of HTTP semantics, no response was sent. > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> > -- http://dongnotes.blogspot.com/
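[Editor's note] Aristotle's failure mode is easy to reproduce. Below is a minimal Python sketch (the toy server and all names are invented for illustration): the server satisfies TCP completely, every byte delivered and an orderly close, yet the HTTP client has no way to know whether its POST was processed. It also ties back to the RFC 2616 text Dong quotes: since POST is not idempotent, the client must not simply retry.

```python
import http.client
import socket
import threading

# Toy server (illustrative only): it drains the request, then closes
# the connection without ever sending an HTTP status line. TCP is
# satisfied -- every byte arrived and the close was orderly. HTTP is
# not: with no response, the client cannot tell whether the POST was
# processed.
def rude_server(listener):
    conn, _ = listener.accept()
    conn.settimeout(0.5)
    try:
        while conn.recv(65536):   # read until the client goes quiet
            pass
    except socket.timeout:
        pass
    conn.close()                  # close without responding

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=rude_server, args=(listener,), daemon=True).start()

client = http.client.HTTPConnection("127.0.0.1", port)
client.request("POST", "/orders", body="quantity=1")
try:
    client.getresponse()
    outcome = "response received"
except ConnectionError:
    # Was the order created? In HTTP terms, unknowable. And because
    # POST is not idempotent, RFC 2616 says a client MUST NOT retry
    # it automatically.
    outcome = "no HTTP response"

print(outcome)  # -> no HTTP response
```

Had this been a GET in the same situation, the client could safely retransmit, which is exactly the idempotency distinction the quoted 1.1 text draws.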
Hi, Sean Kennedy wrote: > > > Hi, > I have a simple q: if TCP is a reliable protocol and HTTP uses TCP, > how come HTTP is viewed as unreliable? A specific example that > demonstrates would be very welcome. Perhaps this thread might be of interest (regarding the reliability of TCP): http://lkml.indiana.edu/hypermail/linux/kernel/0106.1/1154.html Best wishes, Bruno.
good info. On Fri, Mar 6, 2009 at 12:04 PM, Bruno Harbulot < Bruno.Harbulot@...> wrote: > Hi, > > > Sean Kennedy wrote: > > > > > > Hi, > > I have a simple q: if TCP is a reliable protocol and HTTP uses TCP, > > how come HTTP is viewed as unreliable? A specific example that > > demonstrates would be very welcome. > > Perhaps this thread might be of interest (regarding the reliability of > TCP): http://lkml.indiana.edu/hypermail/linux/kernel/0106.1/1154.html > > Best wishes, > > Bruno. > > >
Out of historical interest, I'm trying to find out the motivation behind trying to add stateful communication to HTTP (in the form of cookies, URL rewriting and similar approaches). In hindsight, trying to make HTTP stateful seems to be such an obviously bad idea that I wonder whether I'm just blind, or whether there really were no good reasons. Any pointers, or historical recollections, would be appreciated. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Mar 7, 2009, at 6:20 AM, Stefan Tilkov wrote: > Out of historical interest, I'm trying to find out the motivation > behind trying to add stateful communication to HTTP (in the form of > cookies, URL rewriting and similar approaches). > > In hindsight, trying to make HTTP stateful seems to be such an > obviously bad idea that I wonder whether I'm just blind, or whether > there really were no good reasons. > > Any pointers, or historical recollections, would be appreciated. > See Brian Behlendorf's early proposal for a State header field (mostly to keep track of netnews-style articles read) and later discussion of Netscape Cookies, both on www-talk archives. There were also a lot of contemporaneous complaints about statelessness with regards to gateways, particularly of the screen-scraping kind interfacing with old mainframe apps. ....Roy
Hi Craig,
Craig McClanahan wrote:
>
>
> On Tue, Feb 17, 2009 at 4:50 AM, Jeff Thorn <jeff@...
> <mailto:jeff%40thorntechnologies.com>> wrote:
> > Hi Craig,
> > Thanks for the response. I haven't committed 100% to a particular
> framework
> > yet. Out of curiosity, how would you implement it in Jersery?
> >
>
> Jersey 1.0.2 (recently released) includes a mechanism to provide
> filters that are invoked either globally, or on particular resource
> URIs. In addition, you can use a filter to inject a security context
> that includes logic to perform role based authorization. To see an
> example of this in action, check out the "atompub-contacts-server"
> example in the "samples" directory. In particular, look at class
> "com.sun.jersey.samples.contacts.auth.SecurityFilter".
I've just had a look at this sample SecurityFilter, and I've noticed
that you had modelled "user" as a role.
We've had a discussion on a similar topic on the Restlet-code mailing
list [1][2]. I was saying that "owner" (same as "user" in your example)
was not a role as such, but a property of the resource.
I'll quote my own post in [1]:
> [...] assuming that you can express the owner from the relevant part
> of the URI seems dangerous. It might work in some cases with "pretty
> URIs", but in practice, that's going to be hard to generalise.
> Suppose you have "message" resources (for example, for a forum) all
> stored in a DB. Each message row contains the content of the message and
> which user wrote it (at least). Only authors of a given message and
> admins can delete this message. You're going to have URI of the form
> "/message/{messageId}". "Admin" is going to be a role, but there's no
> way you can simply express the "owner" property as a role in the
> configuration file. (Again, I'd be happy to be proven wrong with an
> example.)
> As I've said in the previous thread, "owner" is not usually a role in
> the RBAC sense.
> In the servlet model, it's quite hard to have "owner" being a role for a
> particular resource, for example. The only way I can think of would be
> to have a filter load the resource before it reaches the servlet, decide
> whether or not to give the user that "owner" role and only then protect
> the target servlet. It sounds feasible, but I'm not sure if it's very
> natural.
> If you do the simplistic comparison of Servlet to Restlet by mapping
> servlet filters to restlet filters and servlet to resource, you end up
> having some logic that would be natural to have in the resource class
> (loading the row from the DB) in the filter. This doesn't seem like good
> practice in Restlet either.
The sample SecurityFilter in Jersey seems to rely on the fact that it's
a "pretty" URI from which the owner name can be deduced.
Although URIs ought to be opaque for the client application, they don't
have to be opaque for the server, so that's fair enough. However, I'm
not convinced that model can work well in general.
(a) Even if you can tell whether a user is the owner from the URI
("this.principal.getName().endsWith(pathParam)" in this example), it
means that the SecurityFilter has to know about this URI structure and
won't be easy to re-use in another context;
(b) When there is no way to deduce the role from the URI and the user
Principal, you are likely to have to duplicate (or to split) some of
the work of the resource class into the filter (for
example, by loading the owner from a database, probably based on the
resource type, or perhaps by extracting who has which role from the
resource itself). In addition, if these role mappings are loaded from a
database (where the resource data is too), all this should probably be
done as part of the same transaction.
Could you give a bit of background on why you decided to model the owner
(or "user" in your example) as a role?
More generally, do you have examples of RBAC systems (Servlet/Java or
not) where being the owner of a specific resource is modelled as a role?
Best wishes,
Bruno.
[1]
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=7458&dsMessageId=1256759
[2]
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=7458&dsMessageId=1134688
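[Editor's note] Bruno's point, that "owner" is a property of the resource rather than an RBAC role, can be sketched in a few lines. This is a hypothetical illustration (Message, can_delete, and the in-memory tables are all invented for the example), not the Jersey or Restlet API:

```python
# Hypothetical data model: each message row records its author; roles
# are assigned per user, but "owner" appears nowhere in the role table.
class Message:
    def __init__(self, message_id, author, body):
        self.message_id = message_id
        self.author = author
        self.body = body

MESSAGES = {42: Message(42, "alice", "hello")}        # stand-in for the DB
ROLES = {"bob": {"admin"}, "alice": set(), "carol": set()}

def can_delete(principal, message_id):
    # The URI "/message/{messageId}" carries no owner information, so
    # the resource row must be loaded before the decision can be made;
    # a URI-pattern filter alone cannot express this.
    msg = MESSAGES[message_id]
    return principal == msg.author or "admin" in ROLES.get(principal, set())

print(can_delete("alice", 42))  # True: alice owns message 42
print(can_delete("bob", 42))    # True: bob holds the "admin" role
print(can_delete("carol", 42))  # False: neither owner nor admin
```

Because the check needs the row anyway, it sits naturally in the resource class (and inside the same transaction as the data access), which is Bruno's objection to pushing it out into a filter.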
I'll just say that of all the REST constraints, the statelessness one is the least well understood. Cookies and URL rewriting and similar approaches are not necessarily violations of REST, nor are they bad decisions. If you read the text closely, it talks about statelessness as being the easiest to build from the server perspective, which is obviously true. But 'ease of building' is by no means the only criterion for evaluating one part of a system. There are a large number of very scalable and performant web sites that retain state on the server. If you want a very high-performance web site, you have to move a bunch of state into a cache of some kind, be it memcached, APC, or any of the many other solutions. You then have to tie the client into a server that holds the client's cached state. These systems are definitely more complicated than a stateless web server, but it's all about making trade-offs in building distributed systems. Cheers, Dave 2009/3/7 Roy T. Fielding <fielding@...>: > On Mar 7, 2009, at 6:20 AM, Stefan Tilkov wrote: > >> Out of historical interest, I'm trying to find out the motivation >> behind trying to add stateful communication to HTTP (in the form of >> cookies, URL rewriting and similar approaches). >> >> In hindsight, trying to make HTTP stateful seems to be such an >> obviously bad idea that I wonder whether I'm just blind, or whether >> there really were no good reasons. >> >> Any pointers, or historical recollections, would be appreciated. >> > > See Brian Behlendorf's early proposal for a State header field > (mostly to keep track of netnews-style articles read) and later > discussion of Netscape Cookies, both on www-talk archives. > > There were also a lot of contemporaneous complaints about > statelessness with regards to gateways, particularly of the > screen-scraping kind interfacing with old mainframe apps. > > ....Roy > >
I see what you're saying Dave, but those trade-offs are based - practically speaking - on available tooling (design time, standards, frameworks, infrastructure etc). Historically, it seems to me that industry was geared more towards providing proprietary, black box, atomic solutions and tooling. An environment where interoperability is constrained purposefully by stakeholders' natural desire to control supply (proprietary standards), and demand (controlling/owning state). As I understand it, a lot of the properties of RESTful architecture actually play against the forces of this desired environment quite directly - decentralisation existing as the apparent antithesis of value creation. I actually believe this process to have been a necessary part of an on-going evolution, where concentrations of value provided the fuel and combustion required to 'build up speed'. Now it seems that a combination of the information age, the semantic web, and open source/standards are breaking barriers and encouraging new business approaches to building distributed systems - spurred on by the current and future state of the world economy, no doubt. So (to me anyway!) it appears the controlling factors have been more economic, political, and social, than directly technical. Cheers, Mike > I'll just say that of all the REST constraints, the statelessness one > is the least well understood. Cookies and URL rewriting and similar > approaches are not necessarily violations of REST, nor are they bad > decisions. If you read the text closely, it talks about statelessness > as being the easiest to build from the server perspective, which is > obviously true. But 'ease of building' is by no means the only > criteria for evaluating one part of a system. > > There are a large number of very scalable and performant web sites > that retain state on the server. 
If you want a very high performant > web site, you have to move a bunch of state into a cache of some kind, > beit memcached, APC, or any of the many other solutions. You then > have to tie the client into a server that holds the client's cached > state. These systems are definitely more complicated than a stateless > web server, but it's all about making trade-offs in building > distributed systems. > > Cheers, > Dave >
It seems to me most of the threads here in recent weeks are off-topic. It isn't the end of the world if a non-RESTful interface is included in an otherwise-RESTful API. Perhaps there should be an http-batch-discuss group? There certainly are many interesting proposals floating around out there, none of which can ever meet the constraints of the uniform connector interface, unfortunately. This makes sense: http://tech.groups.yahoo.com/group/rest-discuss/message/12139 Not only does HTTP not provide a mechanism to carry out multiple operations for *any* of its request methods, but neither would any RESTful protocol. The REST style is about the junction between the application and the network. The style allows the independent evolution of client and server logic in an application -- provided the interface remains uniform for all resources. If I create an HTML form with a list of URLs, each with its own checkbox, and one "DELETE" button, it's trivial to write a POST handler to perform a batch delete in one transaction. It's even RESTful, if my application makes no other use of POST. If I'm also using POST to accept new content, though, my application's POST semantics become clear as mud. Which is why the common case of deletion is separated out as its own method in a RESTful protocol to begin with. In the common case of the Web, DELETE traffic is a tiny fraction of GET traffic. So it just doesn't _matter_ that some sort of client logic like my HTML form can accomplish the same objective as discrete DELETE requests in one round trip, in terms of bandwidth. Splitting hairs. Same with batch updates. The bandwidth conserved by caching GET traffic is an order of magnitude greater than that consumed by fringe cases where bandwidth could be saved by batching multiple DELETE (or PUT or POST) requests. REST optimizes for GET, not batch processing. 
If my application supports RESTful, resource-by-resource deletion *and* provides an HTML interface for performing batch deletion (or updates), no big deal! There's nothing wrong with pragmatism or ease of use. Eventually, forms technology will catch up (looks like), and these outmoded interfaces may be replaced. Clients may evolve independently of the server. So long as I understand that the Platonic (or Royonic) ideal here is that eventually my application only assigns one meaning to POST, and the client logic fires off a bunch of DELETE requests. In the case of my HTML form, it wouldn't look any different to the user. HTTP leaves plenty of wiggle room for how its methods may be used. The key to a REST API is defining discrete semantics for each method. The objective is an application which "constrain[s] the interface to a consistent set of semantics for all resources" under its control. This means if you want POST to accept new content, then it means only that, for all content-types. NOT: sometimes accept new content, sometimes delete content, sometimes update content, depending on media type (or header, or parameter) or some other metric or batch-response trigger. The very notion of batch processing seems antithetical to REST, since (just like my HTML form) it decreases visibility and reliability for the sake of optimizing for uncommon cases. REST makes a fine hammer. Batch processing is a screw. -Eric
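[Editor's note] Eric's HTML-form scenario, a POST handler performing a batch delete in one transaction, might look like this sketch (Python, all names invented). As he says, it is only unambiguous as long as POST carries no other meaning in the application:

```python
# Illustrative resource store keyed by URI.
resources = {"/doc/1": "a", "/doc/2": "b", "/doc/3": "c"}

def post_batch_delete(checked_uris):
    # One transaction: refuse the whole batch if any URI is unknown,
    # so the form submission either deletes everything or nothing.
    if not all(uri in resources for uri in checked_uris):
        return 409  # Conflict: batch rejected, nothing deleted
    for uri in checked_uris:
        del resources[uri]
    return 200

print(post_batch_delete(["/doc/1", "/doc/9"]))  # 409, store untouched
print(post_batch_delete(["/doc/1", "/doc/2"]))  # 200, two deletions
print(sorted(resources))                        # ['/doc/3']
```

The "Royonic ideal" version would drop this handler entirely: the client fires one DELETE per checked URI, and the form looks no different to the user.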
However, REST is independent of the protocol, and the constraint is "a uniform interface", not an HTTP-like uniform interface. Look at these definitions of WebDAV methods: The WebDAV BMOVE Method is similar to the MOVE Method<http://msdn.microsoft.com/en-us/library/aa142926%28EXCHG.65%29.aspx>but it is used to ***move one or more target resources*** to a destination. The WebDAV BDELETE Method is similar to the DELETE Method<http://msdn.microsoft.com/en-us/library/aa142839%28EXCHG.65%29.aspx>but it is used to ***delete one or more target resources***. Now it can be argued whether WebDAV is a RESTful protocol, but you can surely build a RESTful app on top of it... So I don't think I agree when you say "but neither would any RESTful protocol." But I agree on the HTTP side of it. _______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota mobile: +353(0)877718363 mailto: amsmota@gmail.com altmail: amsmota@... skype: amsmota msn: antoniomsmota@... profile: www.linkedin.com/in/amsmota _______________________________________________ 2009/3/12 Eric J. Bowman <eric@bisonsystems.net> > It seems to me most of the threads here in recent weeks are off-topic. > It isn't the end of the world if a non-RESTful interface is included in > an otherwise-RESTful API. Perhaps there should be an http-batch- > discuss group? There certainly are many interesting proposals floating > around out there, none of which can ever meet the constraints of the > uniform connector interface, unfortunately. This makes sense: > > http://tech.groups.yahoo.com/group/rest-discuss/message/12139 > > Not only does HTTP not provide a mechanism to carry out multiple > operations for *any* of its request methods, but neither would any > RESTful protocol. The REST style is about the junction between the > application and the network. 
The style allows the independent > evolution of client and server logic in an application -- provided the > interface remains uniform for all resources. > > If I create an HTML form with a list of URLs, each with its own > checkbox, and one "DELETE" button, it's trivial to write a POST handler > to perform a batch delete in one transaction. It's even RESTful, if > my application makes no other use of POST. If I'm also using POST > to accept new content, though, my application's POST semantics become > clear as mud. Which is why the common case of deletion is separated > out as its own method in a RESTful protocol to begin with. > > In the common case of the Web, DELETE traffic is a tiny fraction of GET > traffic. So it just doesn't _matter_ that some sort of client logic > like my HTML form can accomplish the same objective as discrete DELETE > requests in one round trip, in terms of bandwidth. Splitting hairs. > Same with batch updates. The bandwidth conserved by caching GET > traffic is an order of magnitude greater than that consumed by fringe > cases where bandwidth could be saved by batching multiple DELETE (or > PUT or POST) requests. REST optimizes for GET, not batch processing. > > If my application supports RESTful, resource-by-resource deletion *and* > provides an HTML interface for performing batch deletion (or updates), > no big deal! There's nothing wrong with pragmatism or ease of use. > Eventually, forms technology will catch up (looks like), and these > outmoded interfaces may be replaced. Clients may evolve independently > of the server. So long as I understand that the Platonic (or Royonic) > ideal here is that eventually my application only assigns one meaning > to POST, and the client logic fires off a bunch of DELETE requests. In > the case of my HTML form, it wouldn't look any different to the user. > > HTTP leaves plenty of wiggle room for how its methods may be used. The > key to a REST API is defining discrete semantics for each method. 
The > objective is an application which "constrain[s] the interface to a > consistent set of semantics for all resources" under its control. This > means if you want POST to accept new content, then it means only that, > for all content-types. NOT: sometimes accept new content, sometimes > delete content, sometimes update content, depending on media type (or > header, or parameter) or some other metric or batch-response trigger. > > The very notion of batch processing seems antithetical to REST, since > (just like my HTML form) it decreases visibility and reliability for > the sake of optimizing for uncommon cases. REST makes a fine hammer. > Batch processing is a screw. > > -Eric > >
Hi, thank you for reading my post. Is there any tutorial about REST and NetBeans? I mean, does NetBeans provide some facilities for developing REST clients as it does for JAX-WS clients? Thanks, Pavan.
If you're looking for a Java-based HTTP client that knows how to seamlessly transform XML and JSON to objects, use the Jersey client API or the RESTEasy Client API. They probably don't meet the definition of REST clients, but that's a topic for another day :) -Solomon On Mon, Mar 16, 2009 at 10:46 PM, pavan.potti <pavan.potti@...> wrote: > > Hi > thank you for reading my post > Is there any tutorial about REST and netbeans? > I mean does NetBeans provides some facilities for developing REST client as > it does for JAX-WS clients? > > Thanks, > Pavan. > > >
> > And however REST is independent of the protocol, and the constraint > is "a uniform interface", not a HTTP-like uniform interface. > I'm aware of that. But, I don't see how batching would be RESTful even if the method in question were FTP-derived instead of HTTP-derived. In any uniform interface, each operation is given its own method. When a method is given more than one function, the uniform interface constraint is broken -- like using POST to accept new data while also using POST to tunnel batch deletion. When a method is defined which describes a batch operation, say BMOVE or BDELETE, the uniform interface constraint is broken just the same. In terms of visibility, the nature of the request can't be determined at the protocol level. In a uniform interface design, a client makes a request that an operation be performed on the URI _of_ that request. Batch requests are, by nature, RPC requests where the relevant URIs are included in message bodies instead of being request targets themselves. Sending protocol-level instructions to the server in an entity body (BMOVE, BDELETE) instead of as part of the request smacks of RPC design. In REST, a side effect of creating a resource may be the creation of another resource, but since the client didn't request that other resource's creation, the client doesn't need to be notified of it. However, if the client is requesting a whole bunch of URIs to be deleted by passing a list of URIs to some other URI, the server response is either "total failure" or "read this document to see how things went" for the URI-by-URI breakdown. The success or failure of the individual operations is hidden in a document instead of being visible at the protocol level as a status response. This is a far cry from the uniform interface approach, where if a client wants to delete multiple resources, it makes a DELETE request to each unique URI, and receives a succeed/fail response for each operation -- visibility instead of opacity. 
I would argue that BDELETE is a stateful request. The server must track multiple operations before responding to the client, and do this reliably in case of interruption; both issues are cleanly avoided in a REST request, where the contents of the request entity never contain protocol-level instructions for the server to carry out. > > Now it can be arguable if Webdav is a RESTful protocol, but you can > surely build a RESTfull app on top of it... So I think don't agree > when you say "but neither would any RESTful protocol." But I agree on > the the HTTP side of it. > Allow me to correct my phrasing: I should have said "API" there instead of "protocol" (for that matter, I said "HTTP" where I meant "RFC 2616"). I've come to the conclusion that there's no such thing as a RESTful protocol -- there can only be RESTful APIs. A RESTful API may be written using Atom Protocol, yet I've also seen Atom implementations that defy not only REST, but common sense as well. ;-) That a RESTful API may be written using WebDAV as its protocol, I have no doubt. But, this is not to say that all methods described in HTTP (including those in WebDAV) conform to the uniform interface. WebDAV is orthogonally useful for REST development, bearing in mind that its problem space is not the "common case of the Web" but the specialized case of remote filesystem manipulation. RESTful batch processing is still a red herring. HTTP batch processing is not, it's just off-topic here. :-) -Eric
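[Editor's note] The visibility argument can be made concrete. In this sketch (illustrative names, not any real API), per-URI DELETE yields one protocol-level status per resource, while a batch request yields a single "success" whose real outcomes are buried in the entity body:

```python
# Illustrative set of existing resource URIs.
resources = {"/a", "/b"}

def delete(uri):
    # Uniform interface: the status code is visible to any intermediary.
    if uri in resources:
        resources.discard(uri)
        return 204
    return 404

def batch_delete(uris):
    # Batch style: one 200 on the wire; per-URI results exist only
    # inside a document the client (but no intermediary) must parse.
    return 200, {uri: delete(uri) for uri in uris}

print(delete("/a"))                # 204
print(delete("/a"))                # 404 (already gone)
print(batch_delete(["/b", "/c"]))  # (200, {'/b': 204, '/c': 404})
```

In the batch case the partial failure of "/c" never surfaces at the protocol level, which is the opacity Eric objects to.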
Hi all, I'm pondering the use of a claim-based system for a REST architecture I'm developing. I have a few ideas on how to roll a custom http auth scheme to support SAML in an http-friendly way (aka at the http message level, not through redirects) Is anyone aware of some early RFC / standardization work going in this direction? Seb
> Batch requests are, by nature, RPC requests where the relevant URIs are > included in message bodies instead of being request targets themselves. > Sending protocol-level instructions to the server in an entity body > (BMOVE, BDELETE) instead of as part of the request smacks of RPC design. If by Batch you mean having semantics by which a set of operations should all succeed or fail as one, then I'm disagreeing that it's "by nature" RPC. Unless you consider any form of transaction boundary RPC. > I would argue that BDELETE > is a stateful request. The server must track multiple operations > before responding to the client, and do this reliably in case of > interruption; both issues are cleanly avoided in a REST request, where > the contents of the request entity never contain protocol-level > instructions for the server to carry out. State within one operation may exist, but it's internal to the server and its implementation. One server will often keep state around while it goes and creates multiple new resources on a simple action invoked through a POST, and I don't hear anyone complain about it. If a resource creation can engender many other resources being created, why would the deletion of one resource ending up in many resources being deleted be an issue?
Sebastien Lambla wrote: > > > Batch requests are, by nature, RPC requests where the relevant URIs > > are included in message bodies instead of being request targets > > themselves. Sending protocol-level instructions to the server in an > > entity body (BMOVE, BDELETE) instead of as part of the request > > smacks of RPC design. > > If by Batch you mean having semantics by which a set of operations > should all succeed or fail as one, then I'm disagreeing that it's "by > nature" RPC. > Hmmm, no, what I'm calling a batch operation in HTTP is any method where the client sends the server a list of URIs to be operated on. I'm saying that sending a list of URIs to the server, instead of interacting with each URI in request-response fashion, is inherently un-RESTful. > > Unless you consider any form of transaction boundary RPC. > That's a good question; I assume you mean, say, MOVE or COPY methods. Using PUT to copy, or PUT and DELETE to move, seems to be the uniform interface approach. Whereas MOVE and COPY require some URI other than the one being interacted with, to effect the transaction. Since it's just one URI instead of a list, I wouldn't call MOVE and COPY batch operations -- but they smack of RPC just the same. > > > I would argue that BDELETE > > is a stateful request. The server must track multiple operations > > before responding to the client, and do this reliably in case of > > interruption; both issues are cleanly avoided in a REST request, > > where the contents of the request entity never contain > > protocol-level instructions for the server to carry out. > > State within one operation may exist, but it's internal to the server > and it's implementation. One server will often keep state around > while it goes and create multiple new resources on a simple action > invoked through a POST and I don't hear anyone complain about it. > Because the client didn't request the creation of those ancillary resources. 
The server responded 'success' to the POST, freed up those resources to handle other requests, then created the ancillary resources. With BDELETE, the server has to keep the connection open until each request in the batch succeeds or fails, in order to generate a detailed response entity for the client. While this doesn't rise to the level of storing state between requests, since we're only talking about one request, it still seems like "shared context" to me. > > If a resource creation can engender many other resources being > created, why would the deletion of one resource ending up in many > resources being deleted be an issue? > What's at issue is what the client requested, not what the server does. If a client requests that an Atom Feed resource be DELETEd, it's up to developer discretion whether the constituent Atom Entry resources are deleted or not. If collection members are deleted when the feed is deleted, the client doesn't need notification beyond success/failure regarding the request URI for the collection, which is all the client requested. The problem arises when the client wants to request the deletion of entries 3, 7 and 10 from within a feed, as a single operation. Since the client is requesting each deletion, it needs to be notified of the success/failure of each action. Here's where we break from REST architecture -- sending a list of URIs to the server, and receiving a "success" response whose entity must be parsed to determine the results of the operation for each URI in the list. This is not a uniform interface, a uniform interface consists of a request/response to each URI the client wants to perform an operation on. > > If by Batch you mean having semantics by which a set of operations > should all succeed or fail as one, then I'm disagreeing that it's "by > nature" RPC. 
> If member entries are deleted when a collection is deleted, that set of operations succeeds or fails as one, but that's just opaque application behavior, not a batch request. When the entity of the client request is attempting to give multiple instructions to the server, it's a batch request. Where the client makes a request to one URI to affect changes to some other resource, whose URI must be included but is not the target of the request, it's an RPC operation (MOVE, COPY). -Eric
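[Editor's note] Eric's uniform-interface alternative to MOVE ("Using PUT to copy, or PUT and DELETE to move") composes in the obvious way. A sketch with invented names:

```python
# Illustrative in-memory store keyed by URI.
store = {"/old": "payload"}

def get(uri):
    return store[uri]

def put(uri, body):
    store[uri] = body

def delete(uri):
    store.pop(uri, None)

def move(src, dst):
    # No MOVE method needed: copy the representation to the target URI,
    # then delete the source. Each step is a uniform-interface request
    # whose success or failure is visible on its own, and no request
    # ever names a URI other than the one it targets.
    put(dst, get(src))
    delete(src)

move("/old", "/new")
print(sorted(store))  # ['/new']
```

Contrast with MOVE or COPY, where the destination URI must travel inside the request rather than be its target, which is the "smacks of RPC" point above.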
Hmm; perhaps see http://oauth.net/, http://www.hueniverse.com/hueniverse/2009/03/oauth-core-10-reborn.html --> http://tools.ietf.org/html/draft-hammer-oauth-01. Sebastien Lambla wrote: > > Hi all, > > > > I'm pondering the use of a claim-based system for a REST architecture > I'm developing. I have a few ideas on how to roll a custom http auth > scheme to support SAML in an http-friendly way (aka at the http > message level, not through redirects) > > > > Is anyone aware of some early RFC / standardization work going in this > direction? > > > > Seb > > > >
Hello All,
Glad to be a part of this serious REST group. I just started to learn about it, and have been searching for a month now for tutorials. No tutorial was found on the net for building and explaining a RESTful web service (stress on simple). I was looking for a simple RESTful web service, so that it is clear what code goes for GET and POST. All I could find was an example from NetBeans. I am using NetBeans, but the example stated was somewhat complex for beginners like me. Please guide me to where to find those examples, and REST client tutorials. A simple web service, like the Calculator for SOAP.
Yours Sincerely,
Pavan.
> > How about doing this in the following way: > > 1. use PUT to create a composite resource that contains all the > resource that are going to be deleted at the "same" time. Of course, > the server side should know the purpose of this PUT, and return the > URI of the created composite resource. > 2. use DELETE to delete the composite resource. > Let's use Atom as an example, here. The application developer could have the server interpret the deletion of a collection, as a request to delete all member resources in that collection. A user could then create a collection, like any other, for the purpose of deleting it and whatever entries it references. The tradeoff in such a configuration, is eliminating the ability to delete a collection *without* deleting its member resources. Ask yourself if any reduction in API functionality is acceptable, for the purpose of optimizing DELETE -- a method whose traffic doesn't amount to a very big slice of the overall network-traffic pie to begin with. > > In this way, both the client side and the server side have clear > understanding of what each operation and each URI mean. > > I feel it is more explicit and clear than sending a POST with many > "delete". > Ah, but the question is, would this be more explicit and clear than having the client make atomic DELETE requests to the desired URIs, in accordance with Atom Protocol and REST? There is no possibility for misunderstanding the purpose of an atomic DELETE request made against a specific URI, or the resulting response status. From the standpoint of the user deleting a collection for the purpose of batch-deleting member resources, the problem is one of visibility. Using atomic, URI-by-URI DELETE requests tells intermediaries to expire any cached representations of the deleted resources. Using an opaque server behavior, i.e. 
relying on the server to behave a certain way when a collection is deleted, won't cause intermediaries to expire member resources, meaning the user who performed the DELETE could still potentially dereference the "deleted" resources from a cache. This confusion certainly doesn't arise when each resource is deleted by making a DELETE request against its URI and receiving a "success" status code, a visible transaction that an intermediary can understand and act upon by expiring cached representations. The user won't (or shouldn't) experience a reload re-rendering a representation of a resource he thinks he's just deleted, using RESTful URI-by-URI deletion. -Eric
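Eric's visibility argument can be sketched in a few lines. Everything below -- the cache, origin, and URIs -- is a hypothetical toy model invented for illustration, not any real proxy or server:

```python
# Toy model of the caching argument above: an intermediary can only expire
# what it sees. An opaque "deleting the collection deletes the members"
# behavior leaves member representations stale in the cache, while
# URI-by-URI DELETE expires them. All names here are invented.

class Origin:
    def __init__(self, resources):
        self.resources = dict(resources)

    def get(self, uri):
        return self.resources.get(uri, "404 Not Found")

    def delete(self, uri):
        if uri == "/coll":               # opaque server behavior:
            self.resources.clear()       # collection delete wipes members
        else:
            self.resources.pop(uri, None)

class Cache:
    """Shared cache sitting between the user and the origin."""
    def __init__(self, origin):
        self.origin = origin
        self.store = {}                  # uri -> cached representation

    def handle(self, method, uri):
        if method == "GET":
            if uri in self.store:        # cache hit, origin never consulted
                return self.store[uri]
            rep = self.origin.get(uri)
            self.store[uri] = rep
            return rep
        if method == "DELETE":           # visible: expire this URI
            self.origin.delete(uri)
            self.store.pop(uri, None)
            return "204 No Content"

cache = Cache(Origin({"/coll": "feed", "/coll/1": "entry1"}))
cache.handle("GET", "/coll/1")           # warm the cache
cache.handle("DELETE", "/coll")          # opaque batch delete at the origin
stale = cache.handle("GET", "/coll/1")   # still served from cache: "entry1"

cache.handle("DELETE", "/coll/1")        # URI-by-URI delete is visible
fresh = cache.handle("GET", "/coll/1")   # now "404 Not Found"
```

The point is the asymmetry: the GET after the opaque collection delete still re-renders the "deleted" entry from cache, which is exactly the reload problem described above.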
I agree that creating a composite resource will introduce some semantic confusion. This post was a reply to the question in http://tech.groups.yahoo.com/group/rest-discuss/message/12138 Although the context of the original question of including multiple resources in a DELETE was not clear, I assumed that the delete task for those resources should be atomic. That is, if successful, all resources are deleted, or if failed, none of the resources is deleted. Separate DELETE requests one after the other cannot achieve this goal. Cheers, Dong On Wed, Mar 18, 2009 at 4:23 AM, Eric J. Bowman <eric@...> wrote: > > > > How about doing this in the following way: > > > > 1. use PUT to create a composite resource that contains all the > > resource that are going to be deleted at the "same" time. Of course, > > the server side should know the purpose of this PUT, and return the > > URI of the created composite resource. > > 2. use DELETE to delete the composite resource. > > > > Let's use Atom as an example, here. The application developer could > have the server interpret the deletion of a collection, as a request to > delete all member resources in that collection. A user could then > create a collection, like any other, for the purpose of deleting it and > whatever entries it references. > > The tradeoff in such a configuration, is eliminating the ability to > delete a collection *without* deleting its member resources. Ask > yourself if any reduction in API functionality is acceptable, for the > purpose of optimizing DELETE -- a method whose traffic doesn't amount > to a very big slice of the overall network-traffic pie to begin with. > > > > > In this way, both the client side and the server side have clear > > understanding of what each operation and each URI mean. > > > > I feel it is more explicit and clear than sending a POST with many > > "delete". 
> > > > Ah, but the question is, would this be more explicit and clear than > having the client make atomic DELETE requests to the desired URIs, in > accordance with Atom Protocol and REST? There is no possibility for > misunderstanding the purpose of an atomic DELETE request made against a > specific URI, or the resulting response status. > > From the standpoint of the user deleting a collection for the purpose > of batch-deleting member resources, the problem is one of visibility. > Using atomic, URI-by-URI DELETE requests tells intermediaries to expire > any cached representations of the deleted resources. Using an opaque > server behavior, i.e. relying on the server to behave a certain way > when a collection is deleted, won't cause intermediaries to expire > member resources, meaning the user who performed the DELETE could still > potentially dereference the "deleted" resources from a cache. > > This confusion certainly doesn't arise when each resource is deleted by > making a DELETE request against its URI and receiving a "success" > status code, a visible transaction that an intermediary can understand > and act upon by expiring cached representations. The user won't (or > shouldn't) experience a reload re-rendering a representation of a > resource he thinks he's just deleted, using RESTful URI-by-URI deletion. > > -Eric > -- http://dongnotes.blogspot.com/
Pardon me for barging in on this thread, but I wanted to ask a general question about composite resources, batching, etc. First the assumptions:

1. assume both the user-agent and origin server have full agreement on the semantics of MOVE, COPY, and/or Batch DELETE as HTTP methods
2. assume the URI used for these actions is the same one used for a typical POST factory (MOVE /customers/, COPY /customers/, BDELETE /customers/)
3. assume a single resource can be sent to the origin server that contains all the details to handle the above methods
4. assume the origin server can enforce atomicity for these methods
5. assume the origin server already marks all GETs affected by these methods (the underlying resource representations) with Cache-Control:no-cache and/or Pragma:no-cache

Now the question: Setting aside the issue of whether these methods qualify as REST-ful, are there still folks who would discourage implementing composites, batching? If yes, why? Are there other considerations that I've not spelled out here? Alternately, assume for item #1, only POST or PUT is used (not MOVE, COPY, BDELETE, etc.). Also assume for #3 that the internet media type reflects the intention of the user-agent (application/vnd.customers-move+xml, /vnd.customers-bdelete+xml, etc.). Does this modification make the process more/less desirable? mca http://amundsen.com/blog/ On Wed, Mar 18, 2009 at 11:34, Dong Liu <edongliu@gmail.com> wrote: > I agree that to create a composite resource will introduce some semantic > confusion. > > This post was a reply to the question in > http://tech.groups.yahoo.com/group/rest-discuss/message/12138 > > Although the context of the original question of including multiple > resources in a DELETE was not clear, I assumed that the delete task of those > resource should be atomic. That is, if successful, all resources are > deleted, or if failed, none of the resources is deleted. Separate DELETE > request one after the other can not achieve this goal. 
> > Cheers, > > Dong > > On Wed, Mar 18, 2009 at 4:23 AM, Eric J. Bowman <eric@...> > wrote: >> >> > >> > How about doing this in the following way: >> > >> > 1. use PUT to create a composite resource that contains all the >> > resource that are going to be deleted at the "same" time. Of course, >> > the server side should know the purpose of this PUT, and return the >> > URI of the created composite resource. >> > 2. use DELETE to delete the composite resource. >> > >> >> Let's use Atom as an example, here. The application developer could >> have the server interpret the deletion of a collection, as a request to >> delete all member resources in that collection. A user could then >> create a collection, like any other, for the purpose of deleting it and >> whatever entries it references. >> >> The tradeoff in such a configuration, is eliminating the ability to >> delete a collection *without* deleting its member resources. Ask >> yourself if any reduction in API functionality is acceptable, for the >> purpose of optimizing DELETE -- a method whose traffic doesn't amount >> to a very big slice of the overall network-traffic pie to begin with. >> >> > >> > In this way, both the client side and the server side have clear >> > understanding of what each operation and each URI mean. >> > >> > I feel it is more explicit and clear than sending a POST with many >> > "delete". >> > >> >> Ah, but the question is, would this be more explicit and clear than >> having the client make atomic DELETE requests to the desired URIs, in >> accordance with Atom Protocol and REST? There is no possibility for >> misunderstanding the purpose of an atomic DELETE request made against a >> specific URI, or the resulting response status. >> >> From the standpoint of the user deleting a collection for the purpose >> of batch-deleting member resources, the problem is one of visibility. 
>> Using atomic, URI-by-URI DELETE requests tells intermediaries to expire >> any cached representations of the deleted resources. Using an opaque >> server behavior, i.e. relying on the server to behave a certain way >> when a collection is deleted, won't cause intermediaries to expire >> member resources, meaning the user who performed the DELETE could still >> potentially dereference the "deleted" resources from a cache. >> >> This confusion certainly doesn't arise when each resource is deleted by >> making a DELETE request against its URI and receiving a "success" >> status code, a visible transaction that an intermediary can understand >> and act upon by expiring cached representations. The user won't (or >> shouldn't) experience a reload re-rendering a representation of a >> resource he thinks he's just deleted, using RESTful URI-by-URI deletion. >> >> -Eric > > > > -- > http://dongnotes.blogspot.com/ > > >
On Tue, Feb 24, 2009 at 8:38 AM, Dong Liu <edongliu@...> wrote: > Hi all, > > How about doing this in the following way: > > 1. use PUT to create a composite resource that contains all the > resource that are going to be deleted at the "same" time. Of course, > the server side should know the purpose of this PUT, and return the > URI of the created composite resource. > 2. use DELETE to delete the composite resource. For this to work the entire PUT/DELETE combo needs to be atomic by operating on a composite resource that is unique to each set of resources being deleted. For that to happen, we need an additional step (before 1) to agree on the unique composite resource, or have the server act in an uncommon way and accept PUT on one resource to create another (as suggested here). So there's a subtle difference between this pair of PUT/DELETE and any other pair of PUT/DELETE out there. From my experience, when this code gets written under deadline pressure, or maintained by someone else, or when troubleshooting a side effect of one resource from the set not being deleted, the subtle difference becomes a week-long affair. The smaller the difference, the more time spent dealing with it. From an engineering standpoint I would recommend and use POST because it's more explicit and clear. Assaf > > > In this way, both the client side and the server side have clear > understanding of what each operation and each URI mean. > > I feel it is more explicit and clear than sending a POST with many > "delete". > > Cheers, > > Dong > > -- > http://dongnotes.blogspot.com/
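The POST-based approach Assaf recommends, combined with the all-or-nothing semantics Dong asks for, might look like the following server-side sketch. The store, URIs, handler name, and choice of status codes are all illustrative assumptions, not any particular framework's API:

```python
# Hypothetical POST handler for atomic batch deletion: either every listed
# URI is deleted, or none is, so no partial state ever leaks out.

def post_batch_delete(store, uris):
    """All-or-nothing deletion of the URIs named in a POSTed request."""
    missing = [u for u in uris if u not in store]
    if missing:
        # Conflict: refuse the whole batch rather than delete a subset.
        return 409, {"undeleted": missing}
    for u in uris:
        del store[u]
    return 200, {"deleted": uris}

store = {"/orders/1": "a", "/orders/2": "b", "/orders/3": "c"}

status1, body1 = post_batch_delete(store, ["/orders/1", "/orders/9"])
# -> 409, and the store is untouched because /orders/9 does not exist

status2, body2 = post_batch_delete(store, ["/orders/1", "/orders/2"])
# -> 200, both resources removed in a single request
```

Because the whole batch travels in one POST, the atomicity check happens before any deletion, which is precisely what a sequence of independent DELETE requests cannot guarantee.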
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > In the common case of the Web, DELETE traffic is a tiny fraction of GET > traffic. So it just doesn't _matter_ that some sort of client logic > like my HTML form can accomplish the same objective as discrete DELETE > requests in one round trip, in terms of bandwidth. Splitting hairs. > Same with batch updates. The bandwidth conserved by caching GET > traffic is an order of magnitude greater than that consumed by fringe > cases where bandwidth could be saved by batching multiple DELETE (or > PUT or POST) requests. REST optimizes for GET, not batch processing. > What about the (so far) less common cases outside the Web, such as Enterprise-level distributed applications? There are certainly many examples in that domain where high-volume writing (create/update/delete) is a requirement and efficiency is key (as is atomicity). Are you suggesting that as an architecture style in its "pure" form, REST is only appropriate for the common case of read-heavy Web applications? This seems to sell the style short. Certainly, in its current form it does not directly address some of the problems associated with write-intensive applications. But the fundamental constraints still hold value for these apps, and so far I haven't seen any better alternative for building general-purpose distributed systems. Is the style so set in stone that new applications for it shouldn't be explored, even if it means looking at things from a slightly unorthodox angle? Who knows, maybe it turns out that it is possible to solve some of these problems in a way that is at least consistent with the fundamental constraints of REST. It seems to me that this forum is exactly the right place to explore that kind of thing. Cheers, scott
At Wed, 18 Mar 2009 04:23:15 -0600, Eric J. Bowman wrote: > Let's use Atom as an example, here. The application developer could > have the server interpret the deletion of a collection, as a request to > delete all member resources in that collection. A user could then > create a collection, like any other, for the purpose of deleting it and > whatever entries it references. > > The tradeoff in such a configuration, is eliminating the ability to > delete a collection *without* deleting its member resources. Ask > yourself if any reduction in API functionality is acceptable, for the > purpose of optimizing DELETE -- a method whose traffic doesn't amount > to a very big slice of the overall network-traffic pie to begin with. Is it your position that all resources which represent a collection of other resources violate REST constraints? best, Erik Hetzner
Hi Eric, On Wed, Mar 18, 2009 at 10:23 AM, Eric J. Bowman <eric@...> wrote: > > Let's use Atom as an example, here. The application developer could > have the server interpret the deletion of a collection, as a request to > delete all member resources in that collection. A user could then > create a collection, like any other, for the purpose of deleting it and > whatever entries it references. > > The tradeoff in such a configuration, is eliminating the ability to > delete a collection *without* deleting its member resources. Ask > yourself if any reduction in API functionality is acceptable, for the > purpose of optimizing DELETE -- a method whose traffic doesn't amount > to a very big slice of the overall network-traffic pie to begin with. From RFC2616: "The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line" The resources that are in a collection are called subordinate resources, which can be understood as a requirement that the super-resource (collection) be present. In any case there is no clear requirement that sub-resources outlive their super-resources. Unless I am missing something, collection deletion could be in a sense transactional. Regards, Alexandros
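Alexandros' reading -- subordinate resources need not outlive their super-resource -- could be implemented server-side as a cascading delete over the collection's URI space. The store layout and helper name below are hypothetical:

```python
# Sketch of the "subordinate resources" reading of RFC 2616: members created
# under a collection need not outlive it, so deleting the collection also
# removes everything beneath its URI, in one transactional sweep.

def delete_collection(store, collection_uri):
    """Delete a collection and every subordinate resource under it."""
    prefix = collection_uri.rstrip("/") + "/"
    doomed = [u for u in store
              if u == collection_uri or u.startswith(prefix)]
    for u in doomed:
        del store[u]
    return doomed

store = {
    "/feeds/blog": "<feed/>",
    "/feeds/blog/entry1": "<entry/>",
    "/feeds/blog/entry2": "<entry/>",
    "/feeds/other": "<feed/>",
}
gone = delete_collection(store, "/feeds/blog")
# The collection and both subordinates are removed; /feeds/other survives.
```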
On Wed, Mar 18, 2009 at 11:59 AM, scameron02 <scott.cameron@...> wrote: > > --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: >> >> In the common case of the Web, DELETE traffic is a tiny fraction of GET >> traffic. So it just doesn't _matter_ that some sort of client logic >> like my HTML form can accomplish the same objective as discrete DELETE >> requests in one round trip, in terms of bandwidth. Splitting hairs. >> Same with batch updates. The bandwidth conserved by caching GET >> traffic is an order of magnitude greater than that consumed by fringe >> cases where bandwidth could be saved by batching multiple DELETE (or >> PUT or POST) requests. REST optimizes for GET, not batch processing. >> > > What about the (so far) less common cases outside the Web, such as > Enterprise-level distributed applications? There are certainly many > examples in that domain where high-volume writing (create/update/delete) is > a requirement and efficiency is key (as is atomicity). > > Are you suggesting that as an architecture style in its "pure" form, REST is > only appropriate for the common case of read-heavy Web applications? This > seems to sell the style short. Certainly, in its current form it does not > directly address some of the problems associated with write-intensive > applications. But the fundamental constraints still hold value for these > apps, and so far I haven't seen any better alternative for building general > purpose distibuted systems. > > Is the style so set in stone that new applications for it shouldn't be > explored, even if it means looking at things from a slightly unorthodox > angle? Who knows, maybe it turns out that it is possible to solve some of > these problem in a way that is at least consistent with the fundamental > constraints of REST. It seems to me that this forum is exactly the right > place to explore that kind of thing. 
> > Cheers, > scott > I could be wrong, but I don't think conserving HTTP requests is one of the goals of REST. Creating evolvable, rational, loosely-coupled interactions seems to be more to the point. I know that in a conversation on a mailing list about adding 10K documents to an Atom store, Roy F. said he'd as likely use a bash script and CURL to do the job as anything. That suggests to me the ability to decompose the operation at the client as a series of simple operations (CURL POST) as a "goal." One could reasonably extrapolate to a server which accepts, as a POST, a "job request" that lists URLs of resources needing to be moved from one place to another. That job server could now be the "bash script and CURL" of Roy's description. We could make status requests of the job server to find out the state of things (what's been moved, what's failed, what's left to be moved, etc.). What REST has done for us here is simply provided a rational, simple way to decompose the job into RESTful "tasks." I'd also mention that in REST there is a client and a server for any operation, but a single component can, at various times, serve as one or the other. --peter keane >
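Peter's "job resource" decomposition can be sketched as follows: POST a job listing the URIs to process, let the server work through them as a series of simple atomic operations, and GET the job resource to poll progress. All class names, URIs, and state fields here are illustrative assumptions:

```python
# Sketch of a job server: the POSTed job request lists resource URIs, the
# server plays the role of the "bash script and curl" loop, and the job
# resource itself reports what's done, what failed, and what's left.

import itertools

class JobServer:
    def __init__(self, store):
        self.store = store               # uri -> representation
        self.jobs = {}                   # job uri -> job state
        self._ids = itertools.count(1)

    def create_job(self, uris):
        """POST /jobs: returns the URI of the newly created job resource."""
        job_uri = "/jobs/%d" % next(self._ids)
        self.jobs[job_uri] = {"pending": list(uris), "done": [], "failed": []}
        return job_uri

    def run_one_step(self, job_uri):
        """One loop iteration of the worker: process a single pending URI."""
        job = self.jobs[job_uri]
        if job["pending"]:
            uri = job["pending"].pop(0)
            ok = self.store.pop(uri, None) is not None
            job["done" if ok else "failed"].append(uri)

    def status(self, job_uri):
        """GET the job resource to poll the state of things."""
        return self.jobs[job_uri]

server = JobServer({"/docs/1": "x", "/docs/2": "y"})
job = server.create_job(["/docs/1", "/docs/404"])
server.run_one_step(job)
server.run_one_step(job)
report = server.status(job)
# report -> {'pending': [], 'done': ['/docs/1'], 'failed': ['/docs/404']}
```

Each step is an ordinary, visible operation against one URI; the batch semantics live in an addressable resource the client can inspect, rather than in an overloaded method.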
> Who knows, maybe it turns out that it is possible to solve some of > these problem in a way that is at least consistent with the fundamental > constraints of REST. It seems to me that this forum is exactly the right > place to explore that kind of thing. > > Cheers, > scott +1 :-)
Hello, I don't think anything has been made for using SAML in the draft-hammer-oauth document, but it might be possible. I must admit I haven't read the latest draft and, apparently, the -00 draft was split following comments made on the IETF HTTP Auth list (it might be worth asking the original question on that list, by the way). Decoupling the OAuth protocol from the WWW-Authenticate scheme specification might give room for SAML-based claims in this. I'd be interested in finding a clean way to do what Shibboleth does in Section 3.1.2 of <http://shibboleth.internet2.edu/docs/internet2-mace-shibboleth-arch-protocols-200509.pdf> (i.e. I'd prefer something that doesn't rely on an onLoad form POST done straight after a GET). Browser support is a problem in this case. Best wishes, Bruno. John Panzer wrote: > > > Hmm; perhaps see http://oauth.net/, > http://www.hueniverse.com/hueniverse/2009/03/oauth-core-10-reborn.html > --> http://tools.ietf.org/html/draft-hammer-oauth-01. > > Sebastien Lambla wrote: > >> Hi all, >> >> >> >> I'm pondering the use of a claim-based system for a REST architecture >> I'm developing. I have a few ideas on how to roll a custom http auth >> scheme to support SAML in an http-friendly way (aka at the http >> message level, not through redirects) >> >> >> >> Is anyone aware of some early RFC / standardization work going in this >> direction? >> >> >> >> Seb
Erik Hetzner wrote: > > > Let's use Atom as an example, here. The application developer could > > have the server interpret the deletion of a collection, as a > > request to delete all member resources in that collection. A user > > could then create a collection, like any other, for the purpose of > > deleting it and whatever entries it references. > > > > The tradeoff in such a configuration, is eliminating the ability to > > delete a collection *without* deleting its member resources. Ask > > yourself if any reduction in API functionality is acceptable, for > > the purpose of optimizing DELETE -- a method whose traffic doesn't > > amount to a very big slice of the overall network-traffic pie to > > begin with. > > Is it your position that all resources which represent a collection of > other resources violate REST constraints? > No, I was discussing the ramifications of treating requests made against collections differently than requests made against members. In a uniform connector interface, the goal is to "constrain[s] the interface to a consistent set of semantics for all resources," whereas the example in this thread would apply BDELETE semantics to collections and regular DELETE semantics to members. I'm talking about the nature of requests made against collections, not the nature of collections. -Eric
Alexandros Marinos wrote: > > The resources that are in a collection are called subordinate > resources, which can be understood as a requirement that the > super-resource (collection) be present. In any case there is no clear > requirement that sub-resources outlive their super-resources. Unless > I am missing something, collection deletion could be in a sense > transactional. > Or "member resources" as the example I gave was in Atom. Deletion of collections, AFAIK, is entirely unspecified -- which is why I said that application behavior is up to the developer. I'm not arguing against having the deletion of a collection result in the deletion of all members/subordinates. I'm arguing that doing things that way violates the uniform interface constraint. The pertinent question is, "what problem does batch deletion solve?" I still say it's a nitpicky optimization of an insignificant amount of traffic, at best, so even the tiniest of reasons not to do it should suffice. -Eric
Assaf Arkin wrote: > > From an engineering standpoint I would recommend and use POST because > it's more explicit and clear. > Agreed. Like I said in another thread, just write an HTML form and a POST handler to do "batch" deletions. So long as it's understood that this has nothing to do with a uniform interface. In REST, the semantics assigned to DELETE are not also duplicated in POST -- deletion of resources has one and only one method. POST is usually used to accept data; making it sometimes mean delete leaves the semantics of POST clear as mud. -Eric
On 18.03.2009, at 22:19, Eric J. Bowman wrote: > POST is usually > being used to accept data, making it sometimes mean delete makes the > semantics of POST clear as mud. Then again, one could argue that this is simply the case with POST anyway. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Wed, Mar 18, 2009 at 2:19 PM, Eric J. Bowman <eric@...>wrote: > Assaf Arkin wrote: > > > > > From an engineering standpoint I would recommend and use POST because > > it's more explicit and clear. > > > > Agreed. Like I said in another thread, just write an HTML form and a > POST handler to do "batch" deletions. So long as it's understood that > this has nothing to do with a uniform interface. In REST, the > semantics assigned to DELETE are not also duplicated in POST -- > deletion of resources has one and only one method. POST is usually > being used to accept data, making it sometimes mean delete makes the > semantics of POST clear as mud. > That's not what I understood. What I understood is: if you ask the server to do some operation, and it tells you which URL to POST to (say via form), and use POST as HTTP intended it to be used, then you're following the uniform interface. The server may tell you to use DELETE instead, which has different, and possibly better semantics, and following those would also be part of the uniform interface. But it's up to the server to decide which method it prefers -- as long as the semantics are obeyed -- it's applying a uniform interface. Assaf > > > -Eric >
mike amundsen wrote: > > Now the question: > Setting aside the issue of whether these methods qualify as REST-ful, > are there still folks who would discourage implementing composites, > batching? If yes, why? Are there other considerations that I've not > spelled out here? > I can't set aside the issue of REST. If you're asking whether I would discourage you from building an RPC-based app using WS-* and SOAP, I would say "yes" but I would have to give the REST answer to either question: "Hypothesis II: Constraints can be added to the WWW architectural style to derive a new hybrid style that better reflects the desired properties of a modern Web architecture." If you aren't concerned about any of the problems that REST solves, go ahead and do things however you'd like. If you are interested in avoiding the myriad problems which arise from developing an application that ignores REST, then don't break the uniform interface constraint. "Most software systems are created with the implicit assumption that the entire system is under the control of one entity, or at least that all entities participating within a system are acting towards a common goal and not at cross-purposes. Such an assumption cannot be safely made when the system runs openly on the Internet. Anarchic scalability refers to the need for architectural elements to continue operating when they are subjected to an unanticipated load, or when given malformed or maliciously constructed data, since they may be communicating with elements outside their organizational control. The architecture must be amenable to mechanisms that enhance visibility and scalability." If I am an architect who builds skyscrapers and you're building a lean-to, who am I to dissuade you? 
However, if you want to scale your lean-to up to provide housing to 1,000 people by extending it as needed, I'm gonna hafta burst your bubble by explaining that the lean-to architectural style is no more appropriate to building a housing block than, well, err... a design for a slaughterhouse. Your example puts forth a very tightly-coupled system which implements none of the constraints of REST. No big thing. Unless you want it to scale, in which case the importance of "visibility" far outweighs the disadvantage of not having batch-processing capability. > > Alternately, assume for item #1, only POST or PUT is used (not MOVE, > COPY, BDELETE, etc.). Also assume for #3 that the internet media type > reflects the intention of the user-agent > (application/vnd.customers-move+xml, /vnd.customers-bdelete+xml, > etc.). Does this modification make the process more/less desirable? > In terms of REST? If so, then you can't get any more undesirable than overloading media-type such that it provides a mechanism to override the semantics of the request method (i.e. if PUT means one thing for one media type, and another thing for the next media type, then you've failed to "constrain the interface to a consistent set of semantics for all resources"). Such an ad-hoc architecture can't scale anarchically because it exists in its own world. -Eric
Stefan Tilkov wrote: > > > POST is usually > > being used to accept data, making it sometimes mean delete makes the > > semantics of POST clear as mud. > > Then again, one could argue that this is simply the case with POST > anyway. > Sure, in the real world most applications blithely violate the constraints of REST. But we're talking about the Platonic ideal of REST, here, and that means you assign a request method one and only one action. It also means that each action is assigned to one and only one request method. Failure to do so results in an interface which is not uniform. While PUT is allowed by RFC 2616 to mean either "create" or "update," an application conforming to Atom Protocol assigns "create" to POST while constraining PUT to "update" -- the makings of a uniform interface. (Don't get me started on other aspects of Atom Protocol, though. ;-) -Eric
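The Atom Protocol division Eric describes -- POST constrained to "create", PUT constrained to "update" -- can be sketched as a dispatch table in which each method carries exactly one action and each action belongs to exactly one method. The store and handler names below are invented for illustration:

```python
# Sketch of a uniform method/action mapping, AtomPub-style: POST only ever
# creates, PUT only ever updates, and neither action is reachable through
# any other method.

store = {}

def create(uri, body):
    """POST: create, and nothing but create."""
    if uri in store:
        return "409 Conflict"
    store[uri] = body
    return "201 Created"

def update(uri, body):
    """PUT: update, and nothing but update."""
    if uri not in store:
        return "404 Not Found"
    store[uri] = body
    return "200 OK"

DISPATCH = {"POST": create, "PUT": update}   # one method, one action

r1 = DISPATCH["POST"]("/entries/1", "<entry>v1</entry>")   # "201 Created"
r2 = DISPATCH["PUT"]("/entries/1", "<entry>v2</entry>")    # "200 OK"
r3 = DISPATCH["PUT"]("/entries/2", "<entry/>")             # "404 Not Found"
```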
Eric: Thanks for taking the time to respond to my question. I found two portions of your reply particularly helpful: <snip>If you are interested in avoiding the myriad problems which arise from developing an application that ignores REST, then don't break the uniform interface constraint.</snip> Is it correct for me to assume that the "uniform interface" of which you speak is that outlined in RFC2616 only? If not, what other sources do you consider viable as part of the "uniform interface?" and <snip> Unless you want it to scale, in which case the importance of "visibility" far outweighs the disadvantage of not having batch-processing capability.</snip> I am unclear on your use of "visibility" here. Are you referring only to the values passed as HTTP Methods (and the defined behaviors for them)? Are there other metadata items (HTTP headers) that you consider part of visibility? For example, if in my hypothetical case a large community (hundreds of servers, thousands of clients) shared an understanding of the methods I mentioned and the metadata, does this have any impact on your consideration of this example? Finally, it is not the case that I am not ..."concerned about any of the problems that REST solves..." If that were true, I would not have posed my question here. Also, I am not trying to 'make a case' for performing batch work over HTTP, nor am I attempting to convince anyone of the validity of this hypothetical case. Feel free to question my understanding of Fielding's dissertation, but not my sincerity in learning from it and implementing its REST principles properly. Thanks again for your reply. mca http://amundsen.com/blog/ On Wed, Mar 18, 2009 at 17:48, Eric J. Bowman <eric@bisonsystems.net> wrote: > mike amundsen wrote: > >> >> Now the question: >> Setting aside the issue of whether these methods qualify as REST-ful, >> are there still folks who would discourage implementing composites, >> batching? If yes, why? 
Are there other considerations that I've not >> spelled out here? >> > > I can't set aside the issue of REST. If you're asking whether I would > discourage you from building an RPC-based app using WS-* and SOAP, I > would say "yes" but I would have to give the REST answer to either > question: > > "Hypothesis II: Constraints can be added to the WWW architectural style > to derive a new hybrid style that better reflects the desired > properties of a modern Web architecture." > > If you aren't concerned about any of the problems that REST solves, go > ahead and do things however you'd like. If you are interested in > avoiding the myriad problems which arise from developing an application > that ignores REST, then don't break the uniform interface constraint. > > "Most software systems are created with the implicit assumption that > the entire system is under the control of one entity, or at least that > all entities participating within a system are acting towards a common > goal and not at cross-purposes. Such an assumption cannot be safely > made when the system runs openly on the Internet. Anarchic scalability > refers to the need for architectural elements to continue operating > when they are subjected to an unanticipated load, or when given > malformed or maliciously constructed data, since they may be > communicating with elements outside their organizational control. The > architecture must be amenable to mechanisms that enhance visibility and > scalability." > > If I am an architect who builds skyscrapers and you're building a lean- > to, who am I to dissuade you? However, if you want to scale your lean- > to up to provide housing to 1,000 people by extending it as needed, I'm > gonna hafta burst your bubble by explaining that the lean-to > architectural style is no more appropriate to building a housing block > than, well, err... a design for a slaughterhouse. 
> > Your example puts forth a very tightly-coupled system which implements > none of the constraints of REST. No big thing. Unless you want it to > scale, in which case the importance of "visibility" far outweighs the > disadvantage of not having batch-processing capability. > >> >> Alternately, assume for item #1, only POST or PUT is used (not MOVE, >> COPY, BDELETE, etc.). Also assume for #3 that the internet media type >> reflects the intention of the user-agent >> (application/vnd.customers-move+xml, /vnd.customers-bdelete+xml, >> etc.). Does this modification make the process more/less desirable? >> > > In terms of REST? If so, then you can't get any more undesirable than > overloading media-type such that it provides a mechanism to override > the semantics of the request method (i.e. if PUT means one thing for one > media type, and another thing for the next media type, then you've > failed to "constrain the interface to a consistent set of semantics for > all resources"). Such an ad-hoc architecture can't scale anarchically > because it exists in its own world. > > -Eric >
Assaf Arkin wrote: > > > Assaf Arkin wrote: > > > > > > > > From an engineering standpoint I would recommend and use POST > > > because it's more explicit and clear. > > > > > > > Agreed. Like I said in another thread, just write an HTML form and > > a POST handler to do "batch" deletions. So long as it's understood > > that this has nothing to do with a uniform interface. In REST, the > > semantics assigned to DELETE are not also duplicated in POST -- > > deletion of resources has one and only one method. POST is usually > > used to accept data; making it sometimes mean delete renders the > > semantics of POST clear as mud. > > > > That's not what I understood. What I understood is: if you ask the > server to do some operation, and it tells you which URL to POST to > (say via form), and use POST as HTTP intended it to be used, then > you're following the uniform interface. > There's nothing wrong with the pragmatism or ease-of-use of an HTML form using POST to batch-delete. It even adheres to the HEAS constraint of REST, but that's about as far as that goes. I'd say, "you're following HEAS" not "you're following the uniform interface" because HEAS is only one of the constraints which make up the uniform interface. > > The server may tell you to use DELETE instead, which has different, > and possibly better semantics, and following those would also be part > of the uniform interface. But it's up to the server to decide which > method it prefers -- as long as the semantics are obeyed -- it's > applying a uniform interface. > If an API doesn't implement DELETE, and also doesn't use POST for anything but deletion (single or batch), and the options are presented in an HTML form then yes, it's a uniform interface. However, once DELETE is also implemented, or if POST is used for anything else like accepting content uploads, the interface is no longer uniform, unless and until the previous usage of POST to delete is deprecated. 
The fact remains that only the use of the DELETE method on a URI-by-URI basis is visible to intermediaries. This is the only way to prevent the user who requested the deletion from reloading the deleted content from cache. Except, of course, to not cache anything -- thereby defeating the entire premise of using REST to begin with... -Eric
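Eric's visibility argument can be sketched in a few lines of Python. The toy intermediary below (all names hypothetical, not any real proxy API) makes its caching decisions purely from the method, request-URI, and status code; a batch delete tunneled through a hypothetical POST endpoint therefore leaves stale cache entries behind:

```python
# Toy caching intermediary: it acts only on the "visible" parts of the
# exchange (method, request-URI, status code), never the message body.
# Everything here is an illustrative sketch, not a real proxy.

class CachingIntermediary:
    def __init__(self):
        self.cache = {}  # request-URI -> cached representation

    def forward(self, method, uri, origin, body=None):
        if method == "GET" and uri in self.cache:
            return 200, self.cache[uri]            # served from cache
        status, payload = origin(method, uri, body)
        if method == "GET" and status == 200:
            self.cache[uri] = payload              # store the representation
        elif method in ("PUT", "POST", "DELETE") and status < 400:
            self.cache.pop(uri, None)              # invalidate the target URI only
        return status, payload

def origin(method, uri, body=None):
    """Stand-in origin server with a hypothetical /batch-delete endpoint."""
    store = origin.store
    if method == "GET":
        return (200, store[uri]) if uri in store else (404, None)
    if method == "DELETE":
        store.pop(uri, None)
        return 204, None
    if method == "POST" and uri == "/batch-delete":
        for target in body:        # the real targets are buried in the body
            store.pop(target, None)
        return 200, None
    return 405, None

origin.store = {"/comments/1": "spam", "/comments/2": "ham"}
proxy = CachingIntermediary()

proxy.forward("GET", "/comments/1", origin)     # cached
proxy.forward("DELETE", "/comments/1", origin)  # visible: cache entry evicted
proxy.forward("GET", "/comments/2", origin)     # cached
proxy.forward("POST", "/batch-delete", origin, body=["/comments/2"])
# invisible: the origin deleted /comments/2, but the cache has no idea
```

Re-issuing the two GETs shows the difference: /comments/1 now returns 404, while /comments/2 is still served stale from cache even though the origin has deleted it.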
scameron02 wrote: > > What about the (so far) less common cases outside the Web, such as > Enterprise-level distributed applications? There are certainly many > examples in that domain where high-volume writing > (create/update/delete) is a requirement and efficiency is key (as is > atomicity). > REST is but one architectural style. For the cases you describe, it could very well be the best slaughterhouse ever designed, but wholly inappropriate for the goals of the project. > > Are you suggesting that as an architecture style in its "pure" form, > REST is only appropriate for the common case of read-heavy Web > applications? > Absolutely not. I said it's only *designed* for the common case of the Web, taken directly from Dr. Fielding's dissertation. > > Is the style so set in stone that new applications for it shouldn't be > explored, even if it means looking at things from a slightly > unorthodox angle? > Apparently you're not familiar with my history on this list... ;-) I've suggested plenty of radical departures from REST orthodoxy; I was particularly disappointed that the consensus shot down my confirmed-delete-at-protocol-level idea, but the reasoning was sound. Should we not also recognize here when an idea is fundamentally at odds with the REST style and end the debate? I don't see any permathreads insisting that RPC-based architectures can somehow be made RESTful. It would be beating a dead horse. The REST paradigm consists of clients making discrete requests at the URI of the resource in question. The RPC paradigm consists of sending instructions to the server at a "factory" resource. Batch processing clearly falls under the latter paradigm, not the former. The REST solution to batch deletion is for the client to make discrete DELETE requests to each URI. Period. That's what REST *is*. It's an _alternative_ to the notion of batch deletion. -Eric
mike amundsen wrote: > > <snip>If you are interested in avoiding the myriad problems which > arise from developing an application that ignores REST, then don't > break the uniform interface constraint.</snip> > Is it correct for me to assume that the "uniform interface" of which > you speak is that outlined in RFC2616 only? If not, what other sources > do you consider viable as part of the "uniform interface?" > REST == Uniform Interface. RFC2616 makes no mention of a uniform interface. The only source I know of which defines "uniform interface" is Dr. Fielding's dissertation, which defines it as the end-product constraint derived through the four other described constraints. > > <snip> Unless you want it to scale, in which case the importance of > "visibility" far outweighs the disadvantage of not having > batch-processing capability.</snip> > I am unclear on your use of "visibility" here. Are you referring only > to the values passed as HTTP Methods (and the defined behaviors for > them)? Are there other metadata items (HTTP headers) that you consider > part of visibility? For example, if in my hypothetical case a large > community (hundreds of servers, thousands of clients) shared an > understanding of the methods I mentioned and the metadata, does this > have any impact on your consideration of this example? > The shortest answer is that a "visible" request-response stream contains everything an intermediary needs to know to determine the nature of the request only by looking at the headers. Visible instructions aren't buried in the request entity, they are right there in the request method and the response status code. Since DELETE is its own method, when such a request passes through an intermediary, the intermediary can interpret whether it was successful or not, and expire any cached representations of the deleted resource. 
Making deletions happen any other way is _invisible_ to intermediaries since what's really going on is not wholly contained within the headers of the request and the response. REST calls this "self-descriptive messages". Given caching, the only way to ensure that the user requesting a DELETE can't reload a representation of the resource, is to make a discrete DELETE request against the specific URI in question. > > Finally, it is not the case that I am not ..."concerned about any of > the problems that REST solves..." If that were true, I would not have > posed my question here... > Oh, I know, I was referring to your hypothetical case, not you personally. ;-) You asked a tough question and that was the only way I could figure out how to answer it. No offense intended. -Eric
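The "self-descriptive messages" property Eric describes is what lets HTTP/1.1's cache-invalidation rule (RFC 2616 section 13.10) be stated without any reference to the message body. A sketch, with the rule reduced to a pure function of the visible method and status code (the return labels are made up for illustration):

```python
def cache_action(method, status):
    """What a shared cache may do, knowing only the visible parts of the
    exchange. Loosely modeled on RFC 2616 sec. 13.10 ("Invalidation After
    Updates or Deletions"); a sketch, not a complete implementation."""
    if method in ("GET", "HEAD") and status == 200:
        return "store"                    # cacheable, subject to freshness headers
    if method in ("PUT", "POST", "DELETE") and 200 <= status < 400:
        return "invalidate-request-uri"   # only the request-URI gets invalidated
    return "pass-through"
```

Note the limitation this encodes: a successful POST to a hypothetical /batch-delete resource invalidates /batch-delete itself, never the resources named inside the entity body.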
On Wed, Mar 18, 2009 at 3:36 PM, Eric J. Bowman <eric@...>wrote: > Assaf Arkin wrote: > > > > > > Assaf Arkin wrote: > > > > > > > > > > > From an engineering standpoint I would recommend and use POST > > > > because it's more explicit and clear. > > > > > > > > > > Agreed. Like I said in another thread, just write an HTML form and > > > a POST handler to do "batch" deletions. So long as it's understood > > > that this has nothing to do with a uniform interface. In REST, the > > > semantics assigned to DELETE are not also duplicated in POST -- > > > deletion of resources has one and only one method. POST is usually > > > being used to accept data, making it sometimes mean delete makes the > > > semantics of POST clear as mud. > > > > > > > That's not what I understood. What I understood is: if you ask the > > server to do some operation, and it tells you which URL to POST to > > (say via form), and use POST as HTTP intended it to be used, then > > you're following the uniform interface. > > > > There's nothing wrong with the pragmatism or ease-of-use of an HTML > form using POST to batch-delete. It even adheres to the HEAS > constraint of REST, but that's about as far as that goes. I'd say, > "you're following HEAS" not "you're following the uniform interface" > because HEAS is only one of the constraints which make up the uniform > interface. > > > > > The server may tell you to use DELETE instead, which has different, > > and possibly better semantics, and following those would also be part > > of the uniform interface. But it's up to the server to decide which > > method it prefers -- as long as the semantics are obeyed -- it's > > applying a uniform interface. > > > > If an API doesn't implement DELETE, and also doesn't use POST for > anything but deletion (single or batch), and the options are presented > in an HTML form then yes, it's a uniform interface. 
However, once > DELETE is also implemented, or if POST is used for anything else like > accepting content uploads, the interface is no longer uniform, unless > and until the previous usage of POST to delete is deprecated. What would be the litmus test? > > > The fact remains that only the use of the DELETE method on a > URI-by-URI basis is visible to intermediaries. This is the only way to > prevent the user who requested the deletion from reloading the deleted > content from cache. Except, of course, to not cache anything -- > thereby defeating the entire premise of using REST to begin with... The example this thread started from creates a unique resource using PUT only to immediately discard it using DELETE, without ever retrieving that resource. I provided some justification for why it would be better to replace the PUT/DELETE pair with a POST, likely against a resource that will never be retrieved. I think that falls under the uniform interface. I'm not interested in forcing caching down the throat of this use case: the only interesting resources we operate on are never retrieved. So strawman aside, why is this use of POST not uniform interface? Separately, cache control has provisions for preventing clients from reloading deleted content, and often enough, the deleted content we want them to forget is not deleted by them. So you can cache resources and be able to magically remove content without DELETE, and be very uniform-interface about it. Assaf > > > -Eric >
Those of you working in or near the geospatial domain might be interested in a REST-related RFC: http://www.opengeospatial.org/standards/requests/54 """ The membership of the Open Geospatial Consortium, Inc. (OGC®) is requesting comments from the public on the candidate OpenGIS® Web Map Tiling Service (WMTS) Interface Standard. The candidate WMTS Interface Standard is much like the OGC's popular Web Map Server (WMS) Interface Standard, but it enables better server performance in applications that involve many simultaneous requests. To improve performance, instead of creating a new image for each request, it returns small pre-generated images (e.g., PNG or JPEG) or reuses identical previous requests that follow a discrete set of tile matrices. This proposed standard provides support for multiple architectural patterns - KVP, REST and SOAP. """ The authors propose REST bindings to complement HTTP RPC and SOAP bindings. IMO, it needs a little work, which I hope the authors can be convinced to do as this standard will more or less define REST in the GIS mainstream. -- Sean Gillies Software Engineer Institute for the Study of the Ancient World New York University
Eric: Thanks for the follow-up. I understand that the hypothetical I presented is not a simple one and I appreciate your willingness to address it. To that end, I found this line in your reply helpful: <snip>Making deletions happen any other way is _invisible_ to intermediaries since what's really going on is not wholly contained within the headers of the request and the response.</snip> While my assumptions accounted for this particular behavior by indicating the origin server emits "Cache-Control: no-cache" w/ the responses, I take your reply here to be a part of the definition of "visibility" of which I asked earlier. And, even though my hypothetical example was not limited to DELETE actions (i.e. COPY), I still find your example important to my line of questioning. I understand you to say actions on the origin server that insufficiently inform intermediaries of the status of (possibly) cached representations are thought to be un-REST-ful (such an abuse of the acronym!). In the case of DELETE this seems rather clear for the individual item itself, but not for any related GET-able resource representations that might include the target of the DELETE in their body. In other words, when composite documents are returned upon a GET (/get-last-ten-entries, etc.), at what point do intermediaries know that these composites are invalid due to the proper use of DELETE against one of the items that appears in the composite resource? To my knowledge the answer is that intermediaries do not know the proper status of any cached representation of composite resources that are affected by the proper use of DELETE upon a single resource that is included in the GET-able composite. I do not find this condition un-REST-ful, however. For that matter, using PUT to create a resource (PUT /entries/123) or POST against a factory resource (POST /entries/) poses the same challenge to the viability of any composite resource that intermediaries might retain in their cache. 
It is my understanding that there is more than one way to mitigate this problem using HTTP Headers to indicate the cache-ability of the (composite) resource, whether the intermediaries must re-validate the resource before presenting the cached version as a response, etc. Thus, it seems to me that when it comes to the test of "visibility" my hypothetical example matches the same behaviors as DELETE, PUT (as create), and POST. In other words, I understand my hypothetical to contain the proper mitigations such that visibility is not violated. With that in mind, I conclude my hypothetical, while possibly distasteful to some, does not violate the principles of Fielding's work. mca http://amundsen.com/blog/ On Wed, Mar 18, 2009 at 19:16, Eric J. Bowman <eric@...> wrote: > mike amundsen wrote: > >> >> <snip>If you are interested in avoiding the myriad problems which >> arise from developing an application that ignores REST, then don't >> break the uniform interface constraint.</snip> >> Is it correct for me to assume that the "uniform interface" of which >> you speak is that outlined in RFC2616 only? If not, what other sources >> do you consider viable as part of the "uniform interface?" >> > > REST == Uniform Interface. RFC2616 makes no mention of a uniform > interface. The only source I know of which defines "uniform interface" > is Dr. Fielding's dissertation, which defines it as the end-product > constraint derived through the four other described constraints. > >> >> <snip> Unless you want it to scale, in which case the importance of >> "visibility" far outweighs the disadvantage of not having >> batch-processing capability.</snip> >> I am unclear on your use of "visibility" here. Are you referring only >> to the values passed as HTTP Methods (and the defined behaviors for >> them)? Are there other metadata items (HTTP headers) that you consider >> part of visibility? 
For example, if in my hypothetical case a large >> community (hundreds of servers, thousands of clients) shared an >> understanding of the methods I mentioned and the metadata does this >> have any impact on your consideration of this example? >> > > The shortest answer, is that a "visible" request-response stream > contains everything an intermediary needs to know to determine the > nature of the request only by looking at the headers. Visible > instructions aren't buried in the request entity, they are right there > in the request method and the response status code. > > Since DELETE is its own method, when such a request passes through an > intermediary, the intermediary can interpret whether it was successful > or not, and expire any cached representations of the deleted resource. > Making deletions happen any other way is _invisible_ to intermediaries > since what's really going on is not wholly contained within the headers > of the request and the response. REST calls this "self-descriptive > messages". > > Given caching, the only way to ensure that the user requesting a DELETE > can't reload a representation of the resource, is to make a discrete > DELETE request against the specific URI in question. > >> >> Finally, it is not the case that I am not ..."concerned about any of >> the problems that REST solves..." If that were true, I would not have >> posed my question here... >> > > Oh, I know, I was referring to your hypothetical case, not you > personally. ;-) You asked a tough question and that was the only way > I could figure out how to answer it. No offense intended. > > -Eric >
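One of the mitigations mca alludes to can be sketched with a validator: give the composite resource an ETag derived from its membership and require revalidation, so an intermediary's conditional GET detects that a DELETE of a member invalidated the composite. The header choices, URIs, and function names below are illustrative assumptions, not anything defined by the thread:

```python
import hashlib

def composite_headers(entry_ids):
    """Response headers for a composite like /get-last-ten-entries. The
    ETag is derived from membership, so deleting any member changes it."""
    digest = hashlib.sha1(",".join(sorted(entry_ids)).encode()).hexdigest()[:12]
    return {"ETag": '"%s"' % digest,
            "Cache-Control": "max-age=0, must-revalidate"}

def revalidate(if_none_match, current_entry_ids):
    """Origin's answer to a conditional GET: 304 if the cached composite
    is still valid, 200 (send a fresh copy) otherwise."""
    current = composite_headers(current_entry_ids)["ETag"]
    return 304 if current == if_none_match else 200

entries = ["101", "102", "103"]
etag = composite_headers(entries)["ETag"]
revalidate(etag, entries)            # 304: cached composite still good
revalidate(etag, ["101", "103"])     # 200: DELETE of /entries/102 changed it
```

This restores correctness at the price of a round trip per reuse, which is exactly the trade-off `must-revalidate` exists to make.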
Assaf Arkin wrote: > > > If an API doesn't implement DELETE, and also doesn't use POST for > > anything but deletion (single or batch), and the options are > > presented in an HTML form then yes, it's a uniform interface. > > However, once DELETE is also implemented, or if POST is used for > > anything else like accepting content uploads, the interface is no > > longer uniform, unless and until the previous usage of POST to > > delete is deprecated. > > > What would be the litmus test? > Each request method should map to one and only one action, each action should map to one and only one method, each method should mean the same thing for all resources controlled by the application. This results in a "consistent set of semantics for all resources" and avoids the problems of the early Web which precluded caching, as per Fielding 5.1.4. > > > The fact remains that only the use of the DELETE method on a > > URI-by-URI basis is visible to intermediaries. This is the only > > way to prevent the user who requested the deletion from reloading > > the deleted content from cache. Except, of course, to not cache > > anything -- thereby defeating the entire premise of using REST to > > begin with... > > > The example this thread started from creates a unique resource using > PUT only to immediately discard it using DELETE, without ever > retrieving that resource. > Presumably, the user has retrieved the unique entries to be deleted, in order to know they need deletion. If I send the server a list of URLs to be deleted, or create a "delete factory" resource, then I'm not transferring a representation of any application state -- no matter if it's retrieved or not. > > I provided some justification for why it would be better to replace > the PUT/DELETE pair with a POST, likely against a resource that will > never be retrieved. I think that falls under the uniform interface. > No, in a uniform interface, an action is taken against a target URI. 
If the resource to be deleted has a URI, then a DELETE request is made against that URI -- not some other URI and/or some other method. Your POST solution consists of multiple instructions to the server, not a representation of an application state. That's RPC, not REST. > > I'm not interested in forcing caching down the throat of this use > case: the only interesting resources we operate on are never > retrieved. So strawman aside, why is this use of POST not uniform > interface? > This is no strawman argument. If, in order for an API to function as its developer intends, caching must be disabled, then the developer must ask himself if his API is RESTful. "Do you Etag?" If you can't cache representations of the individual resources you intend to subject to batch delete, in order to make batch delete work, then you've obviously broken the uniform interface constraint. If you hadn't, you'd be able to cache without it breaking your API. You're saying that the "only interesting resources we operate on" doesn't include the individual resources making up the delete batch. I'm saying that yes, those individual resources *are* the interesting resources, and it's *those* URIs we want to DELETE, not some other URI acting as a temporary stand-in. 
This is very, very simple to accomplish -- explicitly DELETE the URI assigned to the offensive content. This does nothing about keeping other users from continuing to see the comment until its cache-control values expire. But those users didn't request the DELETE, either. In a batch-delete situation which bypasses the DELETE method, nothing is visible to intermediaries, and any cached resource won't be expired, leading the hypothetical moderator who wants to confirm the deletion to see the resource is still there, try deleting it again, get a failure message, and become very confused. If you are removing one resource by manipulating some other resource, then you haven't designed a uniform interface. REST is about performing each discrete action against a resource by manipulating that resource directly at its URI. Not some other URI. This is fundamental. -Eric
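Eric's litmus test ("each request method should map to one and only one action, each action to one and only one method") can be phrased as a mechanical check over an API's routing table. The triples below are hypothetical examples, not any real API:

```python
def uniform(routes):
    """routes: (method, resource, action) triples for the whole application.
    True only if methods and actions map one-to-one across all resources --
    a sketch of one reading of Fielding sec. 5.1.4, not a complete test
    of the uniform interface constraint."""
    method_action, action_method = {}, {}
    for method, _resource, action in routes:
        if method_action.setdefault(method, action) != action:
            return False          # same method, two meanings
        if action_method.setdefault(action, method) != method:
            return False          # same action reachable via two methods
    return True

ok = [("POST", "/comments/", "create"),
      ("DELETE", "/comments/1", "delete")]
bad = ok + [("POST", "/batch-delete", "delete")]   # POST now also deletes
```

Running `uniform` on `bad` fails on both counts at once: POST gains a second meaning, and deletion becomes reachable by two methods.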
Eric: <snip> If you are removing one resource by manipulating some other resource, then you haven't designed a uniform interface. REST is about performing each discrete action against a resource by manipulating that resource directly at its URI. Not some other URI. This is fundamental. </snip> Do I understand that you believe non-canonical URIs are not REST-ful? /current-weather /weather/2009/03/19 mca http://amundsen.com/blog/ On Wed, Mar 18, 2009 at 23:05, Eric J. Bowman <eric@...> wrote: > Assaf Arkin wrote: > >> >> > If an API doesn't implement DELETE, and also doesn't use POST for >> > anything but deletion (single or batch), and the options are >> > presented in an HTML form then yes, it's a uniform interface. >> > However, once DELETE is also implemented, or if POST is used for >> > anything else like accepting content uploads, the interface is no >> > longer uniform, unless and until the previous usage of POST to >> > delete is deprecated. >> >> >> What would be the litmus test? >> > > Each request method should map to one and only one action, each action > should map to one and only one method, each method should mean the same > thing for all resources controlled by the application. This results in > a "consistent set of semantics for all resources" and avoids the > problems of the early Web which precluded caching, as per Fielding > 5.1.4. > >> >> > The fact remains that only the use of the DELETE method on a >> > URI-by-URI basis is visible to intermediaries. This is the only >> > way to prevent the user who requested the deletion from reloading >> > the deleted content from cache. Except, of course, to not cache >> > anything -- thereby defeating the entire premise of using REST to >> > begin with... >> >> >> The example this thread started from creates a unique resource using >> PUT only to immediately discard it using DELETE, without ever >> retrieving that resource. 
>> > > Presumably, the user has retrieved the unique entries to be deleted, in > order to know they need deletion. If I send the server a list of URLs > to be deleted, or create a "delete factory" resource, then I'm not > transferring a representation of any application state -- no matter if > it's retrieved or not. > >> >> I provided some justification for why it would be better to replace >> the PUT/DELETE pair with a POST, likely against a resource that will >> never be retrieved. I think that falls under the uniform interface. >> > > No, in a uniform interface, an action is taken against a target URI. > If the resource to be deleted has a URI, then a DELETE request is made > against that URI -- not some other URI and/or some other method. Your > POST solution consists of multiple instructions to the server, not a > representation of an application state. That's RPC, not REST. > >> >> I'm not interested in forcing caching down the throat of this use >> case: the only interesting resources we operate on are never >> retrieved. So strawman aside, why is this use of POST not uniform >> interface? >> > > This is no strawman argument. If, in order for an API to function as > its developer intends, caching must be disabled: then the developer > must ask himself if his API is RESTful. "Do you Etag?" If you can't > cache representations of the individual resources you intend to subject > to batch delete, in order to make batch delete work, then you've > obviously broken the uniform interface constraint. If you hadn't, > you'd be able to cache without it breaking your API. > > You're saying that the "only interesting resources we operate on" > doesn't include the individual resources making up the delete batch. > I'm saying that yes, those individual resources *are* the interesting > resources, and it's *those* URIs we want to DELETE, not some other URI > acting as a temporary stand-in. 
> >> >> Separately, cache control has provisions for preventing clients from >> reloading deleted content, and often enough, the deleted content we >> want them to forget is not deleted by them. So you can cache >> resources and be able to magically remove not by DELETE and be very >> uniform interface about it. >> > > The only intermediaries of interest here, are those between the user > who requests the DELETE and the server. No other user requested the > deletion, though -- the only person who might wish to confirm that an > offensive comment has been removed is the moderator who decided to > remove it. When that moderator reloads the offensive comment, or the > thread that used to contain it, the deleted comment should never, ever > appear. This is very, very simple to accomplish -- explicitly DELETE > the URI assigned to the offensive content. > > This does nothing about keeping other users from continuing to see the > comment until its cache-control values expire. But those users didn't > request the DELETE, either. In a batch-delete situation which bypasses > the DELETE method, nothing is visible to intermediaries, and any cached > resource won't be expired, leading the hypothetical moderator who wants > to confirm the deletion to see the resource is still there, try > deleting it again, get a failure message, and become very confused. > > If you are removing one resource by manipulating some other resource, > then you haven't designed a uniform interface. REST is about > performing each discrete action against a resource by manipulating that > resource directly at its URI. Not some other URI. This is fundamental. > > -Eric
mike amundsen wrote: > > If you are removing one resource by manipulating some other resource, > then you haven't designed a uniform interface. REST is about > performing each discrete action against a resource by manipulating > that resource directly at its URI. Not some other URI. This is > fundamental. > </snip> > > Do I understand that you believe non-canonical URIs are not REST-ful? > > /current-weather > /weather/2009/03/19 > No, not at all. If I PUT today's weather, and the server happens to update the current-weather page with the new info, that's just peachy. The client's request was met to the letter -- it asked to create a new resource containing the submitted data and that's just what happened. What I don't see allowed in REST, is for the client to make a single request that creates a new resource for today's weather, then copies that data to some other resource. Of course, there's nothing RESTless about making a POST to create a new resource for March 19th, followed by a PUT to /current-weather to update that. If that process is automated by server logic, it means the client just has to make one request, but the client can't *count on* that behavior, allowing client and server to evolve independently. -Eric
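The distinction Eric draws here can be made concrete as two request plans. In the first, the client itself keeps /current-weather in sync via two discrete, uniform requests; in the second it issues one request, and any update to /current-weather is a server-side effect the client cannot count on. The URIs follow the thread's example; the helper names are made up for illustration:

```python
def client_driven(date, report):
    """The client performs both steps itself: two requests, two URIs,
    each action targeting exactly the resource it changes."""
    return [("POST", "/weather/" + date, report),
            ("PUT", "/current-weather", report)]

def server_driven(date, report):
    """One request; the server *may* also refresh /current-weather, but
    that behavior is invisible to, and unrequested by, the client."""
    return [("POST", "/weather/" + date, report)]
```

Both plans leave the origin in the same state when the server happens to propagate the update; they differ only in what the client can rely on, which is Eric's point about independent evolution.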
At Wed, 18 Mar 2009 21:05:27 -0600, Eric J. Bowman wrote: > Each request method should map to one and only one action, each action > should map to one and only one method, each method should mean the same > thing for all resources controlled by the application. This results in > a "consistent set of semantics for all resources" and avoids the > problems of the early Web which precluded caching, as per Fielding > 5.1.4. The web is a global hypertext system. To my mind, it doesn't seem to make any sense to distinguish between "resources controlled by the application" and other resources (presumably controlled by other applications). Either there is a global constraint, or there is no constraint. best, Erik Hetzner
> What I don't see allowed in REST, is for the client to make a single > request that creates a new resource for today's weather, then copies > that data to some other resource. and why would or why should that be disallowed? I have seen comments about visibility in this thread, but it is a matter of making tradeoffs. In a lot of real-world applications, reads and writes do have side effects, and it would be unreasonable to leak those side effects to the client. My 2 cents. Subbu
Erik Hetzner wrote: > > > Each request method should map to one and only one action, each > > action should map to one and only one method, each method should > > mean the same thing for all resources controlled by the > > application. This results in a "consistent set of semantics for > > all resources" and avoids the problems of the early Web which > > precluded caching, as per Fielding 5.1.4. > > The web is a global hypertext system. To my mind, it doesn't seem to > make any sense to distinguish between "resources controlled by the > application" and other resources (presumably controlled by other > applications). Either there is a global constraint, or there is no > constraint. > The goal of REST is a uniform interface, not a global interface. One API may assign "create" semantics to POST and "update" semantics to PUT. Another API may assign "create" semantics to PUT and "update" semantics to PATCH. Neither is wrong, yet neither are they compatible, even if both applications do exactly the same thing. What's "global" to me, is whether a given method is idempotent or not, plus GET and DELETE, which leaves plenty of room for interpretation -- the gist of an architectural style. Split-level-ranch houses come in all shapes and sizes. So do REST APIs. By "resources controlled by the application" I mean "resources inside the same house" like having uniform wiring throughout. One house may be 110 volts, another 220, neither violates specs but neither are they compatible, due to the lack of a global constraint on voltage. All the outlets should be the same within the house, although one house may have spade prongs and another, cylindrical. The constraint is that one or the other is chosen and adhered to throughout. (A house with both 110 and 220 would be un-RESTful in my example. ;-) -Eric
Eric: Thanks for the reply. <snip> > What I don't see allowed in REST, is for the client to make a single > request that creates a new resource for today's weather, then copies > that data to some other resource. </snip> I am unable to locate support for this assertion in Fielding's dissertation. I, on the other hand, find sections 6.2.2 thru 6.2.5 contain a number of references to hiding the implementation details of an action from the client; admonitions against treating resources as storage objects; and reminders that the Web is not a distributed file system. Possibly you can point me to the section(s) that echo your point that REST does not allow a client to make a request that results in data appearing in another resource. mca http://amundsen.com/blog/ On Thu, Mar 19, 2009 at 00:42, Eric J. Bowman <eric@...> wrote: > mike amundsen wrote: > >> >> If you are removing one resource by manipulating some other resource, >> then you haven't designed a uniform interface. REST is about >> performing each discrete action against a resource by manipulating >> that resource directly at its URI. Not some other URI. This is >> fundamental. >> </snip> >> >> Do I understand that you believe non-canonical URIs are not REST-ful? >> >> /current-weather >> /weather/2009/03/19 >> > > No, not at all. If I PUT today's weather, and the server happens to > update the current-weather page with the new info, that's just peachy. > The client's request was met to the letter -- it asked to create a new > resource containing the submitted data and that's just what happened. > > What I don't see allowed in REST, is for the client to make a single > request that creates a new resource for today's weather, then copies > that data to some other resource. > > Of course, there's nothing RESTless about making a POST to create a new > resource for March 19th, followed by a PUT to /current-weather to > update that. 
If that process is automated by server logic, it means > the client just has to make one request, but the client can't *count > on* that behavior, allowing client and server to evolve independently. > > -Eric >
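[Editor's note] Eric's weather example might be sketched in code as follows. This is a minimal illustration, not from the thread: the URIs, the `StubTransport` class, and `publish_weather` are hypothetical stand-ins for a real HTTP client and origin server. The point is only that the client issues two discrete requests, one per URI, rather than a single request that touches both.

```python
class StubTransport:
    """Records each (method, uri) pair, simulating an origin server."""
    def __init__(self):
        self.resources = {}
        self.log = []

    def request(self, method, uri, body=None):
        self.log.append((method, uri))
        if method in ("PUT", "POST"):
            self.resources[uri] = body
            return 201 if method == "POST" else 200
        raise ValueError("method not handled in this sketch")

def publish_weather(transport, date, report):
    # Request 1: create a resource for today's weather.
    transport.request("POST", f"/weather/{date}", report)
    # Request 2: explicitly update the current-weather resource.
    # The client cannot piggyback this onto request 1; it must ask separately.
    transport.request("PUT", "/current-weather", report)

server = StubTransport()
publish_weather(server, "2009/03/19", "sunny, 18C")
assert server.log == [("POST", "/weather/2009/03/19"), ("PUT", "/current-weather")]
```

Each request stands alone with its own URI, method, and status code; if the server happens to update /current-weather on its own after the POST, that is server behavior the client never asked for and cannot count on.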
Subbu Allamaraju wrote: > > > What I don't see allowed in REST, is for the client to make a single > > request that creates a new resource for today's weather, then copies > > that data to some other resource. > > and why would or why should that be disallowed? > Because in REST, each action involves only one URI. There's simply no place to put another URI/method combination, except as instructions to the server (in the message body instead of in headers). But REST is not about sending instructions to the server, it's about sending a representation of an application state to the server. I don't care if a new resource for today's weather is created using PUT or POST, there is no header I know of that can be added to either method which would instruct the server to also do something to some other URI, like "also PUT a copy of this submission at /current-weather". To clarify, REST makes no provision for the client to request more than one action on one URI at a time. The server may do whatever it wants, like create or update some other resource when it creates a resource. But the client cannot make such a request. The client isn't prohibited from accomplishing the same thing the server does, it just has to use as many discrete requests as it takes to get it done, no corner-cutting batch transactions allowed. Since the client can accomplish the same thing as server logic, using multiple requests, I don't see what problem batch processing solves. Other than "too many requests" which is a side-effect of the architectural style (it's a given), not a problem to be solved. > > I have seen comments > about visibility in this thread, but it is a matter of making > tradeoffs. In a lot of real-world applications, reads and writes do > have side effects, and it would be unreasonable to leak those side > effects to the client. > I haven't suggested otherwise. What I'm saying is those side-effects can't be controlled by the client. 
If the creation of one resource happens to cause the creation of another resource, that's a perfectly allowable application behavior. What is _not_ allowed is for the client to dictate that its request to create a resource also must create or update some specific other resource. And so on. I've tried to keep it simple by stating that if you're sending instructions to the server (like a list of URIs to delete), that representation doesn't begin to resemble an application state, so the interaction is, by definition, something other than REST. -Eric
On Mar 18, 2009, at 11:33 PM, Eric J. Bowman wrote: > I haven't suggested otherwise. What I'm saying is those side-effects > can't be controlled by the client. If the creation of one resource > happens to cause the creation of another resource, that's a perfectly > allowable application behavior. What is _not_ allowed is for the > client to dictate that its request to create a resource also must > create or update some specific other resource. And so on. > > I've tried to keep it simple by stating that if you're sending > instructions to the server (like a list of URIs to delete), that > representation doesn't begin to resemble an application state, so the > interaction is, by definition, something other than REST. Makes sense. I see your point. Subbu --- http://subbu.org
mike amundsen wrote: > > > What I don't see allowed in REST, is for the client to make a single > > request that creates a new resource for today's weather, then copies > > that data to some other resource. > <./snip> > > I am unable to locate support for this assertion in Fielding's > dissertation. > The server can do whatever it wants with a client request, I've not stated otherwise. The client cannot dictate any side-effects, only the action it is requesting on the target URI. HTTP has one target URI and one method. There is no provision in HTTP for a single client request to dictate some side effect on some other URI. All a client can do is make a single request against a single resource in REST, there is no allowance for batch processing (where a single client request dictates that multiple actions be taken on multiple URIs). -Eric
mike amundsen wrote: > > In other words, when composite documents are returned upon a GET > (/get-last-ten-entries, etc.), at what point do intermediaries know > that these composites are invalid due to the proper use of DELETE > against one of the items that appears in the composite resource? > Well, if you apply 'cache-control: must-revalidate' to the collection, a cache will check its Etag against the origin server's. If the Etag was updated because a member resource was removed, then the cache will serve an updated representation instead of a stale one. But, bear in mind that the cache isn't actually *obligated* to serve the fresh data. (Or, the cached collection can be set to expire every few minutes.) Which is why the DELETE method is important. It allows the caches pertinent to the user who requested the DELETE to recognize that the resource is no longer available, and not serve stale representations even if the cache-control headers don't indicate 'expired'. > > To my knowledge the answer is that intermediaries do not know the > proper status of any cached representation of composite resources that > are affected by the proper use of DELETE upon a single resource that > is included in the GET-able composite. > Sure they do, provided the application is properly written. If your application generates a collection of member resources (a feed of Atom entries, for example) and assigns it an Etag, then when a member resource is deleted, the next request for the collection at the origin server will generate new output and therefore a new Etag. The tradeoff involved with caching is that you don't get to control un-caching precisely, since a cache can always decide to serve stale data (say, it can't connect to the origin server). > > Thus, it seems to me, that when it comes to the test of "visibility" > my hypothetical example matches the same behaviors as DELETE, PUT (as > create), and POST. 
In other words, I understand my hypothetical to > contain the proper mitigations such that visibility is not violated. > With that in mind, I conclude my hypothetical, while possibly > distasteful to some, does not violate the principles of Fielding's > work. > Visibility is a "desirable property" which results from the application of REST constraints, not a constraint itself, just to clarify. Code-on-Demand, REST's optional constraint, reduces visibility. So you can't think of it as a "visibility violation" if that helps any. As to your hypothetical: you can't just not care about caching, even if you aren't using it. Because, if you are adhering to a uniform interface design, caching is possible. If you couldn't cache even if you wanted to, it's 50% likely to mean that you haven't developed a uniform interface, and 50% likely that your hypothetical is too convoluted to represent the "common case of the Web" that REST is designed for. ;-) Some of my solutions may be unorthodox, without violating any of REST's constraints, so I sympathize with what you're trying to conclude. But, sorry, I can't get past this: "3 assume a single resource can be send to the origin server that contains all the details to handle the above methods" I'm sure you meant to say representation, not resource, but that's not my problem with it. What you're describing is sending a set of instructions to the server. In REST, the only data we send to the server is in the form of a representation of an application state. A list of URIs for batch deletion doesn't represent an application state. Instructions to move or copy a resource aren't application states. A PATCH request whose entity is a delta is a representation of the desired new state of the application. A POST request using the application/x-www-form-urlencoded media type is a representation of the desired state of the form the user dereferenced with GET. 
Sending instructions to the server isn't an unorthodox interpretation of REST, it's something fundamentally opposed to and not beginning to resemble REST. -Eric
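[Editor's note] The Etag behavior Eric describes earlier in this message (a collection whose validator changes when a member resource is deleted) might be sketched as follows. The `Collection` class and its hashing scheme are hypothetical, shown only to illustrate why a revalidating cache would detect the deletion:

```python
import hashlib

class Collection:
    """A collection resource (e.g. an Atom feed) whose Etag is derived
    from its current representation."""
    def __init__(self, members):
        self.members = list(members)

    def representation(self):
        return ",".join(self.members)

    def etag(self):
        # Validator derived from the representation: any change to the
        # membership produces a different Etag.
        return hashlib.sha1(self.representation().encode()).hexdigest()

feed = Collection(["entry-1", "entry-2", "entry-3"])
old_etag = feed.etag()

feed.members.remove("entry-2")  # a member resource is DELETEd

# A cache honoring 'must-revalidate' sends If-None-Match with the old
# Etag; the validator no longer matches, so the origin returns the fresh
# representation instead of 304 Not Modified.
assert feed.etag() != old_etag
```

This is the "provided the application is properly written" condition: the origin must actually regenerate the collection's output (and hence its Etag) when membership changes.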
On Wed, Mar 18, 2009 at 8:05 PM, Eric J. Bowman <eric@...>wrote: > Assaf Arkin wrote: > > > > > > If an API doesn't implement DELETE, and also doesn't use POST for > > > anything but deletion (single or batch), and the options are > > > presented in an HTML form then yes, it's a uniform interface. > > > However, once DELETE is also implemented, or if POST is used for > > > anything else like accepting content uploads, the interface is no > > > longer uniform, unless and until the previous usage of POST to > > > delete is deprecated. > > > > > > What would be the litmus test? > > > > Each request method should map to one and only one action, each action > should map to one and only one method, each method should mean the same > thing for all resources controlled by the application. This results in > a "consistent set of semantics for all resources" and avoids the > problems of the early Web which precluded caching, as per Fielding > 5.1.4. > > > > > > The fact remains, that only the use of the DELETE method on a > > > URI-by- URI basis is visible to intermediaries. This is the only > > > way to prevent the user who requested the deletion from reloading > > > the deleted content from cache. Except, of course, to not cache > > > anything -- thereby defeating the entire premise of using REST to > > > begin with... > > > > > > The example this thread started from creates a unique resource using > > PUT only to immediately discard it using DELETE, without ever > > retrieving that resource. > > > > Presumably, the user has retrieved the unique entries to be deleted, in > order to know they need deletion. If I send the server a list of URLs > to be deleted, or create a "delete factory" resource, then I'm not > transferring a representation of any application state -- no matter if > it's retrieved or not. 
> > > > > I provided some justification for why it would be better to replace > > the PUT/DELETE pair with a POST, likely against a resource that will > > never be retrieved. I think that falls under the uniform interface. > > > > No, in a uniform interface, an action is taken against a target URI. > If the resource to be deleted has a URI, then a DELETE request is made > against that URI -- not some other URI and/or some other method. Your > POST solution consists of multiple instructions to the server, not a > representation of an application state. That's RPC, not REST. > > > > > I'm not interested in forcing caching down the throat of this use > > case: the only interesting resources we operate on are never > > retrieved. So strawman aside, why is this use of POST not uniform > > interface? > > > > This is no strawman argument. If, in order for an API to function as > its developer intends, caching must be disabled: then the developer > must ask himself if his API is RESTful. "Do you Etag?" If you can't > cache representations of the individual resources you intend to subject > to batch delete, in order to make batch delete work, then you've > obviously broken the uniform interface constraint. If you hadn't, > you'd be able to cache without it breaking your API. > > You're saying that the "only interesting resources we operate on" > doesn't include the individual resources making up the delete batch. > I'm saying that yes, those individual resources *are* the interesting > resources, and it's *those* URIs we want to DELETE, not some other URI > acting as a temporary stand-in. > > > > > Separately, cache control has provisions for preventing clients from > > reloading deleted content, and often enough, the deleted content we > > want them to forget is not deleted by them. So you can cache > > resources and be able to magically remove not by DELETE and be very > > uniform interface about it. 
> > > > The only intermediaries of interest here, are those between the user > who requests the DELETE and the server. No other user requested the > deletion, though -- the only person who might wish to confirm that an > offensive comment has been removed is the moderator who decided to > remove it. When that moderator reloads the offensive comment, or the > thread that used to contain it, the deleted comment should never, ever > appear. This is very, very simple to accomplish -- explicitly DELETE > the URI assigned to the offensive content. > > This does nothing about keeping other users from continuing to see the > comment until its cache-control values expire. But those users didn't > request the DELETE, either. In a batch-delete situation which bypasses > the DELETE method, nothing is visible to intermediaries, and any cached > resource won't be expired, leading the hypothetical moderator who wants > to confirm the deletion to see the resource is still there, try > deleting it again, get a failure message, and become very confused. > If you are removing one resource by manipulating some other resource, > then you haven't designed a uniform interface. REST is about > performing each discrete action against a resource by manipulating that > resource directly at its URI. Not some other URI. This is fundamental. > Updating one resource by means of another is a very common use case. And resources are allowed to share state, that has never been an issue. Assaf > > -Eric >
Eric: First I began my thread talking about more than batch (MOVE, COPY). Second, I asked my question in terms of Fielding, not HTTP. In re-reading this thread, it seems that set of distinctions has been lost (at least by me). Finally, in following your responses to others on this thread it seems that a key problem you have w/ my hypothetical examples is that they result in bodies that contain non-application state. This is an additional requirement and your assertion that my hypothetical cases are, by their nature, non-application-state is an issue that I am not clear on within Fielding's work. I think I understand your assertions, I am just not clear on their origin. This is my problem, not yours, and I'll stand aside for now to review several things: Thanks again for your replies. BTW - I look forward to comments on my initial hypothetical from other parties on this list, too. mca http://amundsen.com/blog/ On Thu, Mar 19, 2009 at 03:01, Eric J. Bowman <eric@...> wrote: > mike amundsen wrote: > >> >> > What I don't see allowed in REST, is for the client to make a single >> > request that creates a new resource for today's weather, then copies >> > that data to some other resource. >> <./snip> >> >> I am unable to locate support for this assertion in Fielding's >> dissertation. >> > > The server can do whatever it wants with a client request, I've not > stated otherwise. The client cannot dictate any side-effects, only the > action it is requesting on the target URI. HTTP has one target URI and > one method. There is no provision in HTTP for a single client request > to dictate some side effect on some other URI. All a client can do is > make a single request against a single resource in REST, there is no > allowance for batch processing (where a single client request dictates > that multiple actions be taken on multiple URIs). > > -Eric >
Assaf Arkin wrote: > > > If you are removing one resource by manipulating some other > > resource, then you haven't designed a uniform interface. REST is > > about performing each discrete action against a resource by > > manipulating that resource directly at its URI. Not some other > > URI. This is fundamental. > > > > Updating one resource by means of another is a very common use case. > Updating a member resource often does have the side effect of updating the collection resource. But, that's server behavior. The client cannot dictate that side effect. Can a single FTP request both create one file and update another? Neither can HTTP requests, although that may be the side effect in the case of either protocol -- depending on application behavior. > > And resources are allowed to share state, that has never been an > issue. > I have never maintained otherwise. If the client wants to control both creating a new resource, and updating another resource to mirror the newly-created one, then the client needs to make two requests against two URIs. It cannot, in REST, piggyback any further action on a single request. The server can, of course, update the other resource to mirror the one the client just created. But since the client didn't request that, the client doesn't need a status notification for that action, nor can the client override that action, nor can the client count on that action in the future. -Eric
On Thu, Mar 19, 2009 at 1:06 AM, Eric J. Bowman <eric@...>wrote: > Assaf Arkin wrote: > > > > > > If you are removing one resource by manipulating some other > > > resource, then you haven't designed a uniform interface. REST is > > > about performing each discrete action against a resource by > > > manipulating that resource directly at its URI. Not some other > > > URI. This is fundamental. > > > > > > > Updating one resource by means of another is a very common use case. > > > > Updating a member resource often does have the side effect of updating > the collection resource. But, that's server behavior. The client > cannot dictate that side effect. Can a single FTP request both create > one file and update another? Neither can HTTP requests, although that > may be the side effect in the case of either protocol -- depending on > application behavior. > > > > > And resources are allowed to share state, that has never been an > > issue. > > > > I have never maintained otherwise. If the client wants to control both > creating a new resource, and updating another resource to mirror the > newly-created one, then the client needs to make two requests against > two URIs. It cannot, in REST, piggyback any further action on a single > request. > > The server can, of course, update the other resource to mirror the one > the client just created. But since the client didn't request that, the > client doesn't need a status notification for that action, nor can the > client override that action, nor can the client count on that action in > the future. I'm not sure where all that client/server dichotomy comes from. In the scenario I proposed the server tells the client how to construct a request that will affect multiple states. For example, a Web email that lets the client delete multiple messages at once by sending a form with one checkbox next to each email. 
I don't see a client forcing its will on the server, server doing actions not requested by the client, or anything beyond plain HTTP. Assaf > > > -Eric >
Assaf Arkin wrote: > > I'm not sure where all that client/server dichotomy comes from. > Client behavior and server behavior are both opaque to REST. However, clients and servers are constrained in how they may talk to one another by the uniform connector interface. A server can take multiple actions based on a single request, but a client cannot dictate multiple actions to the server by making a single request. A client can perform multiple actions based on the receipt of one response, but the server cannot dictate multiple client behaviors within a response. For example, a server may redirect a client request, but it can't tell the client to also change its request method. A server may very well send a response which triggers certain scripted behavior on the client, but that isn't done at the protocol level. A client may very well send a request which triggers certain scripted behavior on the server, that also isn't done at the protocol level. Communication between client and server is done at the protocol level, by having the client request that a server take one action, limited to a small number of available methods. The client never sends instructions to the server in REST. > > In the scenario I proposed the server tells the client how to > construct a request that will affect multiple states. For example, a > Web email that lets the client delete multiple messages at once by > sending a form with one checkbox next to each email. > If a multiple-delete form is written using XForms 1.1, then the client will perform discrete DELETE requests against each selected resource, and this is perfectly acceptable. With regular HTML forms, though, the method is constrained to POST or GET. So the server is telling the client how to construct a POST request which bypasses the DELETE method. The data sent back to the server takes the form of operating instructions, rather than a request for one specific action to be taken against one discrete URI. 
The uniform interface is not used -- in a uniform interface the DELETE method is requested for each URI the client wants to delete. The submitted POST in your example resembles what application state? Can that application state be retrieved by a GET request? > > I don't see a client forcing its will on the server, server doing > actions not requested by the client, or anything beyond plain HTTP. > Sure, it's HTTP, but HTTP != REST. In a uniform interface, if the client wants to delete multiple resources, then the client makes a DELETE request against each URI to be deleted. Each request generates a success/fail response which is visible to intermediaries, allowing any caches between the user who requested the delete and the origin server to expire all cached representations of a deleted resource. This is fundamental. This cannot happen when special instructions to the server are POSTed via an HTML forms interface. No intermediary can possibly surmise that any deletion has occurred. POST is borked into meaning deletion instead of its generic-interface meaning of addition. DELETE has the generic-interface meaning of deletion, but it isn't involved in the delete requests at all. Your comment that the "server isn't doing any actions not requested by the client" isn't quite right. It may look to the user like an HTML form allowing deletion, but that isn't what the client is requesting, because the request method isn't DELETE. So the server is, indeed, taking action (deletion) that has nothing to do with the request method (whatever POST means, it doesn't mean DELETE since that's its own method). So, yes, deleting resources with some method other than DELETE results in an API that does not resemble a uniform interface. Deleting resources using the DELETE method has absolutely no downside, with the benefit of being visible to intermediaries, as envisioned by REST. -Eric
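[Editor's note] The visibility argument above might be illustrated with a toy intermediary. Everything here (the `Origin` and `Cache` classes, the comment URIs, the batch-delete body format) is an assumption made up for this sketch; it only shows that a cache can act on a DELETE it relays, but not on an opaque POST body:

```python
class Origin:
    """Minimal origin server holding two comment resources."""
    def __init__(self):
        self.resources = {"/comments/1": "rude comment", "/comments/2": "fine"}

    def handle(self, method, uri, body=None):
        if method == "GET":
            if uri in self.resources:
                return (200, self.resources[uri])
            return (404, None)
        if method == "DELETE":
            self.resources.pop(uri, None)
            return (204, None)
        if method == "POST":
            # Hypothetical batch delete: a semicolon-separated list of IDs,
            # opaque to any intermediary on the path.
            for cid in body.split(";"):
                self.resources.pop(f"/comments/{cid}", None)
            return (200, None)

class Cache:
    """Intermediary that can only act on what is visible at the protocol level."""
    def __init__(self, origin):
        self.origin, self.store = origin, {}

    def handle(self, method, uri, body=None):
        if method == "GET" and uri in self.store:
            return (200, self.store[uri])     # served from cache
        status, rep = self.origin.handle(method, uri, body)
        if method == "GET" and status == 200:
            self.store[uri] = rep
        elif method == "DELETE":
            self.store.pop(uri, None)         # visible: the cache evicts
        # A POST body is opaque; the cache cannot know what it deleted.
        return (status, rep)

# Per-URI DELETE: the intermediary sees it and stops serving the stale copy.
cache = Cache(Origin())
cache.handle("GET", "/comments/1")
cache.handle("DELETE", "/comments/1")
assert cache.handle("GET", "/comments/1") == (404, None)

# Batch POST: the origin deletes, but the cache keeps serving stale data.
cache2 = Cache(Origin())
cache2.handle("GET", "/comments/2")
cache2.handle("POST", "/batch-delete", "2")
assert cache2.handle("GET", "/comments/2") == (200, "fine")  # stale!
```

The second scenario is the confused-moderator case: the origin has deleted the comment, yet the cache between moderator and origin never learns of it.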
2009/3/18 Eric J. Bowman <eric@...> > > Absolutely not. I said it's only *designed* for the common case of the > Web, taken directly from Dr. Fielding's dissertation. > However, the dissertation's title refers to the "Design of Network-based Software Architectures". Now, is the Web the only network-based software architecture we have? Isn't an enterprise-wide architecture a network-based one? Or an extranet-based one? Certainly Dr. Fielding's previous work on the Web is the foundation for his dissertation, but my understanding is that it encompasses far more than "the Web"...
I don't see why it is un-RESTful for the client to say to the server: delete all the clients whose name starts with an 'A', or delete all the journal entries between 2008-01-01 and 2008-12-31. It's easy to do a GET /clients/A*, so why not a DELETE /clients/A*? Also, if you POST a batch-delete operation POST /clients/deleteFactory 1001;1002;1003 you are just passing parameters to a resource whose purpose is to delete clients, not to operate against other resources. Maybe it is deleting resources /client/1001 ... but that is a side-effect *of the server*. This request asks for a "business" operation on a resource, but from the point-of-view of the client, the client doesn't even know that resources corresponding to 1001... exist. Now I admit that having resources like this smells terribly like using verbs in URIs, and thus like RPC. And it doesn't solve the cache problem. Also, you said elsewhere that the client sends an "application state" to the server; is that a lapse of yours, or is it not the other way around? I acknowledge that this thread is getting too "dense" for my experience with REST, so bear with me if I'm being too "simplistic"... 2009/3/19 Eric J. Bowman <eric@bisonsystems.net>: > Assaf Arkin wrote: > >> >> I'm not sure where all that client/server dichotomy comes from. >> > > Client behavior and server behavior are both opaque to REST. However, > clients and servers are constrained in how they may talk to one another > by the uniform connector interface. A server can take multiple actions > based on a single request, but a client cannot dictate multiple actions > to the server by making a single request. > > A client can perform multiple actions based on the receipt of one > response, but the server cannot dictate multiple client behaviors > within a response. For example, a server may redirect a client > request, but it can't tell the client to also change its request method. 
> > A server may very well send a response which triggers certain scripted > behavior on the client, but that isn't done at the protocol level. A > client may very well send a request which triggers certain scripted > behavior on the server, that also isn't done at the protocol level. > Communication between client and server is done at the protocol level, > by having the client request that a server take one action, limited to > a small number of available methods. The client never sends > instructions to the server in REST. > >> >> In the scenario I proposed the server tells the client how to >> construct a request that will affect multiple states. For example, a >> Web email that lets the client delete multiple messages at once by >> sending a form with one checkbox next to each email. >> > > If a multiple-delete form is written using Xforms 1.1, then the client > will perform discrete DELETE requests against each selected resource, > and this is perfectly acceptable. With regular HTML forms, though, the > method is constrained to POST or GET. So the server is telling the > client how to construct a POST request which bypasses the DELETE > method. > > The data sent back to the server takes the form of operating > instructions, rather than a request for one specific action to be taken > against one discrete URI. The uniform interface is not used -- in a > uniform interface the DELETE method is requested for each URI the > client wants to delete. > > The submitted POST in your example resembles what application state? > Can that application state be retrieved by a GET request? > >> >> I don't see a client forcing its will on the server, server doing >> actions not requested by the client, or anything beyond plain HTTP. >> > > Sure, it's HTTP, but HTTP != REST. In a uniform interface, if the > client wants to delete multiple resources, then the client makes a > DELETE request against each URI to be deleted. 
Each request generates > a success/fail response which is visible to intermediaries, allowing > any caches between the user who requested the delete and the origin > server to expire all cached representations of a deleted resource. > This is fundamental. > > This cannot happen when special instructions to the server are POSTed > via an HTML forms interface. No intermediary can possibly surmise that > any deletion has occurred. POST is borked into meaning deletion > instead of its generic-interface meaning of addition. DELETE has the > generic-interface meaning of deletion, but it isn't involved in the > delete requests at all. > > Your comment that the "server isn't doing any actions not requested by > the client" isn't quite right. It may look to the user like an HTML > form allowing deletion, but that isn't what the client is requesting, > because the request method isn't DELETE. So the server is, indeed, > taking action (deletion) that has nothing to do with the request method > (whatever POST means, it doesn't mean DELETE since that's its own > method). > > So, yes, deleting resources with some method other than DELETE results > in an API that does not resemble a uniform interface. Deleting > resources using the DELETE method has absolutely no downside, with the > benefit of being visible to intermediaries, as envisioned by REST. > > -Eric >
At Thu, 19 Mar 2009 00:10:15 -0600, Eric J. Bowman wrote: > The goal of REST is a uniform interface, not a global interface. One > API may assign "create" semantics to POST and "update" semantics to > PUT. Another API may assign "create" semantics to PUT and "update" > semantics to PATCH. Neither is wrong, yet neither are they compatible, > even if both applications do exactly the same thing. > > What's "global" to me, is whether a given method is idempotent or not, > plus GET and DELETE, which leaves plenty of room for interpretation -- > the gist of an architectural style. Split-level-ranch houses come in > all shapes and sizes. So do REST APIs. By "resources controlled by > the application" I mean "resources inside the same house" like having > uniform wiring throughout. > > One house may be 110 volts, another 220, neither violates specs but > neither are they compatible, due to the lack of a global constraint on > voltage. All the outlets should be the same within the house, although > one house may have spade prongs and another, cylindrical. The > constraint is that one or the other is chosen and adhered to throughout. > > (A house with both 110 and 220 would be un-RESTful in my example. ;-) Hi Eric - Thanks for the response. Leaving aside the larger issues of batch processing, about which I haven't formed an opinion, I think that you are not making a very convincing case here. Yes, it would be a good idea if your application only uses POST for one thing. But I don't see why it would be 'un-RESTful' to do otherwise. Since the web is a global system, either a constraint on the architecture of the web applies globally (110 volt everywhere) or it does not apply (110 or 220 as you choose, even in the same house). You can add on to this and say 'this house is 110 volt only', but doing otherwise is not 'un-RESTful'. best, Erik Hetzner
Dong Liu wrote: > > Although the context of the original question of including multiple > resources in a DELETE was not clear, I assumed that the delete task > of those resources should be atomic. That is, if successful, all > resources are deleted, or if failed, none of the resources is > deleted. Separate DELETE requests one after the other cannot achieve > this goal. > OK, we each mean a different thing by "atomic". What you're suggesting isn't the sort of thing REST supports. BDELETE can't even meet your goal. What you're after is a server behavior that the client can control by sending instructions to the server. Nothing wrong with that, but that aspect of your API does deviate from REST, which isn't a solution to every problem out there. -Eric
Erik Hetzner wrote: > > Yes, it would be a good idea if your application only uses POST for > one thing. But I don't see why it would be 'un-RESTful' to do > otherwise. > Once upon a time, REST was known as the "HTTP Request Object" and it's what I used to write a CMS using Server-Side Javascript in 1998 -- my first OOP project. In OOP, one writes a separate method for each action. One doesn't write multiple actions for each method. That just isn't the OOP paradigm, nor is it the OOP-based REST paradigm. In REST, each action an API allows against its resources is given its own request method. Let me turn the tables on you, and ask if you can find any support in Roy's writings for allowing multiple semantics per method? I take sentences like "constrain the interface to a consistent set of semantics for all resources" very seriously. If a method can have more than one action, then the method must mean a different thing based on URI or media type. Media types aren't meant to describe method semantics. If some URIs handle a method in one fashion, and other URIs in the same system handle a method in some other fashion, then there certainly isn't a "consistent set of semantics for all resources" is there? -Eric
António Mota wrote: > > > Absolutely not. I said it's only *designed* for the common case of > > the Web, taken directly from Dr. Fielding's dissertation. > > > > However, said dissertation title refers to "Design of Network-based > Software Architectures". Now, is the Web the only "Network-based > Software Architecture" we have? Isn't an enterprise-wide architecture > a network-based one? Or an extranet-based architecture? Certainly the > previous work of Dr. Fielding on the web is the foundation for his > dissertation, but my understanding is that it encompasses far more than > "the Web"... > Yes, it does. Re-read the summary at the end of Roy's dissertation. It lays the groundwork for understanding network-based software architecture, then applies that knowledge to describe a new architectural style optimized for the "common case of the Web". The insight contained in the dissertation could be used to devise a new architectural style called "FEST" which takes into account the specialized needs of an enterprise-wide architecture, if those needs are significantly different from the common case of the Web. The resulting design could optimize PUT, for example, but this may very well require de-optimizing GET. It may even be a uniform interface. It wouldn't be REST, but would qualify as "inspired by REST". -Eric
> > Yes, it does. Re-read the summary at the end of Roy's dissertation. > Sorry, I meant re-read the summary at the end of the *introduction* to Roy's dissertation... -Eric
António Mota wrote: > > I don't see why it is un-RESTful for the client to say to the server > > delete all the clients whose name starts with an 'A' > or > delete all the journal entries between 2008-01-01 and 2008-12-31 > Because in REST, there is no provision to perform some action against multiple URIs in one request. If you have a resource that lists all clients whose names begin with 'A' then you can have the client iterate over the URIs in that resource and DELETE them each in turn. A REST request consists of one action taken against one URI, such that each action can receive a response which includes a status code. > > It's easy to do a > > GET /clients/A* > > so why not a DELETE /clients/A* > A REST request consists of one action taken against one URI. You issue a GET request against a URI, and a representation of the resource is returned. There is no such thing as a wildcard request in REST, even though you can certainly code an application this way. If you really want to see a list of all clients whose name begins with 'A' then: GET /search?clients=A* Now you've defined a resource, which has a URI, and contains the desired data. But, a DELETE against /search?clients=A* would not be expected, in a generic interface, to delete a bunch of individual records. It would be expected to delete the resource identified as /search?clients=A* and nothing else. In a generic interface, a DELETE request is issued against each URI targeted for deletion. > > Also, if you POST a batch-delete operation > > POST /clients/deleteFactory > 1001;1002;1003 > > you are just passing parameters to a resource whose purpose is to > delete clients, not to operate against other resources. Maybe it is > deleting resources /client/1001 ... but that is a side-effect *of the > server*. > No, it is not. The client's request is that the server delete records 1001-1003. Those are the semantics of the interaction between components, even if the method is an overloaded POST.
What you are describing is a textbook example of an RPC request, where you are passing parameters to some procedure you're calling on the server. The question to ask yourself is, "What is returned when I GET /clients/deleteFactory?" Nothing? That strongly suggests an RPC endpoint, not a REST resource. The purpose of that RPC call is to delete resources without using the DELETE method. In REST, the proper way to do this is: DELETE /client/1001 DELETE /client/1002 DELETE /client/1003 Each resource you want to delete has a URI. Each URI has a DELETE method. So, call the DELETE method of the URI you want to delete. It's that straightforward; this is not a wheel which needs reinventing. > > This request requests a "business" operation on a resource, > but from the point of view of the client, the client doesn't even know > that there exist resources corresponding to 1001... > The client doesn't need to know. If the client issues a DELETE request to /client/1001 and that resource doesn't exist, the response will be 404 (or perhaps 410 if it used to exist). > > Now I admit that having resources like this smells terribly like using > verbs in URIs and thus like RPC. And it doesn't solve the cache > problem. > Exactly. But if you follow REST you won't have RPC endpoints, and your caching problems will have already been solved. > > Also, you said elsewhere that the client sends an "application-state" > to the server, is that a lapse of yours? Is it not the other way around? > Yes, I'm correct. ;-) REST stands for REpresentational State Transfer. Clients and servers communicate with each other by passing representations of the state of a resource back and forth. Imagine a resource, "picture of me". I have a representation of this resource on my workstation, "eric.jpg". I want to post this photo to my website, so I PUT the eric.jpg representation to the URI http://ericjbowman.com/photos/eric.jpg.
Before I uploaded this representation, this was the application state: GET /photos/eric.jpg 404 Not Found After I uploaded the image, this became the new application state: GET /photos/eric.jpg 200 OK + eric.jpg By transferring to the server a representation of my desired application state, I have instructed the server (by making a PUT request) to change the state of /photos/eric.jpg from not found, to the new state where it identifies the resource "picture of me", and returns the representation I uploaded on subsequent GET requests. > > I acknowledge that this thread is getting too "dense" for my > experience with REST so bear with me if I'm being too "simplistic"... > Actually, I think you're making this all too complicated on yourself... but don't worry about it, REST is not easy to grasp. Primarily because it's so different than anything we've been taught before about software. My advice is to let go of this notion of "factory resources" as it will cause you no end of confusion to try to think of REST in such terms. -Eric
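[Editor's note: Eric's "call the DELETE method of each URI" pattern is simple enough to sketch. The following is a hypothetical illustration, not code from this thread; the URIs are made up and the `send` callable stands in for whatever HTTP client you use.]

```python
def delete_each(uris, send):
    """Call the DELETE method of every URI, one request per resource.

    Unlike a single batch endpoint, each deletion is individually
    visible to intermediaries and gets its own status code.
    """
    return {uri: send("DELETE", uri) for uri in uris}

# A fake transport standing in for a real HTTP client, so the sketch
# runs without a server. 404 simulates an already-deleted record.
def fake_send(method, uri):
    return 404 if uri.endswith("1002") else 200

statuses = delete_each(
    ["/client/1001", "/client/1002", "/client/1003"], fake_send)
```

Because DELETE is idempotent, a 404 on one member is harmless: the client simply learns that the resource was already gone, exactly as Eric describes.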
The idea that composite resources cannot be used to solve those problems seems to conflict slightly with what Roy was saying on his blog not long ago:

> Resources are not storage items (or, at least, they aren’t always equivalent to some storage item on the back-end). The same resource state can be overlayed by multiple resources, just as an XML document can be represented as a sequence of bytes or a tree of individually addressable nodes. Likewise, a single resource can be the equivalent of a database stored procedure, with the power to abstract state changes over any number of storage items. If you find yourself in need of a batch operation, then most likely you just haven’t defined enough resources.

I see this as a validation of the idea that whenever you're trying to "batch" operations, you should probably have another resource that, when acted upon, has the power to change multiple states behind the scenes, without letting the client or intermediaries know what happened. Just because your operation may cause caches to serve stale representations doesn't mean you're unrestful. Quite the opposite, the power of REST is indeed in its capacity to be temporarily inconsistent.

Seb
Well, I re-read it, and although I don't see any reference to a "common case of the Web" I do see a reference to "the modern World Wide Web". However I don't agree with your assumption that REST, as you said, "it's only *designed* for the common case of the Web" and I don't see how anyone can infer that from that summary, which I quote:

> In summary, this dissertation makes the following contributions to software research within the field of Information and Computer Science:
>
> - a framework for understanding software architecture through architectural styles, including a consistent set of terminology for describing software architecture;
> - a classification of architectural styles for network-based application software by the architectural properties they would induce when applied to the architecture for a distributed hypermedia system;
> - REST, a novel architectural style for distributed hypermedia systems; and,
> - application and evaluation of the REST architectural style in the design and deployment of the architecture for the modern World Wide Web.

"REST, a novel architectural style for distributed hypermedia systems". Not for the "common case of the Web", even if after that it speaks of "application and evaluation" with respect to the WWW. Now, far be it from me to assume what Mr Fielding was thinking when he wrote that, but since he certainly wrote it for other people to interpret, even if I'm wrong this is my interpretation. The WWW existed before Mr Fielding wrote this dissertation, and afterwards nobody stopped the Web for a few days to make it conform to the dissertation. I think, but maybe I'm wrong, that Mr Fielding started from the "particular" case of the web to abstract a much larger set of definitions applicable to "network-based application software" and "distributed hypermedia systems" (of which the web is a particular case) and, after doing that and having those definitions in place, checked those principles against the Web.
Which I think is a very common way of reasoning, going from the particular to the abstract, and then to the particular again. It's what I usually do in IT systems (at least since I read the work of Gerald Weinberg in "An Introduction to General Systems Thinking"). So, was Mr Fielding's thesis based on the WWW? Yes. Is it, as you said, "only *designed* for the common case of the Web"? No. It was designed for "network-based, distributed hypermedia systems". Of which the Web is the most prominent, but not the only one. Any enterprise-wide application can follow a REST style, or any extranet-like application also. Now, I can be wrong in all this interpretation, but even so that won't stop me from trying to apply REST in my current work of designing an enterprise service infrastructure for our applications :) ... 2009/3/20 Eric J. Bowman <eric@...> > António Mota wrote: > > > > > > Absolutely not. I said it's only *designed* for the common case of > > > the Web, taken directly from Dr. Fielding's dissertation. > > > > > > > And however said dissertation title refers to "Design of Network-based > > Software Architectures". Now, is the Web the only "Network-based > > Software Architectures" we have? Isn't a enterprise-wide architecture > > a network-based one? Or a extranet-based architecture? Certainly the > > previous work of Dr. Fielding on the web is the foundation for his > > dissertation, but my understanding of it, it encompasses far more than > > "the Web"... > > > > Yes, it does. Re-read the summary at the end of Roy's dissertation. > It lays the groundwork for understanding network-based software > architecture, then applies that knowledge to describe a new > architectural style optimized for the "common case of the Web".
The > insight contained in the dissertation could be used to devise a new > architectural style called "FEST" which takes into account the > specialized needs of an enterprise-wide architecture, if those needs > are significantly different from the common case of the Web. The > resulting design could optimize PUT, for example, but this may very > well require de-optimizing GET. It may even be a uniform interface. > It wouldn't be REST, but would qualify as "inspired by REST". > > -Eric >
António Mota wrote: > > Well, I re-read it, and although I don't see any reference to a > "common case of the Web" I do see a reference to "the modern World > Wide Web". However I don't agree with your assumption that REST, as > you said "it's only *designed* for the common case of the Web" and I > don't see how anyone can infer that from that summary, which I quote: > I'm sorry, but you misunderstood me. The summary explains how the dissertation lays the groundwork for understanding network-based software architecture, then applies that knowledge to describe a new architectural style, which is revealed in Chapter 5 to be optimized for the "common case of the Web". (5.1.5) > > So, was Mr Fielding thesis based on the WWW? Yes. Is it, as you said, > "only *designed* for the common case of the Web"? No. It was designed > for "network-based, distributed hypermedia systems". Of which the Web > is the most prominent, but not the only. Any enterprise-wide > application can follow a REST style, or any extranet-like application > also. > REST describes a uniform connector interface optimized for the common case of the Web. Therefore, if you are trying to apply it to a problem area that doesn't resemble the common case of the Web, you are not using REST as it was designed. This is not to say it won't work! I have gadgets around my house that I use to do things they weren't designed for. I don't use the claw part of my hammer to pull nails, I use it to turn on my TV. (Don't ask.) -Eric
Sebastien Lambla wrote: > > The idea that composite resources cannot be used to solve those > problems seems to conflict slightly with what Roy was saying on his > blog not long ago > Five months without posting qualifies as very long ago in blogtime... :-D I realize you weren't responding to me directly, but I'd still like to comment... > > (quoting Roy Fielding) > Resources are not storage items (or, at least, they aren’t always > equivalent to some storage item on the back-end). The same resource > state can be overlayed by multiple resources, just as an XML document > can be represented as a sequence of bytes or a tree of individually > addressable nodes. Likewise, a single resource can be the equivalent > of a database stored procedure, with the power to abstract state > changes over any number of storage items. > A collection resource can handle a DELETE request made against the collection, as a stored procedure to delete all member resources of the collection (composite resource, if you will). The problem in this thread, is the desire to have the client request that the server delete members 2, 5 and 9 from a collection. > > If you find yourself in need of a batch operation, then most likely > you just haven’t defined enough resources. > True enough. A collection resource can be created which contains only members 2, 5 and 9 and treats DELETE as a stored procedure to delete all members along with the collection resource itself. No problem. But, hypothetically, why? I still see no compelling reason to optimize DELETE at the expense of the visibility which allows proper cache behavior. > > I see this as a validation of the idea that whenever you're trying to > "batch" operations, you should probably have another resource that, > when acted upon, has the power to change multiple states behind the > scenes, without letting the client or intermediaries know what > happened. 
> The problem is, the nature of a batch operation is that of a client request for multiple actions to be taken. This is different, in my mind, than a stored procedure. A stored procedure doesn't need to tell the client anything about actions the client didn't request, true enough. But, if the client is requesting a batch job, then the client needs to be notified of the results of each aspect of the batch job. See BDELETE. You can't claim that something like /deleteFactory is a stored procedure, when it's accepting variable input from clients. > > Just because your operation may cause caches to serve staled > representations doesn't mean you're unrestful. Quite the opposite, > the power of REST is indeed in its capacity to be temporarily > inconsistent. > True enough, but... There's nothing to be done about caches serving stale data, not with cache control headers anyway. But there is the special case of DELETE, which allows immediate cache expiration on those specific caches between the user requesting the DELETE and the origin server. Deleting members 2, 5 and 9 from a collection in any way other than three separate DELETE operations eliminates even the *possibility* of prompt expiration of 2, 5 and 9 from those specific caches pertinent to the user who requested the DELETE (other users have no compelling need for immediate expiration). You've gone from 99% probability that deleted resources will be expired from the cache of the user who requested the delete, to 0%. Not using DELETE to delete may be allowed as a stored procedure, but it brings about none of the desirable behavior of DELETE and has no advantages other than saving what's most likely a trivial amount of network round-trips -- requests that don't even have message bodies. If a client is requesting multiple deletions, then the client needs to issue multiple DELETE requests. -Eric
Well, I'm not using it for the "common case of the Web", I'm using it in a "network-based software architecture". I think there's no point in discussing what's my interpretation and your interpretation. Interpretations are just that, they don't change the nature of things, just how we look at them. I think you're wrong, you think I'm wrong, go figure, maybe we're both wrong... That doesn't affect my work, though. And like Candide said to the good Dr. Pangloss, "all that is very well, but let us cultivate our garden"... I know I do! On Mar 20, 2009 11:35am, "Eric J. Bowman" <eric@bisonsystems.net> wrote: > António Mota wrote: > > > > Well, I re-read it, and although I don't see any reference to a > > "common case of the Web" I do see a reference to "the modern World > > Wide Web". However I don't agree with your assumption that REST, as > > you said "it's only *designed* for the common case of the Web" and I > > don't see how anyone can infer that from that summary, which I quote: > > > I'm sorry, but you misunderstood me. The summary explains how the > dissertation lays the groundwork for understanding network-based > software architecture, then applies that knowledge to describe a new > architectural style, which is revealed in Chapter 5 to be optimized for > the "common case of the Web". (5.1.5) > > > > So, was Mr Fielding thesis based on the WWW? Yes. Is it, as you said, > > "only *designed* for the common case of the Web"? No. It was designed > > for "network-based, distributed hypermedia systems". Of which the Web > > is the most prominent, but not the only. Any enterprise-wide > > application can follow a REST style, or any extranet-like application > > also. > > > REST describes a uniform connector interface optimized for the common > case of the Web. Therefore, if you are trying to apply it to a problem > area that doesn't resemble the common case of the Web, you are not > using REST as it was designed. This is not to say it won't work!
I > have gadgets around my house that I use to do things they weren't > designed for. I don't use the claw part of my hammer to pull nails, I > use it to turn on my TV. (Don't ask.) > -Eric
On 20.03.2009, at 12:28, Eric J. Bowman wrote: > > > > If you find yourself in need of a batch operation, then most likely > > you just haven’t defined enough resources. > > > > True enough. A collection resource can be created which contains only > members 2, 5 and 9 and treats DELETE as a stored procedure to delete > all > members along with the collection resource itself. No problem. But, > hypothetically, why? I still see no compelling reason to optimize > DELETE at the expense of the visibility which allows proper cache > behavior. Atomicity? Stefan
I thought that's what we've been talking about since the beginning...

On Mar 20, 2009 12:11pm, Stefan Tilkov <stefan.tilkov@innoq.com> wrote:
> On 20.03.2009, at 12:28, Eric J. Bowman wrote:
> > > If you find yourself in need of a batch operation, then most likely
> > > you just haven’t defined enough resources.
> >
> > True enough. A collection resource can be created which contains only
> > members 2, 5 and 9 and treats DELETE as a stored procedure to delete all
> > members along with the collection resource itself. No problem. But,
> > hypothetically, why? I still see no compelling reason to optimize
> > DELETE at the expense of the visibility which allows proper cache
> > behavior.
>
> Atomicity?
>
> Stefan
I thought so, too, which is why I was puzzled by Eric's post.

Stefan

On 20.03.2009, at 13:14, amsmota@... wrote:
> I thought that's what we've been talking about since the beginning...
>
> On Mar 20, 2009 12:11pm, Stefan Tilkov <stefan.tilkov@...> wrote:
> > On 20.03.2009, at 12:28, Eric J. Bowman wrote:
> > > > If you find yourself in need of a batch operation, then most likely
> > > > you just haven’t defined enough resources.
> > >
> > > True enough. A collection resource can be created which contains only
> > > members 2, 5 and 9 and treats DELETE as a stored procedure to delete all
> > > members along with the collection resource itself. No problem. But,
> > > hypothetically, why? I still see no compelling reason to optimize
> > > DELETE at the expense of the visibility which allows proper cache
> > > behavior.
> >
> > Atomicity?
> >
> > Stefan
On Fri, Mar 20, 2009 at 11:28 AM, Eric J. Bowman <eric@...>wrote: > There's nothing to be done about caches serving stale data, not with > cache control headers anyway. But there is the special case of DELETE, > which allows immediate cache expiration on those specific caches between > the user requesting the DELETE and the origin server. Deleting members > 2, 5 and 9 from a collection in any way other than three separate DELETE > operations eliminates even the *possibility* of prompt expiration of 2, > 5 and 9 from those specific caches pertinent to the user who requested > the DELETE (other users have no compelling need for immediate > expiration). You've gone from 99% probability that deleted resources > will be expired from the cache of the user who requested the delete, to > 0%. > An off-the-wall suggestion: Since DELETE is idempotent, how about sending the separate DELETE commands to the server after the batch delete anyway, for the sheer purpose of invalidating the intermediate caches?
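[Editor's note: that suggestion can be sketched in a few lines. This is a hypothetical illustration, not code from the thread; the URIs and helper names are invented, and `send` stands in for an HTTP client. Since DELETE is idempotent, replaying the individual DELETEs after the atomic batch operation is semantically harmless, and gives each cache on the path a chance to expire the member URIs.]

```python
def batch_delete_then_invalidate(batch_uri, member_uris, send):
    """Run the atomic batch delete first, then issue one DELETE per
    member purely so intermediary caches can expire each resource."""
    trace = [("POST", batch_uri)]
    send("POST", batch_uri)            # the atomic batch operation
    for uri in member_uris:
        send("DELETE", uri)            # idempotent; a 404 here is fine
        trace.append(("DELETE", uri))
    return trace

# Record what goes over the wire, standing in for a real transport.
seen = []
trace = batch_delete_then_invalidate(
    "/orders/batch-delete", ["/orders/2", "/orders/5", "/orders/9"],
    lambda method, uri: seen.append((method, uri)))
```

The extra round-trips buy nothing on the origin server (the resources are already gone), which is exactly the point: they exist only for the caches in between.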
> True enough. A collection resource can be created which contains only > members 2, 5 and 9 and treats DELETE as a stored procedure to delete all > members along with the collection resource itself. No problem. But, > hypothetically, why? I still see no compelling reason to optimize > DELETE at the expense of the visibility which allows proper cache > behavior. And > If a client is requesting multiple deletions, then the client needs to > issue multiple DELETE requests. The core of the issue is that some resources only make sense to be deleted *together*, because that's what makes sense to the application. If I want to delete 3 orders because one credit card has been rejected, I can either delete them sequentially (and potentially end up in an inconsistent internal state), or group them together as another resource (let's say ordersForCreditCardXxx) and delete that resource as a unit. The idea that somehow those scenarios are not needed, or that retry and pray semantics of deleting multiple resources is usable in every situation is completely alien to me. Those are real-world scenarios where you are trying to do multiple things as a unit, not to save network calls or bandwidth, but because they belong logically to the same operation. Having such a composite resource you can delete on is perfectly reasonable. Yes the caches won't see it, but that's a trade-off you have to decide for yourself: is the cache consistency more important than the resulting state of my resource(s). To add insult to injury, intermediaries are not required to actually stale a representation upon receiving a DELETE. And the very nature of proxies as they're used today means that while your proxy may well delete the representation, mine won't, putting us exactly where we were. At the end of the day, if you value your squid cache beyond the inherent atomicity of certain operations, you're quite free to do so. It's a tradeoff. 
But, IMHO, neither approach is unrestful, by nature or by definition. Seb
amsmota@... wrote: > > I thought that's what we've been talking since the beginning... > Yes, I've been saying the same things repeatedly. You *can* do this, it just isn't using the uniform interface, therefore it isn't REST, if it's the client dictating what batch of resources to delete. In a uniform interface, the client calls the DELETE method of each resource it is interested in deleting. Otherwise the client has zero chance of accurately verifying deletion, because nothing is visible. -Eric
Stefan Tilkov wrote: > > > > > > > If you find yourself in need of a batch operation, then most > > > likely you just haven’t defined enough resources. > > > > > > > True enough. A collection resource can be created which contains > > only members 2, 5 and 9 and treats DELETE as a stored procedure to > > delete all > > members along with the collection resource itself. No problem. But, > > hypothetically, why? I still see no compelling reason to optimize > > DELETE at the expense of the visibility which allows proper cache > > behavior. > > Atomicity? > Nope. What we've danced around without discussing in this thread, is Code on Demand. If you want a batch of operations to succeed or fail as a unit (presumably what everyone but me means by atomic), then you may well have a compelling reason to implement REST's optional constraint. An applet running in the client could GET each member of the set, cache it, DELETE it, and if any deletion fails, PUT each member back. Or somesuch. All the benefit of a uniform interface, while implementing an operation that otherwise just doesn't fit with REST. -Eric
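[Editor's note: Eric's GET/DELETE/PUT-it-back idea is essentially compensation logic, which can be sketched as below. This is a hypothetical illustration with invented names and an in-memory store in place of a server; note it provides rollback-on-failure, not true atomicity, since a crash mid-way still leaves partial state.]

```python
def delete_all_or_restore(uris, get, delete, put):
    """Delete every URI via the uniform interface; if any DELETE
    fails, PUT the cached representations back and report failure."""
    cached = {uri: get(uri) for uri in uris}   # save each representation
    deleted = []
    for uri in uris:
        if not delete(uri):
            for done in deleted:               # compensate: restore
                put(done, cached[done])
            return False
        deleted.append(uri)
    return True

# Fake server state; /set/5 refuses deletion to trigger the rollback.
store = {"/set/2": "two", "/set/5": "five", "/set/9": "nine"}
ok = delete_all_or_restore(
    ["/set/2", "/set/5", "/set/9"],
    get=store.get,
    delete=lambda u: u != "/set/5" and store.pop(u, None) is not None,
    put=store.__setitem__)
```

Every request in the sketch uses a uniform method (GET, DELETE, PUT) against an individual URI, so intermediaries see each step, which is the property Eric is defending.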
> An applet running in the client could GET each member of the set, cache > it, DELETE it, and if any deletion fails, PUT each member back. Or > somesuch. All the benefit of a uniform interface, while implementing > an operation that otherwise just doesn't fit with REST. Again, this scenario may well be interesting, but it has very little to do with code on demand. The allowed interaction between user agent and server is driven by the media type. The media type dictates how to process next. Code on demand is used to augment a client when it doesn’t have enough knowledge to continue processing, due to a lack of understanding of the media type. It is perfectly acceptable for a media type to be defined that would specify the semantics by which a delete like you described is to be processed. It's up to the client to understand the media type (by having implemented a spec), and perfectly possible for those clients that support code on demand to fill in the gap in their implementation with an implementation given by the server. It all depends on your media type, and you're quite free to do things whichever way you want, and be restful for as long as the processing model doesn't itself breach REST constraints. Which goes back to my original point: from my point of view, you're not breaking a REST constraint by having composite resources and delete-on-collection semantics instead of individual URIs. Seb
Sebastien Lambla wrote: > > If I want to delete 3 orders because one credit card has been > rejected, I can either delete them sequentially (and potentially end > up in an inconsistent internal state), or group them together as > another resource (let's say ordersForCreditCardXxx) and delete that > resource as a unit. > Yes, but having the deletion of a collection trigger the deletion of all its members is a library function. Not something visible that can be counted on. Please refer to Roy's blog post, "REST APIs must be hypertext-driven"... " A REST API should never have 'typed' resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. " The key here is "behind the interface". A stored procedure happens behind the interface; a batch request is made by the client. If deleting ordersForCreditCardXxx triggers the deletion of other resources which may be individually deleted by calling their own DELETE methods, then you do _not_ have "a consistent set of semantics for all resources". You have "typed" resources that are significant to the client, i.e. unlike other resources on the system will behave a certain way. " Failure here implies that clients are assuming a resource structure due to out-of band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling. " > > The idea that somehow those scenarios are not needed, or that retry > and pray semantics of deleting multiple resources is usable in every > situation is completely alien to me. Those are real-world scenarios > where you are trying to do multiple things as a unit, not to save > network calls or bandwidth, but because they belong logically to the > same operation. > OK, atomicity, I'm not a DB jock so that doesn't naturally occur to me. 
I didn't mean to imply that those scenarios aren't valid, just that they aren't natural candidates for REST. Unless, of course, we implement Code on Demand for those scenarios so they can still use a uniform interface and be visible to intermediaries. > > Having such a composite resource you can delete on is perfectly > reasonable. Yes the caches won't see it, but that's a trade-off you > have to decide for yourself: is the cache consistency more important > than the resulting state of my resource(s). > Yes, perfectly reasonable, just like I mentioned in the thread I started about using an HTML form and a POST handler for this sort of thing. Just accept that an aspect of your API isn't REST and move on, it isn't the end of the world. The individual members of a collection have DELETE methods, and the proper way to delete a resource is to call its DELETE method. If you want the deletion of a collection to also delete every member of the collection, by calling the DELETE method of the collection, you no longer have a uniform interface, because you no longer have a consistent set of semantics for all resources, because some of your resources have DELETE semantics assigned to the DELETE method (members), while other resources have BDELETE semantics assigned to the DELETE method (collections). > > To add insult to injury, intermediaries are not required to actually > stale a representation upon receiving a DELETE. And the very nature > of proxies as they're used today means that while your proxy may well > delete the representation, mine won't, putting us exactly where we > were. > No, it doesn't put us where we were. Just because part of a standard doesn't say 'MUST' is no reason to throw our hands up in the air and proclaim that it won't ever work for anybody so why bother at all... Intermediaries SHOULD expire all representations of a resource it sees a DELETE request for. 
RFC 2119 explains SHOULD to mean, "the full implications must be understood and carefully weighed before choosing a different course." In this case, I'm not sure anyone's come up with a valid reason not to expire on DELETE, but I still don't think it merits a "MUST". By not using DELETE, instead of this mechanism most likely working, it can't possibly work. That is not where we were using DELETE. > > At the end of the day, if you value your squid cache beyond the > inherent atomicity of certain operations, you're quite free to do so. > It's a tradeoff. But, IMHO, neither approaches are unrestful, by > nature or by definition. > Don't misstate the argument. At the end of the day, I value the scalability of the uniform interface beyond the difficulty of either implementing an Xforms interface to allow a user to mark a bunch of URIs for deletion which then DELETEs each one, or implementing Code on Demand for batch deletion if at some point I find atomicity to be important to an application I'm working on. Assigning DELETE semantics to some resources' DELETE method, while assigning BDELETE semantics to other resources' DELETE method within the same application, is the antithesis of a REST API. -Eric
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...>

> Not using DELETE to delete may be allowed as a stored procedure, but it brings about none of the desirable behavior of DELETE and has no advantages other than saving what's most likely a trivial amount of network round-trips -- requests that don't even have message bodies.
>
> If a client is requesting multiple deletions, then the client needs to issue multiple DELETE requests.
>
> -Eric

"Trivial amount of network trips" -- ultimately, that's what this discussion is all about. Programmers try to reduce out-of-process calls, which is why calling DELETE (or any other method, for that matter) multiple times is somewhat appalling and leads to discussions about "batch". When you're dealing with the need for sub-second response times, the network trip may not be as trivial as we may think it is -- at least that's my experience. I still think "batch" can be solved in a RESTful manner, though.

Eb
You seems to have a clear understanding of what REST is or should be, so perhaps you had more sources of information than the rest of us. But, since you like so much quotings, where in

> In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints:
> - identification of resources;
> - manipulation of resources through representations;
> - self-descriptive messages; and,
> - hypermedia as the engine of application state

you get the idea that this is about GET, DELETE, POST or PUT? Where does ot talk about verbs?

Also, in your interpretation of the semantics of a REST-style architecture, taken then it's not all about the web, and so it's not the same as HTTP, what's the role of media-types? For what you said, there's no use to them.

I can have my server distinguish different operations for the same verb by using different media-types. Is that unrestfull?

The actual function performed by the POST method is determined by the server. Is that unrestfull also?

2009/3/20 Eric J. Bowman <eric@...>

> Sebastien Lambla wrote:
>
> > If I want to delete 3 orders because one credit card has been rejected, I can either delete them sequentially (and potentially end up in an inconsistent internal state), or group them together as another resource (let's say ordersForCreditCardXxx) and delete that resource as a unit.
>
> Yes, but having the deletion of a collection trigger the deletion of all its members is a library function. Not something visible that can be counted on. Please refer to Roy's blog post, "REST APIs must be hypertext-driven"...
>
> "A REST API should never have 'typed' resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client."
>
> The key here is "behind the interface".
A stored procedure happens > behind the interface; a batch request is made by the client. > > If deleting ordersForCreditCardXxx triggers the deletion of other > resources which may be individually deleted by calling their own DELETE > methods, then you do _not_ have "a consistent set of semantics for all > resources". You have "typed" resources that are significant to the > client, i.e. unlike other resources on the system will behave a certain > way. > > " > Failure here implies that clients are assuming a resource structure due > to out-of band information, such as a domain-specific standard, which > is the data-oriented equivalent to RPC's functional coupling. > " > > > > > The idea that somehow those scenarios are not needed, or that retry > > and pray semantics of deleting multiple resources is usable in every > > situation is completely alien to me. Those are real-world scenarios > > where you are trying to do multiple things as a unit, not to save > > network calls or bandwidth, but because they belong logically to the > > same operation. > > > > OK, atomicity, I'm not a DB jock so that doesn't naturally occur to > me. I didn't mean to imply that those scenarios aren't valid, just > that they aren't natural candidates for REST. Unless, of course, we > implement Code on Demand for those scenarios so they can still use a > uniform interface and be visible to intermediaries. > > > > > Having such a composite resource you can delete on is perfectly > > reasonable. Yes the caches won't see it, but that's a trade-off you > > have to decide for yourself: is the cache consistency more important > > than the resulting state of my resource(s). > > > > Yes, perfectly reasonable, just like I mentioned in the thread I > started about using an HTML form and a POST handler for this sort of > thing. Just accept that an aspect of your API isn't REST and move on, > it isn't the end of the world. 
> > The individual members of a collection have DELETE methods, and the > proper way to delete a resource is to call its DELETE method. > > If you want the deletion of a collection to also delete every member of > the collection, by calling the DELETE method of the collection, you no > longer have a uniform interface, because you no longer have a > consistent set of semantics for all resources, because some of your > resources have DELETE semantics assigned to the DELETE method > (members), while other resources have BDELETE semantics assigned to the > DELETE method (collections). > > > > > To add insult to injury, intermediaries are not required to actually > > stale a representation upon receiving a DELETE. And the very nature > > of proxies as they're used today means that while your proxy may well > > delete the representation, mine won't, putting us exactly where we > > were. > > > > No, it doesn't put us where we were. Just because part of a standard > doesn't say 'MUST' is no reason to throw our hands up in the air and > proclaim that it won't ever work for anybody so why bother at all... > > Intermediaries SHOULD expire all representations of a resource it sees > a DELETE request for. RFC 2119 explains SHOULD to mean, "the full > implications must be understood and carefully weighed before choosing a > different course." In this case, I'm not sure anyone's come up with a > valid reason not to expire on DELETE, but I still don't think it merits > a "MUST". > > By not using DELETE, instead of this mechanism most likely working, it > can't possibly work. That is not where we were using DELETE. > > > > > At the end of the day, if you value your squid cache beyond the > > inherent atomicity of certain operations, you're quite free to do so. > > It's a tradeoff. But, IMHO, neither approaches are unrestful, by > > nature or by definition. > > > > Don't misstate the argument. 
At the end of the day, I value the > scalability of the uniform interface beyond the difficulty of either > implementing an Xforms interface to allow a user to mark a bunch of > URIs for deletion which then DELETEs each one, or implementing Code on > Demand for batch deletion if at some point I find atomicity to be > important to an application I'm working on. > > Assigning DELETE semantics to some resources' DELETE method, while > assigning BDELETE semantics to other resources' DELETE method within > the same application, is the antithesis of a REST API. > > -Eric >
Sebastien Lambla wrote:

> Code on demand is used to augment a client when it doesn't have enough knowledge to continue processing, by a lack of understanding of the media type.

I have no idea where you're getting that definition. The definition from REST is, "REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts." So what I described is indeed just the sort of thing CoD is used for.

> It is perfectly acceptable for a media type to be defined that would specify the semantics by which a delete like you described is to be processed. It's up to the client to understand the media type (by having implemented a spec), and perfectly possible for those clients that support code on demand to fill in the gap in their implementation with an implementation given by the server.

No, it is not acceptable for a media type to redefine the semantics of a method. One must "constrain the interface to a consistent set of semantics for all resources." IOW, method semantics do not vary by media type. The DELETE method doesn't magically change semantics when it encounters a certain media type. The semantics of the DELETE method are understood by intermediaries, regardless of media type. BDELETE semantics are not visible to intermediaries.

> Which goes back to my original point: from my point of view, you're not breaking a REST constraint by having composite resources and delete-on-collection semantics instead of individual URIs.

I don't know how many more times I can repeat that there must be "a consistent set of semantics for all resources" in order to have a REST API. Each resource already has a DELETE method, which is called to delete a resource over the wire. If certain special resources have their DELETE semantics altered to mean BDELETE (by virtue of URI structure, media type, or any other differentiator) then the interface defies the very notion of "uniform".
If you want the specific constraint that's broken, it's "self-descriptive messages", since the standard DELETE method is not understood to mean BDELETE, the scope of the interaction (multiple deletes) is not visible, and the response says nothing about the cacheability of the deleted resources.

-Eric
On 20.03.2009, at 14:35, Eric J. Bowman wrote:

> If deleting ordersForCreditCardXxx triggers the deletion of other resources which may be individually deleted by calling their own DELETE methods, then you do _not_ have "a consistent set of semantics for all resources". You have "typed" resources that are significant to the client, i.e. unlike other resources on the system will behave a certain way.

I strongly and emphatically disagree. To me, it seems perfectly fine, and entirely RESTful, if resources change without any client requesting this change through the uniform interface. One reason for such a change might be a side-effect from another RESTful interaction, another might be a change in weather, another one the flow of time.

Stefan

--
Stefan Tilkov, http://www.innoq.com/blog/st/
At Fri, 20 Mar 2009 02:17:18 -0600, Eric J. Bowman wrote:

> Erik Hetzner wrote:
>
> > Yes, it would be a good idea if your application only uses POST for one thing. But I don't see why it would be 'un-RESTful' to do otherwise.
>
> Once upon a time, REST was known as the "HTTP Request Object" and it's what I used to write a CMS using Server-Side Javascript in 1998 -- my first OOP project. In OOP, one writes a separate method for each action. One doesn't write multiple actions for each method. That just isn't the OOP paradigm, nor is it the OOP-based REST paradigm.

Yes.

> In REST, each action an API allows against its resources is given its own request method. Let me turn the tables on you, and ask if you can find any support in Roy's writings for allowing multiple semantics per method? I take sentences like "constrain the interface to a consistent set of semantics for all resources" very seriously.

I don't think that you are taking the 'all resources' part of that phrase very seriously.

> If a method can have more than one action, then the method must mean a different thing based on URI or media type. Media types aren't meant to describe method semantics. If some URIs handle a method in one fashion, and other URIs in the same system handle a method in some other fashion, then there certainly isn't a "consistent set of semantics for all resources", is there?

Let me rephrase that. If it is RESTful to use POST against http://example.org/a to mean one thing, and against http://example.com/a to mean another thing, why is it not RESTful to use POST against http://example.org/b to mean that other thing?

You are quoting 'consistent set of semantics for all resources'. I don't understand why you take 'all resources' to mean all resources in an application or domain, not all resources on the web.
(The reason that I am pressing this issue is that I think you may be imagining a stronger uniformity of semantics than you would if you understood 'all resources' to mean all resources on the web, not all resources bounded by your application.)

best, Erik Hetzner
> No, it is not acceptable for a media type to redefine the semantics of a method. One must "constrain the interface to a consistent set of semantics for all resources." IOW, method semantics do not vary by media type. The DELETE method doesn't magically change semantics when it encounters a certain media type. The semantics of the DELETE method are understood by intermediaries, regardless of media type. BDELETE semantics are not visible to intermediaries.

You misread me. I do not suggest a media type REDEFINES an existing method; that would indeed break the uniform interface. I suggest the interaction model that informs the client which method to use when and against which URI *is* defined within the media type. Just like the html form tag.

> If you want the specific constraint that's broken, it's "self-descriptive messages" since the standard DELETE method is not understood to mean BDELETE, the scope of the interaction (multiple deletes) is not visible, and the response says nothing about the cacheability of the deleted resources.

You say it's broken; I still don't see how. Let's quote:

"Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries"

"REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability."

You'll notice that the self-descriptive nature comes from METHOD + media type, not just METHOD. If the intermediary doesn't understand the media type, it won't be able to apply any transformation or logic, because while the message is self-described (which means "I don't need anything but what's in the message to understand what it does"), the content of the message is part of the message.
The source of our disagreement here seems to be that you apply the constraint of self-description as having meaning even when the media type is not understood by an intermediary; I do not believe this is the case.
At Fri, 20 Mar 2009 05:28:16 -0600, Eric J. Bowman wrote: > […] > > True enough. A collection resource can be created which contains only > members 2, 5 and 9 and treats DELETE as a stored procedure to delete all > members along with the collection resource itself. No problem. But, > hypothetically, why? I still see no compelling reason to optimize > DELETE at the expense of the visibility which allows proper cache > behavior. > > […] Presumably the client can create this collection, right? So the client can therefore cause the ‘batch’ deletion of resources, without being ‘un-RESTful’? best, Erik Hetzner
> If deleting ordersForCreditCardXxx triggers the deletion of other resources which may be individually deleted by calling their own DELETE methods, then you do _not_ have "a consistent set of semantics for all resources". You have "typed" resources that are significant to the client, i.e. unlike other resources on the system will behave a certain way.
>
> "Failure here implies that clients are assuming a resource structure due to out-of-band information, such as a domain-specific standard, which is the data-oriented equivalent to RPC's functional coupling."

If a client creates a bucket resource and appends other resources to it (POST /bucket -> 201, POST /bucket/21 etc), this is not a "typed" resource. Of course there's out-of-band information; that's exactly the job of a media type. It's a prior interaction contract. It would define that you have such a thing as a collection of resources called buckets, and that if you create such a bucket and then delete it, it will delete any resource that you added to that bucket.

It doesn't make /bucket a resource type that is understood by the client in advance. It makes the case that when you want to create a new resource, and it's of type app/vnd.acme.bucket+xml, deleting said resource may delete any resource that has been added to that specific resource. *What* the resource is is not very important for the client; what the resource representation looks like (aka <form method=POST class=addToBucket action=/bucket/21>) and the associated interaction model is.

There are no classes of resources; there are representations that do let you direct the interaction model. That's exactly what html forms do: "here's a URI on which you can append data in such a way, and the following should happen when you click on the button".
What exactly would be the issue in the following:

GET /items?name=a*
SeeOther: /bucket/21

GET /bucket/21

<bucket>
  <items href="/bucket/21/items">
    <order name="ah well" href="http://orders/ah_well" />
  </items>
  <action name="deleteEntries" method="delete" action="/bucket/21/items" />
  <action name="deleteBucket" method="delete" action="/bucket/21" />
</bucket>

How the client decides which action to invoke is out-of-band information, with semantics defined in the media type. Code on demand can be used for clients that do not know about the app/vnd.acme.bucket+xml media type.

And pre-emptively, I don't believe that having an order resource living at both /orders/ah_well and /bucket/21/items/1, and being deleted from both whenever a delete is triggered on either, is unrestful, or breaks the uniform interface. It still won't make your cache happy, but as I said, this is a tradeoff.

Seb
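[A sketch of how a client might consume the hypothetical app/vnd.acme.bucket+xml representation above, discovering the available transitions from the document rather than from URI conventions. The XML is the example from the message, lightly fixed up to be well-formed; the media type, element names and URIs are the poster's invention, not a real format.]

```python
# Parse the bucket representation and extract its hypermedia actions.
# The client learns which method to use against which URI *from the
# document*, as the media-type-as-interaction-contract argument goes.

import xml.etree.ElementTree as ET

doc = """\
<bucket>
  <items href="/bucket/21/items">
    <order name="ah well" href="http://orders/ah_well"/>
  </items>
  <action name="deleteEntries" method="delete" action="/bucket/21/items"/>
  <action name="deleteBucket" method="delete" action="/bucket/21"/>
</bucket>"""

root = ET.fromstring(doc)

# name -> (HTTP method, target URI), read straight from the markup
actions = {a.get("name"): (a.get("method").upper(), a.get("action"))
           for a in root.findall("action")}

print(actions["deleteBucket"])  # ('DELETE', '/bucket/21')
```

Whether invoking `deleteBucket` may legitimately cascade to the bucket's members is exactly the point under dispute in this thread; the sketch only shows that the client needs no prior knowledge of the URI structure.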
On Thu, Mar 19, 2009 at 2:15 AM, Eric J. Bowman <eric@...> wrote:

> Assaf Arkin wrote:
>
> The submitted POST in your example resembles what application state? Can that application state be retrieved by a GET request?

Think of Web email where you get a view of your inbox, a checkbox next to each email and a button that says delete. Let's not call it delete, let's call it archive instead. The server is now telling you what the application state is: the contents of the inbox. Along with it, how to transition to a different application state by changing the contents of the inbox. It lets you checkbox specific emails and with one request archive them.

The main point of the architecture is to reduce coupling between client and server. The client and server here only agreed on a common understanding of media type and protocol; the client is merely following the happy hypermedia trail. There are no new semantics on top of, in this case, HTML or HTTP.

Assaf

> > I don't see a client forcing its will on the server, server doing actions not requested by the client, or anything beyond plain HTTP.
>
> Sure, it's HTTP, but HTTP != REST. In a uniform interface, if the client wants to delete multiple resources, then the client makes a DELETE request against each URI to be deleted. Each request generates a success/fail response which is visible to intermediaries, allowing any caches between the user who requested the delete and the origin server to expire all cached representations of a deleted resource. This is fundamental.
>
> This cannot happen when special instructions to the server are POSTed via an HTML forms interface. No intermediary can possibly surmise that any deletion has occurred. POST is borked into meaning deletion instead of its generic-interface meaning of addition. DELETE has the generic-interface meaning of deletion, but it isn't involved in the delete requests at all.
> > Your comment that the "server isn't doing any actions not requested by > the client" isn't quite right. It may look to the user like an HTML > form allowing deletion, but that isn't what the client is requesting, > because the request method isn't DELETE. So the server is, indeed, > taking action (deletion) that has nothing to do with the request method > (whatever POST means, it doesn't mean DELETE since that's its own > method). > > So, yes, deleting resources with some method other than DELETE results > in an API that does not resemble a uniform interface. Deleting > resources using the DELETE method has absolutely no downside, with the > benefit of being visible to intermediaries, as envisioned by REST. > > -Eric >
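[For concreteness, here is roughly what the checkbox-form submission being debated puts on the wire, sketched with the standard library. The paths, field names and message ids are illustrative, not from any real webmail application; the point is that the request method is POST whether the button says "archive" or "delete", and the form-encoded body is opaque to intermediaries.]

```python
# Build the single form submission an HTML checkbox interface produces.
# Only the body -- invisible to caches -- distinguishes delete from
# archive; the method line says POST either way.

from urllib.parse import urlencode

selected = ["msg-101", "msg-107"]  # the messages the user ticked
body = urlencode([("action", "delete")] + [("id", m) for m in selected])
request_line = "POST /inbox HTTP/1.1"

print(request_line)
print(body)  # action=delete&id=msg-101&id=msg-107
```

An intermediary sees one POST to /inbox; it has no standard way to infer that two other resources were just removed, which is the visibility argument Eric is making.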
On Wed, Mar 18, 2009 at 3:36 PM, Eric J. Bowman <eric@...> wrote:

> If an API doesn't implement DELETE, and also doesn't use POST for anything but deletion (single or batch), and the options are presented in an HTML form, then yes, it's a uniform interface. However, once DELETE is also implemented, or if POST is used for anything else like accepting content uploads, the interface is no longer uniform, unless and until the previous usage of POST to delete is deprecated.

I've been trying to think why this is patently wrong, and it has now occurred to me. REST is distributed, there are no boundaries; it therefore does not have the notion of an API. I can talk about an API -- the set of resources and media types I decided to document, or read about from someone else, or that we agreed to look at for the purpose of containing a discussion. But there is no cohesion unit of "API" under REST to which this litmus test can apply.

Ironically, this pursuit of "no resource may disappear unless directly DELETEd" has resulted in a litmus test which is both tightly coupled and allows non-uniformity, as seen in the context of distributed architecture.

Assaf

> The fact remains, that only the use of the DELETE method on a URI-by-URI basis is visible to intermediaries. This is the only way to prevent the user who requested the deletion from reloading the deleted content from cache. Except, of course, to not cache anything -- thereby defeating the entire premise of using REST to begin with...
>
> -Eric
> Eric J. Bowman wrote:
>
> If a method can have more than one action, then the method must mean a different thing based on URI or media type. (...) If some URIs handle a method in one fashion, and other URIs in the same system handle a method in some other fashion, then there certainly isn't a "consistent set of semantics for all resources", is there?

And RFC 2616 says:

"The actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI."

What more do we need to clarify the issue? At least we agree that HTTP is compatible with REST?
Sebastien Lambla wrote:

> If a client creates a bucket resource and appends other resources to it (POST /bucket -> 201, POST /bucket/21 etc), this is not a "typed" resource.

Right. But the semantics of POST aren't quite as set in stone as those of DELETE, either...

> Of course there's out-of-band information, that's exactly the job of a media type. It's a prior interaction contract. It would define that you have such a thing as a collection of resources called buckets, and that if you create such a bucket and then delete it, it will delete any resource that you added to that bucket.

I don't think so. It sounds like you're describing a code library shared between client and server. Media types don't define method semantics. If I call the DELETE method of a URI, the media type is irrelevant to how that request is handled. Same with GET and PUT. Different media types for PATCH don't alter what PATCH does.

> It doesn't make /bucket a resource type that is understood by the client in advance. It makes the case that when you want to create a new resource, and it's of type app/vnd.acme.bucket+xml, deleting said resource may delete any resource that has been added to that specific resource. *What* the resource is is not very important for the client; what the resource representation looks like (aka <form method=POST class=addToBucket action=/bucket/21>) and the associated interaction model is.

I've just read through a whole bunch of media-type declaration documents. I don't see any media types which define what DELETE does, or any other method. RFC 2616 defines what the PUT method can do. Atom Protocol constrains PUT to only one of those meanings. It would be an error for Atom Protocol to define PUT as a partial update -- media types may constrain the semantics of a method, but they don't define them.

> There's no classes of resources, there's representations that do let you direct the interaction model.
> That's exactly what html forms do. "here's a URI on which you can append data in such a way, and the following should happen when you click on the button".

The various HTML media types constrain form interaction to GET and POST; they don't modify the definitions of GET and POST.

> What exactly would be the issue in the following:
>
> GET /items?name=a*
> SeeOther: /bucket/21
>
> GET /bucket/21
>
> <bucket>
>   <items href="/bucket/21/items">
>     <order name="ah well" href="http://orders/ah_well" />
>   </items>
>   <action name="deleteEntries" method="delete" action="/bucket/21/items" />
>   <action name="deleteBucket" method="delete" action="/bucket/21" />
> </bucket>

There's nothing wrong with a form that instructs the client to delete two different resources. There's everything wrong if those two distinct deletions the client wants are handled in a single request.

> How the client decides which action to invoke is out-of-band information, with semantics defined in the media type. Code on demand can be used for clients that do not know about the app/vnd.acme.bucket+xml media type.

Method semantics are never set by media type.

> And pre-emptively, I don't believe that having an order resource living at both /orders/ah_well and /bucket/21/items/1, and being deleted from both whenever a delete is triggered on either, is unrestful, or breaks the uniform interface.

For the dozenth time in this thread, you're right: it isn't, and it doesn't. That's server behavior, a stored procedure, whatever you want to call it. It still comes down to what the client requested. If the client is trying to delete two separate resources, whose URIs are sent to the server as part of a single request, then you have something which doesn't begin to resemble REST. If the client requests one resource to be DELETEd, and the server deletes another resource as well, so be it -- provided that second deletion isn't somehow part of the client request.

-Eric
Sebastien Lambla wrote:

> You misread me. I do not suggest a media type REDEFINES an existing method, that would indeed break the uniform interface. I suggest the interaction model that informs the client which method to use when and against which URI *is* defined within the media type. Just like the html form tag.

True enough. A media type defines which methods to use on what URIs. But a media type can't be extended to allow DELETE to behave as BDELETE, which seems to be the gist of your suggestion.

> You'll notice that the self-descriptive nature comes from METHOD + media type, not just METHOD. If the intermediary doesn't understand the media type, it won't be able to apply any transformation or logic, because while the message is self-described (which means "I don't need anything but what's in the message to understand what it does"), the content of the message is part of the message.

Self-descriptive messages have nothing to do with the _content_ of the message, only its headers. Intermediaries don't need to understand anything about media types in order to handle caching, let alone the message content. A gateway which transforms incoming text/html into application/xhtml+xml is able to do this because the media types are understood. But it still figures out what to do on the basis of headers, not entity content.

> The source of our disagreement in here seems to be that you apply the constraint of self-description as having meaning when the media type is not understood by an intermediary, I do not believe this is the case.

I don't know where you get that, but you're putting words in my mouth. No intermediary will be able to make heads or tails out of any request that contains multiple actions (batch), because the only thing an intermediary understands is one request made against one URI -- anything else in the request dictating further action be taken won't be understood, regardless of media type.

-Eric
Erik Hetzner wrote:

> > True enough. A collection resource can be created which contains only members 2, 5 and 9 and treats DELETE as a stored procedure to delete all members along with the collection resource itself. No problem. But, hypothetically, why? I still see no compelling reason to optimize DELETE at the expense of the visibility which allows proper cache behavior.
> >
> > […]
>
> Presumably the client can create this collection, right?
>
> So the client can therefore cause the 'batch' deletion of resources, without being 'un-RESTful'?

No. If the client is creating a collection for the purpose of having the deletion of that collection delete member resources of the collection, then DELETE has the semantics of DELETE for member resources, while having the semantics of BDELETE for collection resources; but in REST you can't assign multiple semantics to a single method, because then you do not have "a consistent set of semantics for all resources".

It is simply not RESTful for a client to request more than one operation at a time. If the client wants to delete multiple resources, then it must call the DELETE method of each resource it wants to delete. Not POST a collection of resources for batch deletion in one further request. That further DELETE would be a BDELETE, where the response code doesn't indicate the success or failure of the sub-operations, and is therefore invisible to intermediaries, which won't have a clue to stale the deleted sub-resources, making it highly probable that the user who just requested a bunch of resources be deleted can still see those resources.

Which is still why DELETE is its own method: it solves these problems, with the tradeoff of increased network traffic.

-Eric
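[A toy comparison of what the wire-level responses reveal under each approach. The server, URIs and status-code choices are hypothetical; the 207 stands in for any single "batch" status that hides per-resource outcomes.]

```python
# delete_each issues one DELETE per URI, so every outcome gets its own
# self-descriptive response an intermediary can observe; batch_delete
# collapses all outcomes into one status, hiding the sub-operations.

class FakeServer:
    def __init__(self, existing):
        self.existing = set(existing)

    def delete(self, uri):
        if uri in self.existing:
            self.existing.discard(uri)
            return 200
        return 404

def delete_each(server, uris):
    """Uniform interface: list of (uri, status) pairs, one per request."""
    return [(u, server.delete(u)) for u in uris]

def batch_delete(server, uris):
    """BDELETE-style: one opaque status for the whole batch."""
    statuses = [server.delete(u) for u in uris]
    return 200 if all(s == 200 for s in statuses) else 207

srv = FakeServer({"/orders/1", "/orders/2"})
print(delete_each(srv, ["/orders/1", "/orders/9"]))
# an observer sees 200 for /orders/1 and 404 for /orders/9, per URI
```

In the batch variant the cache sees only the single aggregate status against the batch URI, which is the "invisible to intermediaries" complaint above.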
António Mota wrote: > > And rfc2616 says: > > The actual function performed by the POST method is determined by the > server and is usually dependent on the Request-URI. > > > What do we need more to clarify the issue? At least we agree that > that HTTP is compatible with REST? > HTTP is not REST. http://roy.gbiv.com/untangled/2008/specialization There are limitless un-RESTful possibilities in RFC 2616, including having the semantics of POST vary by URI. I'm not sure how the issue can be more clear than stating a REST API must "constrain the interface to a consistent set of semantics for all resources". -Eric
António Mota wrote: > > You seems to have a clear understanding of what REST is or should be, > so perhaps you had more sources of information than the rest of us. > But, since you like so much quotings, where in > Not more sources, just more experience -- eleven years of it, there are no shortcuts. > > In order to obtain a uniform interface, multiple architectural > constraints > > are needed to guide the behavior of components. REST is defined by > > four interface constraints: > > - identification of resources; > > - manipulation of resources through representations; > > - self-descriptive messages; and, > > - hypermedia as the engine of application state > > > > you get the idea that this is about GET, DELETE, POST or PUT? Where > does ot talk about verbs? > I'm not sure what "ot" means. A self-descriptive request includes a request method. You can't have a request without one. An intermediary SHOULD mark as expired, any resource for which it has seen a DELETE request, regardless of content-type or cache headers or anything else. The reason an intermediary knows this, is because a DELETE method is self-descriptive, by virtue of the request method. > > Also, in your interpretation of the semantics of a REST-style > architecture, taken then it's not all about the web, and so it's not > the same as HTTP, what's the role of media-types? For what you said, > there's no use to them. > Don't misunderstand the purpose of a media type. Without setting Content-type: application/xhtml+xml, none of my XHTML 1.1 content would trigger the XML parser in browsers, but would instead be rendered as HTML. The application/xhtml+xml media type informs clients, servers and intermediaries that the content of the entity is HTML as XML. A media type of application/jpg tells a client to use its JPEG library to render the image. 
If a server only outputs PNG, then the presence of image/jpeg in the request message lets the application know that it needs to transform the image and save it as a PNG. You don't need HTTP to have a media type; in fact, media types are derived from the MIME types used by Internet mail. > > I can have my server distinguish different operations for the same > verb by using different media-types. Is that unrestfull? > Absolutely, beyond any shadow of a doubt. If you need different operations, you use different methods. If you have one method that means one thing for some resources, and something else for other resources, then you do not have "a consistent set of semantics for all resources" -- you have semantics that vary by resource depending on media type. This is expressly forbidden, and not anywhere near as controversial as the pushback against it in this thread would indicate. :-) > > The actual function performed by the POST method is determined by the > server. Is that unrestfull also? > No, the semantics of POST are pretty loose. In REST, you describe a uniform interface for your resources. That means you pick one meaning for POST for your API and stick with it for all your resources. That meaning mustn't already be covered by another method -- POST can never mean DELETE, because the semantics of deletion are already assigned to the DELETE method. -Eric
Assaf Arkin wrote: > > > The submitted POST in your example resembles what application state? > > Can that application state be retrieved by a GET request? > > > Think of Web email where you get a view of your inbox, a checkbox > next to each email and a button that says delete. Let's not call it > delete, let's call it archive instead. The server is now telling you > what the application state is: the contents of the inbox. Along with > it, how to transition to a different application state by changing > the contents of the inbox. It lets you checkbox specific emails and > with one request archive them. > What language is this form written in? If it's in HTML, then you can only use GET or POST, in which case deleting a message would use POST just like archiving a message would use POST, but you can't have two methods doing the same thing (POST and DELETE), or one method which does different things for different resources (POST). Of course, using XForms makes such an interface possible without violating REST, if POST archives and DELETE deletes, because XForms 1.1 allows any method to be used. > > The main point of the architecture is to reduce coupling between > client and server. The client and server here only agreed on a common > understanding of media type and protocol, client is merely following > the happy hypermedia trail. There are no new semantics on, in this > case, HTML or HTTP. > No, HEAS isn't the main point of REST, it's one of the four constraints which make up the uniform connector interface. Trust me, there are millions of web applications out there that don't resemble REST even if they are using HEAS and even request methods properly. Remember the other year, when Google released its Web Accelerator? Various sites had content deleted by virtue of Google's app following "delete this" links implemented with GET. 
That was pure HEAS -- the bot followed the links it found, the client and server agreed on the meaning of text/html media types, and no new semantics were introduced (only misapplied to the GET method). Of course, since the method was GET not DELETE, the bot authors had no idea that their product would wreak such havoc, but that's because the majority of the Web isn't RESTful. -Eric
Assaf Arkin wrote: > > > > > If an API doesn't implement DELETE, and also doesn't use POST for > > anything but deletion (single or batch), and the options are > > presented in an HTML form then yes, it's a uniform interface. > > However, once DELETE is also implemented, or if POST is used for > > anything else like accepting content uploads, the interface is no > > longer uniform, unless and until the previous usage of POST to > > delete is deprecated. > > > I've been trying to think why this is patently wrong, and it now > occurred to me. REST is distributed, there are no boundaries, it > therefore does not have the notion of an API. > If you're saying that I'm patently wrong, then I disagree, but that isn't clear. REST is an architectural style, like "mansion", not the house itself. REST has no concept of 1600 Pennsylvania Ave, even though that address is a mansion. We can talk about a RESTful API implementation, but we can't use that API implementation as a definition of REST, any more than we can state that any mansion that doesn't look like the White House must not be a mansion. > > I can talk about an API, the set of resources and media types I > decided to document, or read about someone else, or for the purpose > of containing a discussion we agreed to look at. But there is no > cohesion unit of API under REST to which this litmus test can apply. > Right, REST is not a protocol, merely a design pattern. > > Ironically this pursuit of "no resource may disappear unless directly > DELETEed" has resulted in this litmus test which is both tightly > coupled and allowing non-uniformity, as seen in the context of > distributed architecture. > This is not what I have been saying. The _client_ can't make any resource disappear unless it calls the DELETE method on that resource. The server can do (almost) whatever it wants. But the client is constrained to requesting one action against one URI at a time. -Eric
So, REST is about "the common case of the Web", but it's not about HTTP!!!! Go figure. The link you provide makes my point that REST is applicable to a protocol-agnostic architecture, as I've said several times and as I've implemented in my small part of the world. In my RESTish architecture I have an HTTP connector but also JMS, IMAP, intra-VM and JCR connectors, with others expected... All connecting to a bunch of resources in the most RESTish style I could manage. What Mr. Fielding is saying, in my interpretation, is that REST is at an abstraction level higher than that of HTTP. But you somehow turn this around to prove your point that a thing and its opposite are both true. POST is to be interpreted by the server, usually depending on the URI. All methods, POST included, should mean the same to all resources. These are antagonistic statements. And now you say that HTTP is not RESTful.... Discussing with you is like talking to a wall, or worse. There's no point, it's a waste of time. I'm sorry for ever having engaged in this discussion... I don't like to appear to have something like a "superior attitude" and leave a discussion like this, but it's really a waste of time, because you can argue a thing and its opposite at the same time, and there can't be any discussion on those grounds. 2009/3/21 Eric J. Bowman <eric@...> > António Mota wrote: > > > > > And rfc2616 says: > > > > The actual function performed by the POST method is determined by the > > server and is usually dependent on the Request-URI. > > > > > > What do we need more to clarify the issue? At least we agree that > > that HTTP is compatible with REST? > > > > HTTP is not REST. > > http://roy.gbiv.com/untangled/2008/specialization > > There are limitless un-RESTful possibilities in RFC 2616, including > having the semantics of POST vary by URI. > > I'm not sure how the issue can be more clear than stating a REST API > must "constrain the interface to a consistent set of semantics for all > resources". > > -Eric >
2009/3/21 Eric J. Bowman <eric@...> > António Mota wrote: > > > > > > you get the idea that this is about GET, DELETE, POST or PUT? Where > > does ot talk about verbs? > > > > I'm not sure what "ot" means. > > Gee, I'm so sorry, it was "it" instead of "ot". What a mistake I've made!!! Oh well, I think this just proves that you're really intellectually superior to people like me who are capable of such disgusting errors. I sincerely apologise for this.
2009/3/21 Eric J. Bowman <eric@...> > António Mota wrote: > > > > > And rfc2616 says: > > > > The actual function performed by the POST method is determined by the > > server and is usually dependent on the Request-URI. > > > > > > What do we need more to clarify the issue? At least we agree that > > that HTTP is compatible with REST? > > > > HTTP is not REST. > I did not say HTTP *is* REST, I asked if you agree that HTTP is compatible with REST. compatible != is. But I think you understood that...
Maybe it is a good idea to start the discussion in a new thread. I did some homework by reading RFC 2616. On page 56, section 9.7, it reads "If the request passes through a cache and the Request-URI identifies one or more currently cached entities, those entries SHOULD be treated as stale." So how can the intermediary figure out the currently cached entities identified by the Request-URI? On page 97, section 13.10, it reads "The effect of certain methods performed on a resource at the origin server might cause one or more existing cache entries to become non- transparently invalid. That is, although they might continue to be "fresh," they do not accurately reflect what the origin server would return for a new request on that resource." I think this means it is common for an update or delete request to yield "non-transparent" invalidation of one or more existing cache entries. Further on page 98 "There is no way for the HTTP protocol to guarantee that all such cache entries are marked invalid." and "Some HTTP methods MUST cause a cache to invalidate an entity. This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers (if present)." Can we put many Content-Location headers in a DELETE request? "In order to prevent denial of service attacks, an invalidation based on the URI in a Location or Content-Location header MUST only be performed if the host part is the same as in the Request-URI. " Then how about invalidating the caches after a "batch DELETE" as this "off-the-wall" approach? http://tech.groups.yahoo.com/group/rest-discuss/message/12280 Suppose we have a URI that identifies a collection of resources 1. the client sends DELETE URI to the server. The client knows what the URI refers to. 2. the server replies with a 200 or 202 with a representation that will send several DELETEs to the server, each of which is for a member in the collection.
At Sat, 21 Mar 2009 13:12:12 -0600, Eric J. Bowman wrote: > > Erik Hetzner wrote: > > Presumably the client can create this collection, right? > > > > So the client can therefore cause the ‘batch’ deletion of resources, > > without being ‘un-RESTful’? > > > > No. If the client is creating a collection for the purpose of having > the deletion of that collection delete member resources of the > collection, then DELETE has the semantics of DELETE for member > resources, while having the semantics of BDELETE for collection > resources, but in REST you can't assign multiple semantics to a single > method, because then you do not have "a consistent set of semantics for > all resources". Deleting a bag of bagels has the same semantics as deleting a bagel; it simply doesn’t matter that the bagels are individually addressable as well as being part of a bag. best, Erik Hetzner
António Mota wrote: > > POST is to be interpreted by the server, usually depending of the > URI. All methods, POST including, should mean the same to all > resources. This are antagonic statements. And now you say that HTTP > it's not RESTfull.... > HTTP is not REST. RESTful architectures can be built with HTTP, just like non-RESTful architectures can be built with HTTP. Don't take this out on me personally if you don't understand it. POST can mean what your API needs it to mean, so long as that's what it means for all resources in your API. -Eric
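Eric's constraint can be made concrete with a small sketch. Everything below is hypothetical (the class and resource names are invented, not from the thread): POST is given exactly one meaning -- "append a subordinate to the collection" -- for every resource in the API, while deletion semantics live only in DELETE.

```python
# Illustrative sketch of a uniform interface: POST means the same thing
# for ALL resources, and never overlaps with the semantics of DELETE.
class UniformApi:
    def __init__(self):
        # each resource is a named collection of representations
        self.resources = {"/people": [], "/accounts": []}

    def post(self, uri, representation):
        # POST always means "append to the collection", returning the URI
        # of the newly created subordinate resource (cf. 201 Created)
        if uri not in self.resources:
            return 404, None
        collection = self.resources[uri]
        collection.append(representation)
        return 201, f"{uri}/{len(collection) - 1}"

    def delete(self, uri):
        # deletion semantics are assigned only to DELETE, never to POST
        base, _, key = uri.rpartition("/")
        if base in self.resources and key.isdigit():
            self.resources[base][int(key)] = None
            return 200
        return 404

api = UniformApi()
status, location = api.post("/people", {"firstName": "TONINHO"})
```

The point of the sketch is only the constraint itself: a client that learns what POST means for one resource in this API has learned what it means for all of them.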
António Mota wrote: > > > António Mota wrote: > > > > > > > > > > you get the idea that this is about GET, DELETE, POST or PUT? > > > Where does ot talk about verbs? > > > > > > > I'm not sure what "ot" means. > > > > > Gee, I'm so sorry, it was "it" instead of "ot". What a mistake I've > made!!! Oh well, I think this just proves that you're really > intellectually superior to people like me that is capable of such > disgusting errors. I sincerely apologise for this. > Don't be like that, guy. I thought it might stand for "original topic" or something, so I pointed out that I might not be answering what it is you asked. -Eric
António Mota wrote: > > I did not say HTTP *is* REST, I asked if you agree that HTTP is > compatible with REST. compatible != is. But I think you understood > that... > You were implying that not violating HTTP means the result is REST, but this simply isn't the case. Please don't make accusations that I'm trying to make you look foolish, when in reality I'm only doing my best to help. -Eric
Erik Hetzner wrote: > > > No. If the client is creating a collection for the purpose of > > having the deletion of that collection delete member resources of > > the collection, then DELETE has the semantics of DELETE for member > > resources, while having the semantics of BDELETE for collection > > resources, but in REST you can't assign multiple semantics to a > > single method, because then you do not have "a consistent set of > > semantics for all resources". > > Deleting a bag of bagels has the same semantics as deleting a bagel; > it simply doesn’t matter that the bagels are individually addressable > as well as being part of a bag. > No, throwing a bag of bagels in the trash isn't the same semantic as eating a bagel. If DELETE had batch-delete semantics, there wouldn't be any need for the BDELETE method. -Eric
Dong Liu wrote: > > On page 56, section 9.7, it reads > "If the request passes through a cache and the Request-URI identifies > one or more currently cached entities, those entries SHOULD be > treated as stale." > > So how can the intermediary figure out the currently cached entities > identified by the Request-URI? > Because those cached representations all have the same URI as the URI of the DELETE request. They may have different Content-Locations. > > On page 97, section 13.10, it reads > "The effect of certain methods performed on a resource at the origin > server might cause one or more existing cache entries to become non- > transparently invalid. That is, although they might continue to be > "fresh," they do not accurately reflect what the origin server would > return for a new request on that resource." > > I think this means it is common for an update or delete request > yields "non-transparently" invalidation of one or more existing cache > entries. > Of course. If I send a DELETE request to my server, the only caches that should stale that resource are the ones between me and my server. The rest of the Internet is oblivious to the DELETE. Eventually, the deleted resource will expire. If this is a problem, set stricter cache-control headers. > > Further on page 98 > "There is no way for the HTTP protocol to guarantee that all such > cache entries are marked invalid." > Caches aren't always connected to the Internet, or may have very expensive Internet connections (the hotspot on Mt. Everest), or some other reason not to check with the origin server at all before serving a cached representation. So no, there is no way to guarantee expiration, this is the essence of "anarchic scalability" -- you cede a bit of control over your resources to the world-at-large with zero control over what happens as a result. > > "Some HTTP methods MUST cause a cache to invalidate an entity. 
This > is either the entity referred to by the Request-URI, or by the > Location or Content-Location headers (if present)." > > Can we put many Content-Location headers in a DELETE request? > A Content-Location header indicates content negotiation. It is not a mechanism for piggybacking additional actions in one client request. If I have a resource /image which serves image.gif or image.png depending on client capability, and I PUT a new image.gif to /image, what a cache should expire is image.gif not /image. A Location header indicates a redirect. It instructs the client to re- try its request at a different URI. You can't, by virtue of PUT or DELETE, remove that redirection -- the PUT or DELETE request itself gets redirected to the proper location. So the invalidation must affect the resource identified by the Location header, NOT the redirect itself. > > "In order to prevent denial of service attacks, an invalidation based > on the URI in a Location or Content-Location header MUST only be > performed if the host part is the same as in the Request-URI. " > This is just common sense, sending a DELETE request to one host in hopes of deleting a resource on some other host would be a real problem, due to the number of malicious deviant savages out there. > > Then how about invalidating the caches after a "batch DELETE" as this > "off-the-wall" approach? > http://tech.groups.yahoo.com/group/rest-discuss/message/12280 > I'll come back and answer this part later, I've been getting this from Yahoo: "The group rest-discuss is temporarily unavailable" Which comes as no surprise, since in recent days my posts to this list have been taking hours to show up on the web or in my inbox. -1 > > Suppose we have a URI that identifies a collection of resources > > 1. the client sends DELETE URI to the server. The client knows what > the URI refers to. > > 2. 
the server reply with a 200 or 202 with a representation that will > send several DELETE's to the server each of which is for a member in > the collection. > Then your first DELETE has the exact same semantics as GET, instead of meaning deletion. There is nothing wrong with a server sending an XForms representation to the client with a bunch of URIs the user can select for deletion, whereupon the client will send a discrete DELETE request to each URI at the touch of a button. But, you must GET that form. -Eric
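The pattern Eric describes -- GET a form listing member URIs, then issue one discrete DELETE per selected member -- can be sketched as follows. The in-memory "server" and all names here are invented for illustration; a real client would speak HTTP.

```python
# Sketch: batch deletion expressed as one GET plus N discrete DELETEs,
# so each request keeps the uniform semantics of its method.
def batch_delete_via_discrete_requests(server, collection_uri, selected):
    # step 1: GET the collection representation (the "form")
    member_uris = server.get(collection_uri)
    # step 2: one DELETE per selected member; any intermediary cache
    # sees a plain DELETE on each URI and can stale the right entry
    deleted = []
    for uri in member_uris:
        if uri in selected:
            server.delete(uri)
            deleted.append(uri)
    return deleted

class ToyServer:
    def __init__(self, members):
        self.members = dict(members)  # uri -> representation

    def get(self, collection_uri):
        # the "form": a list of member URIs under the collection
        return [u for u in self.members if u.startswith(collection_uri + "/")]

    def delete(self, uri):
        self.members.pop(uri, None)

server = ToyServer({"/inbox/1": "a", "/inbox/2": "b", "/inbox/3": "c"})
gone = batch_delete_via_discrete_requests(server, "/inbox", {"/inbox/1", "/inbox/3"})
```

Nothing in this sketch gives DELETE collection-level semantics; the client simply calls one method against one URI at a time, which is the constraint under discussion.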
Dong Liu wrote: > > Then how about invalidating the caches after a "batch DELETE" as this > "off-the-wall" approach? > http://tech.groups.yahoo.com/group/rest-discuss/message/12280 > " An off-the-wall suggestion: Since DELETE is idempotent, how about sending the separate DELETE commands to the server after the batch delete anyway, for the sheer purpose of invalidating the intermediate caches? " Because it isn't "a consistent set of semantics for all resources". DELETE means DELETE on some resources, but it means BDELETE on other resources. The client is attempting to send the server multiple instructions in one request, while REST constrains the client to call one method against one URI at a time. -Eric
I haven't been following the discussion, but if you want a single request to invalidate multiple cached entities, I've been working on that as part of my day job; it motivated the set of patches to Squid we funded a little while back (see commits to squid2-HEAD by Benno a few months back). In a nutshell, we've architected it so that a PUT, POST or DELETE (i.e., anything that triggers an invalidation by side effect as per 2616) will not only invalidate the resources identified by the request- URI, Location, and Content-Location, but they will also invalidate: 1) Resources pointed to by the PUT/POST/DELETE response with a Link header that has an "invalidates" relation; e.g., Link: </foo>; rel="invalidates" in a response to POST /bar will invalidate /foo. 2) Cached responses that have a Link header with an "invalidated-by" relation that points to the URI that has been PUT/POST/DELETEd; e.g., Link: </baz>; rel="invalidated-by" in a cached response will make that cached response invalid when /baz is POST/PUT/DELETEd to. The second case is the more interesting, because you can have search results (for example) contain an identifier -- even one that doesn't exist -- that will trigger many things becoming invalid when a single resource becomes invalid. There are a number of limitations and caveats here, of course -- chiefly that it's not a reliable mechanism in every case, and that caches that aren't aware of these extensions will of course not implement them. However, for some cases -- especially accelerator caches -- they can be useful for increasing cache efficiency without sacrificing control of your resources. I hope to Open Source the helper process that implements all of this; the changes to Squid were merely getting it conformant with the RFC, and putting in a few hooks to enable this to work. Cheers, On 21/03/2009, at 4:54 PM, Dong Liu wrote: > Maybe it is a good idea to start the discuss in a new thread. > > I did some homework by reading RFC 2616. 
> > On page 56, section 9.7, it reads > "If the request passes through a cache and the Request-URI > identifies one or more currently cached entities, those entries > SHOULD be treated as stale." > > So how can the intermediary figure out the currently cached entities > identified by the Request-URI? > > On page 97, section 13.10, it reads > "The effect of certain methods performed on a resource at the origin > server might cause one or more existing cache entries to become non- > transparently invalid. That is, although they might continue to be > "fresh," they do not accurately reflect what the origin server would > return for a new request on that resource." > > I think this means it is common for an update or delete request > yields "non-transparently" invalidation of one or more existing > cache entries. > > Further one page 98 > "There is no way for the HTTP protocol to guarantee that all such > cache entries are marked invalid." > > and > > "Some HTTP methods MUST cause a cache to invalidate an entity. This > is either the entity referred to by the Request-URI, or by the > Location or Content-Location headers (if present)." > > Can we put many Content-Location headers in a DELETE request? > > "In order to prevent denial of service attacks, an invalidation > based on the URI in a Location or Content-Location header MUST only > be performed if the host part is the same as in the Request-URI. " > > Then how about invalidating the caches after a "batch DELETE" as > this "off-the-wall" approach? http://tech.groups.yahoo.com/group/rest-discuss/message/12280 > > Suppose we have a URI that identifies a collection of resources > > 1. the client sends DELETE URI to the server. The client knows what > the URI refers to. > > 2. the server reply with a 200 or 202 with a representation that > will send several DELETE's to the server each of which is for a > member in the collection. > > > > > > > > > ------------------------------------ > > Yahoo! 
Groups Links > > > -- Mark Nottingham http://www.mnot.net/
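Mark's two Link-relation rules can be sketched as a toy cache extension. This is purely illustrative -- it is not the Squid patch he describes; the class and function names are invented, and the Link parsing is deliberately simplified compared to the full header syntax.

```python
import re

# Toy model of the two invalidation rules:
#   1) a response to an unsafe method may carry rel="invalidates" links
#      naming other resources to stale;
#   2) a cached entry may carry rel="invalidated-by" links naming URIs
#      whose modification should stale this entry.
LINK_RE = re.compile(r'<([^>]+)>\s*;\s*rel="([^"]+)"')

def parse_links(header_value):
    return LINK_RE.findall(header_value or "")

class LinkAwareCache:
    def __init__(self):
        self.entries = {}  # uri -> {"body": ..., "links": [...], "fresh": bool}

    def store(self, uri, body, link_header=""):
        self.entries[uri] = {"body": body,
                             "links": parse_links(link_header),
                             "fresh": True}

    def on_unsafe_request(self, method, request_uri, response_link_header=""):
        # baseline RFC 2616 rule: stale the Request-URI itself
        targets = {request_uri}
        # rule 1: the response declares "this invalidates /foo"
        for uri, rel in parse_links(response_link_header):
            if rel == "invalidates":
                targets.add(uri)
        for uri in targets:
            if uri in self.entries:
                self.entries[uri]["fresh"] = False
        # rule 2: cached entries declaring "invalidated-by <request_uri>"
        for entry in self.entries.values():
            for target, rel in entry["links"]:
                if rel == "invalidated-by" and target == request_uri:
                    entry["fresh"] = False

cache = LinkAwareCache()
cache.store("/search?q=bagel", "results", '</bagels/3>; rel="invalidated-by"')
cache.store("/foo", "foo-body")
cache.on_unsafe_request("POST", "/bagels/3", '</foo>; rel="invalidates"')
```

Rule 2 is the interesting one, as Mark notes: the search-results entry above is staled by a POST to `/bagels/3` even though that URI was never the subject of a cached response itself.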
Mark Nottingham wrote: > > I haven't been following the discussion, but if you want a single > request to invalidate multiple cached entities, I've been working on > that as part of my day job; it motivated the set of patches to Squid > we funded a little while back (see commits to squid2-HEAD by Benno a > few months back). > Interesting. I like it. Another example relative to this and other recent discussions here would be: DELETE /foo 200 OK Link: </bar>; rel="delete" Such that deleting one resource triggers a cascade-deletion of others. But, I don't like that, not that I think that's what you were suggesting. This comment is meant preemptively. :-) -Eric
Mike Amundsen sent me this link... http://www.xent.com/pipermail/fork/2001-August/003191.html ...and asked if it changed my position that COPY and MOVE are RPC calls, and not RESTful. Well, yes, it does. Instead of saying it can't be done, I'm now saying I'll believe it when I see it... " Think of it another way. The URI namespace consists of a hierarchy of names (collections). COPY and MOVE semantics are not really operations on the target of the COPY and MOVE -- in fact, they are operations on the parent collections that "own" the origin and destination namespaces. The set of names within a collection are the state of that collection as a resource. If you want to make these operations more REST-like, then define a suitable representation of a collection such that each namespace can be GET-retrieved, manipulated at the user agent, and then have the result of those manipulations communicated to the two namespaces in such a way that they can achieve the new state, barring conflicts, without unnecessary transfers across the network. This one isn't easy. " ...but honestly, Roy lost me there. I've been mulling this over for a couple of days and I'm not seeing it. Anyone? -Eric
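One hedged reading of Roy's suggestion, sketched below with entirely invented names: treat each collection's set of member names as transferable resource state, so MOVE becomes "GET both namespaces, manipulate them at the user agent, then PUT the new state of each back" -- no MOVE method required.

```python
# Sketch: a "move" performed purely through GET and PUT on the two parent
# collections, whose state is their set of member names.
def restful_move(server, name, src_collection, dst_collection):
    src = set(server.get(src_collection))   # GET current namespace state
    dst = set(server.get(dst_collection))
    if name not in src:
        return False
    src.discard(name)                        # manipulate at the user agent
    dst.add(name)
    server.put(src_collection, src)          # transfer new state back
    server.put(dst_collection, dst)
    return True

class ToyStore:
    def __init__(self, data):
        self.data = {k: set(v) for k, v in data.items()}

    def get(self, uri):
        return self.data[uri]

    def put(self, uri, state):
        self.data[uri] = set(state)

store = ToyStore({"/a/": {"x", "y"}, "/b/": set()})
moved = restful_move(store, "x", "/a/", "/b/")
```

This sidesteps Roy's "barring conflicts" caveat entirely -- two PUTs are not atomic, and a real design would need conditional requests or some other conflict handling, which is presumably why he says "this one isn't easy".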
"HTTP is not REST." - I already told that nobody here said that, but the question you didn't answer is: Is HTTP RESTfull? Does HTTP obey the set of constraints that REST define to obtain a uniform interface? POST in HTTP can be used for whatever the server wants, as per rfc2616, right? HTTP is used as the underlying protocol of the web, right? (...) "common case of the Web" that REST is designed for. This last one is a quote from you. Now, - REST is designed for the the common case of the web. - in the common case of the web the actual function performed by the POST method is determined by the server and is usually dependent on the Request-URI - but in REST, so you say, POST means the same for all resources, is not dependent on the Request-URI - but that's not the case of the web - but REST is designed for the Web - so REST, that was designed for the common case of the web, and the web, from which REST was designed for, are incompatible... REST is designed for the the common case of the web, but REST is not designed like the common case of the web. Is that your opinion? You're not trying to help nobody, you are just too stubborn to realise that you've been contradicting yourself all the way . It's just a waste of time to continue this debate... 2009/3/22 Eric J. Bowman <eric@...> > António Mota wrote: > > > > > POST is to be interpreted by the server, usually depending of the > > URI. All methods, POST including, should mean the same to all > > resources. This are antagonic statements. And now you say that HTTP > > it's not RESTfull.... > > > > HTTP is not REST. RESTful architectures can be built with HTTP, just > like non-RESTful architectures can be built with HTTP. Don't take this > out on me personally if you don't understand it. > > POST can mean what your API needs it to mean, so long as that's what it > means for all resources in your API. > > -Eric >
I was not attacking you over anything, sorry if I gave that impression. What I was trying to say is that you don't answer the questions directly; instead you "twist" them to fit the answers that are convenient to you. And I wasn't implying what you said I was implying; what I'm saying is that HTTP obeys the set of constraints that REST defines to obtain a uniform interface. 2009/3/22 Eric J. Bowman <eric@...> > António Mota wrote: > > > > > I did not say HTTP *is* REST, I asked if you agree that HTTP is > > compatible with REST. compatible != is. But I think you understood > > that... > > > > You were implying that not violating HTTP means the result is REST, but > this simply isn't the case. Please don't make accusations that I'm > trying to make you look foolish, when in reality I'm only doing my best > to help. > > -Eric >
Thanks for the reply, Mark. The first person I thought might help was you, when I wondered how the caching controls of HTTP are *implemented*. Cheers, Dong On Sun, Mar 22, 2009 at 1:44 AM, Mark Nottingham <mnot@...> wrote: > I haven't been following the discussion, but if you want a single request to > invalidate multiple cached entities, I've been working on that as part of my > day job; it motivated the set of patches to Squid we funded a little while > back (see commits to squid2-HEAD by Benno a few months back). > > In a nutshell, we've architected it so that a PUT, POST or DELETE (i.e., > anything that triggers an invalidation by side effect as per 2616) will not > only invalidate the resources identified by the request-URI, Location, and > Content-Location, but they will also invalidate: > > 1) Resources pointed to by the PUT/POST/DELETE response with a Link header > that has an "invalidates" relation; e.g., > Link: </foo>; rel="invalidates" > in a response to POST /bar will invalidate /foo. > > 2) Cached responses that have a Link header with an "invalidated-by" > relation that points to the URI that has been PUT/POST/DELETEd; e.g., > Link: </baz>; rel="invalidated-by" > in a cached response will make that cached response invalid when /baz is > POST/PUT/DELETEd to. > > The second case is the more interesting, because you can have search results > (for example) contain an identifier -- even one that doesn't exist -- that > will trigger many things becoming invalid when a single resource becomes > invalid. > > There are a number of limitations and caveats here, of course -- chiefly > that it's not a reliable mechanism in every case, and that caches that > aren't aware of these extensions will of course not implement them. However, > for some cases -- especially accelerator caches -- they can be useful for > increasing cache efficiency without sacrificing control of your resources. 
> > I hope to Open Source the helper process that implements all of this; the > changes to Squid were merely getting it conformant with the RFC, and putting > in a few hooks to enable this to work. > > Cheers, > > > On 21/03/2009, at 4:54 PM, Dong Liu wrote: > >> Maybe it is a good idea to start the discuss in a new thread. >> >> I did some homework by reading RFC 2616. >> >> On page 56, section 9.7, it reads >> "If the request passes through a cache and the Request-URI identifies one >> or more currently cached entities, those entries SHOULD be treated as >> stale." >> >> So how can the intermediary figure out the currently cached entities >> identified by the Request-URI? >> >> On page 97, section 13.10, it reads >> "The effect of certain methods performed on a resource at the origin >> server might cause one or more existing cache entries to become non- >> transparently invalid. That is, although they might continue to be "fresh," >> they do not accurately reflect what the origin server would return for a new >> request on that resource." >> >> I think this means it is common for an update or delete request yields >> "non-transparently" invalidation of one or more existing cache entries. >> >> Further one page 98 >> "There is no way for the HTTP protocol to guarantee that all such cache >> entries are marked invalid." >> >> and >> >> "Some HTTP methods MUST cause a cache to invalidate an entity. This is >> either the entity referred to by the Request-URI, or by the Location or >> Content-Location headers (if present)." >> >> Can we put many Content-Location headers in a DELETE request? >> >> "In order to prevent denial of service attacks, an invalidation based on >> the URI in a Location or Content-Location header MUST only be performed if >> the host part is the same as in the Request-URI. " >> >> Then how about invalidating the caches after a "batch DELETE" as this >> "off-the-wall" approach? 
>> http://tech.groups.yahoo.com/group/rest-discuss/message/12280 >> >> Suppose we have a URI that identifies a collection of resources >> >> 1. the client sends DELETE URI to the server. The client knows what the >> URI refers to. >> >> 2. the server reply with a 200 or 202 with a representation that will send >> several DELETE's to the server each of which is for a member in the >> collection. >> >> >> >> >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > > -- > Mark Nottingham http://www.mnot.net/ > > -- http://dongnotes.blogspot.com/
Atom has defined both feed and entry resources. A feed document can aggregate multiple entries together -- indeed this is the purpose of its existence (to aggregate, order, and potentially filter). A GET on an individual entry resource gives the entry's Atom representation of course. A GET on a related feed gets that entry, plus its 'sibling' entries. This is all based on the definition of the feed itself of course. Question: Is this fundamentally different from the DELETE case below? If so, how? Eric J. Bowman wrote: > Erik Hetzner wrote: > > >>> No. If the client is creating a collection for the purpose of >>> having the deletion of that collection delete member resources of >>> the collection, then DELETE has the semantics of DELETE for member >>> resources, while having the semantics of BDELETE for collection >>> resources, but in REST you can't assign multiple semantics to a >>> single method, because then you do not have "a consistent set of >>> semantics for all resources". >>> >> Deleting a bag of bagels has the same semantics as deleting a bagel; >> it simply doesn’t matter that the bagels are individually addressable >> as well as being part of a bag. >> >> > > No, throwing a bag of bagels in the trash isn't the same semantic as > eating a bagel. If DELETE had batch-delete semantics, there wouldn't > be any need for the BDELETE method. > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On 22 March 2009 at 05:34, Eric J. Bowman wrote:
> Erik Hetzner wrote:
> > Deleting a bag of bagels has the same semantics as deleting a bagel;
> > it simply doesn’t matter that the bagels are individually addressable
> > as well as being part of a bag.
>
> No, throwing a bag of bagels in the trash isn't the same semantic as
> eating a bagel. If DELETE had batch-delete semantics, there wouldn't
> be any need for the BDELETE method.

Ok, up to a point there I actually sort of enjoyed your insistence on your point of view. But this really is too much :)

I agree that creating your own BDELETE, or using POST as a BDELETE, wouldn't be that RESTful, since it implies having a "method endpoint" on the server side. But DELETE on two addresses identifying two of my resources means just that in HTTP. What that DELETE means in my resource model on the server is not up to you to decide, but you can be told it through our common out-of-band understanding of the resource representations and interface. Of course, for this to work we both need to understand and implement the meaning of the representation types and the constrained interface.

If you don't give the bag of bagels an address, then you will find you have a problem trying to delete it through "RESTful" HTTP. So if it's significant in your resource model, give it an address (or more). It is possible to model this RESTfully over HTTP; removing resources at other addresses as a response to DELETE is *NOT* in itself an "unRESTful" use of HTTP. Don't mix the two cases together.

Deleting the bag of bagels does give you a practical set of things to be aware of concerning the efficiency of caching. But building your client in such a way as to in any way require that an intermediate cache (not part of the client) sees a concrete delete on the resources deleted wouldn't be very RESTful. The constraints of REST are designed for loose coupling, so if you need this kind of tight coupling, don't use REST.
That does not mean that an intermediate cache shouldn't optimise when possible, and that you might want to leverage that optimisation in your app. But you can never require it. If you really need it, make sure the responses aren't cached ...or use something other than REST. Jo
One of the things I've been reminded this week (after doing some
research in related areas of REST and HTTP) is that Fielding maintains
the notion that a resource should not be equated to a file object.
This seems key to his notion that much of WebDAV misses the mark - too
much equating of resources to file objects going on. In the case of
MOVE and COPY (words so closely tied in most brains to file-type
actions), it's easy to forget this aspect of the REST architectural
model.
Also, his view that *state* is what is transferred (not file-objects,
or documents, etc.) is essential to so much of what makes REST
well-fitted for heterogeneous networks. Therefore, the idea that we
can define a payload that contains instructions to a server on what
actions to take ("Server-A, this is client-B calling. Please delete
the following items from the collection") runs counter to the notion
of "state transfer." I think this is a primary reason that folks
trying to understand REST get tied up a bit when grappling with things
such as BATCH and other collection-based actions. There is an
additional abstraction (the resource itself) that is not always easy
to notice.
Fielding's approach (outlined in his quote below) is all about GET-ting
the state of the two locations (source and destination), having the
client manipulate that state and send it back to the server for
processing. I see this as the client telling the server: "Ok, I see
the state of the collection as it exists right now. Please make the
state of that collection match what I am sending you."
Another big issue (not mentioned below) is access rights. To make a
MOVE/COPY truly uniform, it should work across servers. (MOVE this
resource from server-a/collection to server-b/collection, etc.). And
REST aside, HTTP itself does poorly when dealing with access semantics
for cross-server activity (there is currently only one "Authorization"
header).
But I think I see the frame of the solution: design a way to express the
state of a collection along with a way to express the modification of
that state such that a server can properly interpret the changes and
go about making the server match the requested state. Leaving aside
the authorization conundrum for a moment, it might be possible to
treat MOVE/COPY state expressions as a form of a 'diff-gram.' Upon
GET-ting the state of a collection, the client could modify that state
and generate a difference graph that expresses the changes made by the
client. Upon receiving this difference graph, the server could modify
the server collection to match the state sent by the client.
This pattern of GET-ting the collection state, modifying it, and sending
the modification back to the server may also hint at a way to handle
other actions on collections.
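Mike's GET-modify-send idea can be made concrete. Below is a hypothetical sketch in which a collection's state is reduced to a plain set of member names, and a MOVE becomes two small "diff-grams," one per parent collection. The representation and the diff format are invented for illustration, not a proposal for an actual media type.

```python
# Hypothetical sketch of the diff-gram idea: the client GETs the state of
# the source and destination collections (here modeled as sets of member
# names), performs the move locally, and derives the minimal change each
# server would need to apply. The diff format is invented.

def move_member(source: set, destination: set, name: str):
    """Return (source_diff, dest_diff) describing a MOVE of `name`."""
    if name not in source:
        raise KeyError(name)
    new_source = source - {name}
    new_destination = destination | {name}
    # Only the changes, not the full namespaces, would cross the network.
    source_diff = {"remove": sorted(source - new_source)}
    dest_diff = {"add": sorted(new_destination - destination)}
    return source_diff, dest_diff

src = {"report.txt", "notes.txt"}
dst = {"readme.txt"}
sdiff, ddiff = move_member(src, dst, "report.txt")
# sdiff == {"remove": ["report.txt"]}, ddiff == {"add": ["report.txt"]}
```

Each diff-gram would then be communicated to the collection that "owns" the affected namespace, which applies it barring conflicts, in line with Fielding's quote below.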
mca
http://amundsen.com/blog/
On Sun, Mar 22, 2009 at 04:50, Eric J. Bowman <eric@...> wrote:
> Mike Amundsen sent me this link...
>
> http://www.xent.com/pipermail/fork/2001-August/003191.html
>
> ...and asked if it changed my position that COPY and MOVE are RPC
> calls, and not RESTful. Well, yes, it does. Instead of saying it
> can't be done, I'm now saying I'll believe it when I see it...
>
> "
> Think of it another way. The URI namespace consists of a
> hierarchy of names (collections). COPY and MOVE semantics are
> not really operations on the target of the COPY and MOVE --
> in fact, they are operations on the parent collections that
> "own" the origin and destination namespaces. The set of names
> within a collection are the state of that collection as a
> resource. If you want to make these operations more
> REST-like, then define a suitable representation of a
> collection such that each namespace can be GET-retrieved,
> manipulated at the user agent, and then have the result of
> those manipulations communicated to the two namespaces in such
> a way that they can achieve the new state, barring conflicts,
> without unnecessary transfers across the network. This one
> isn't easy.
> "
>
> ...but honestly, Roy lost me there. I've been mulling this over for a
> couple of days and I'm not seeing it. Anyone?
>
> -Eric
>
>
> Then your first DELETE has the exact same semantics as GET, instead of
> meaning deletion. There is nothing wrong with a server sending an
> XForms representation to the client with a bunch of URIs the user can
> select for deletion, whereupon the client will send a discrete DELETE
> request to each URI at the touch of a button. But, you must GET that
> form.

I do not think a DELETE on the composite resource URI is the same as a GET. If you wanted a GET to do this, it would look like

GET uri?action=delete

It looks bad, doesn't it? As I understand it, your position is that there is no way towards a RESTful solution for this problem, though you thought that a POST with many DELETEs is better than PUT + DELETE. My position is that we can still work out a solution that follows the REST constraints and is implemented in HTTP.

Cheers, Dong

On Sun, Mar 22, 2009 at 12:05 AM, Eric J. Bowman <eric@...> wrote:
> Dong Liu wrote:
>
>> On page 56, section 9.7, it reads
>> "If the request passes through a cache and the Request-URI identifies
>> one or more currently cached entities, those entries SHOULD be
>> treated as stale."
>>
>> So how can the intermediary figure out the currently cached entities
>> identified by the Request-URI?
>
> Because those cached representations all have the same URI as the URI
> of the DELETE request. They may have different Content-Locations.
>
>> On page 97, section 13.10, it reads
>> "The effect of certain methods performed on a resource at the origin
>> server might cause one or more existing cache entries to become non-
>> transparently invalid. That is, although they might continue to be
>> "fresh," they do not accurately reflect what the origin server would
>> return for a new request on that resource."
>>
>> I think this means it is common that an update or delete request
>> yields "non-transparent" invalidation of one or more existing cache
>> entries.
>
> Of course.
If I send a DELETE request to my server, the only caches > that should stale that resource are the ones between me and my server. > The rest of the Internet is oblivious to the DELETE. Eventually, the > deleted resource will expire. If this is a problem, set stricter > cache-control headers. > >> >> Further on page 98 >> "There is no way for the HTTP protocol to guarantee that all such >> cache entries are marked invalid." >> > > Caches aren't always connected to the Internet, or may have very > expensive Internet connections (the hotspot on Mt. Everest), or some > other reason not to check with the origin server at all before serving > a cached representation. So no, there is no way to guarantee > expiration, this is the essence of "anarchic scalability" -- you cede a > bit of control over your resources to the world-at-large with zero > control over what happens as a result. > >> >> "Some HTTP methods MUST cause a cache to invalidate an entity. This >> is either the entity referred to by the Request-URI, or by the >> Location or Content-Location headers (if present)." >> >> Can we put many Content-Location headers in a DELETE request? >> > > A Content-Location header indicates content negotiation. It is not a > mechanism for piggybacking additional actions in one client request. > If I have a resource /image which serves image.gif or image.png > depending on client capability, and I PUT a new image.gif to /image, > what a cache should expire is image.gif not /image. > > A Location header indicates a redirect. It instructs the client to re- > try its request at a different URI. You can't, by virtue of PUT or > DELETE, remove that redirection -- the PUT or DELETE request itself gets > redirected to the proper location. So the invalidation must affect the > resource identified by the Location header, NOT the redirect itself. 
> >> >> "In order to prevent denial of service attacks, an invalidation based >> on the URI in a Location or Content-Location header MUST only be >> performed if the host part is the same as in the Request-URI. " >> > > This is just common sense, sending a DELETE request to one host in > hopes of deleting a resource on some other host would be a real > problem, due to the number of malicious deviant savages out there. > >> >> Then how about invalidating the caches after a "batch DELETE" as this >> "off-the-wall" approach? >> http://tech.groups.yahoo.com/group/rest-discuss/message/12280 >> > > I'll come back and answer this part later, I've been getting this from > Yahoo: > > "The group rest-discuss is temporarily unavailable" > > Which comes as no surprise, since in recent days my posts to this list > have been taking hours to show up on the web or in my inbox. -1 > >> >> Suppose we have a URI that identifies a collection of resources >> >> 1. the client sends DELETE URI to the server. The client knows what >> the URI refers to. >> >> 2. the server reply with a 200 or 202 with a representation that will >> send several DELETE's to the server each of which is for a member in >> the collection. >> > > Then your first DELETE has the exact same semantics as GET, instead of > meaning deletion. There is nothing wrong with a server sending an > Xforms representation to the client with a bunch of URIs the user can > select for deletion, whereupon the client will send a discrete DELETE > request to each URI at the touch of a button. But, you must GET that > form. > > -Eric > -- http://dongnotes.blogspot.com/
Dong Liu wrote: > > Maybe it is a good idea to start the discussion in a new thread. > > I did some homework by reading RFC 2616. > ... If you're interested in HTTP and caching, I'd strongly recommend *also* reading: <http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p6-cache-06.html> BR, Julian
At Sat, 21 Mar 2009 22:34:41 -0600, Eric J. Bowman wrote:
> Erik Hetzner wrote:
> > Deleting a bag of bagels has the same semantics as deleting a bagel;
> > it simply doesn’t matter that the bagels are individually addressable
> > as well as being part of a bag.
>
> No, throwing a bag of bagels in the trash isn't the same semantic as
> eating a bagel. If DELETE had batch-delete semantics, there wouldn't
> be any need for the BDELETE method.

Who said anything about eating a bagel?

In retrospect I shouldn’t have chosen a culinary metaphor. Instead of a bag of bagels, imagine a resource structured like a set of nesting dolls, [1] each of which is individually addressable; and yet, the semantics of deleting one are identical - whether or not the doll contains other dolls is irrelevant.

If you don’t accept this, ok. But I have yet to read an argument why this is wrong, besides an assertion that they have different semantics - which is wrong in the case of nested resources, in my opinion - and arguments about caching, which are irrelevant to the question of whether or not this can be modeled in a RESTful system.

best, Erik Hetzner

1. http://en.wikipedia.org/wiki/Matryoshka_doll
Hi guys,

I put together a series of blog entries with an idea on how to construct a HATEOAS REST API. The core of the idea is that the current REST APIs have some AJAXy (2006 - 2009) types of optimizations, but don't take advantage of HTML's more basic capabilities (~1995 - now) that are applicable to HATEOAS. Using that basic idea, I tried to figure out how to take advantage of HTML and Browser idioms in a REST API setting.

The road to Real REST APIs: http://www.jroller.com/Solomon/entry/the_road_to_real_rest
Proposal: REST/HATEOAS Java client: http://www.jroller.com/Solomon/entry/proposal_rest_hateaos_java_client
REST - HATEOAS Client communication: http://www.jroller.com/Solomon/entry/rest_hateoas_client_communication

I'm hoping to get feedback from this illustrious crowd on the following:

1) does the idea fully implement REST, including HATEOAS
2) is this idea implementable
3) future direction for the idea

Any feedback (even "it sucks, here's why...") would be appreciated. Note that it is a blog, and not a polished article... I don't mean to provide flame-bait, but it still happens :)

-Solomon
Solomon, I noticed the following comment in the first article: Website architectures are the only examples of architectures that I've seen that fully implement REST characteristics. I strongly suggest looking at CCXML <http://www.w3.org/TR/ccxml/> for a good example of a markup language designed for machine to machine RESTful interactions. CCXML is meant to control telephony resources to implement call control applications. If you're not a telephony person, the general state machine model is being distilled into and improved in SCXML <http://www.w3.org/TR/scxml/> . The key take away from these languages though is to model your client as a set of APIs accessible via the markup language. You GET an initial document that drives the client via those APIs. Part of the API allows the client to make HTTP requests to change resources and/or transition to new documents. In this model, you have a client that consists of a markup interpreter driving an underlying "platform". The platform has no dependencies on the markup language, let alone any specific entities in those documents. i.e. you never ever have client platform code that walks the document looking for key pieces of data (which is where most folks end up). Your markup just "runs" in the interpreter and invokes the platform. The platform can generate events up into the interpreter which are typically surfaced as events in the markup. The result of this model is that the client is completely decoupled from the server which, after all, is a key benefit of REST. Essentially, what I'm describing above is a general client design that is consistent with the HATEOAS constraint. There may be other designs that do this, but this one has worked well for me. You can think of an HTML web browser working this way. The platform is the renderer, chrome etc. User input generates DOM events in the markup. And so on. Hope this helps. 
Andrew Wahbe --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > > Hi guys, > > I put together a series of blog entries with an idea on how to construct a > HATEOAS REST API. The core of the idea is that the current REST APIs have > some AJAXy (2006 - 2009) types of optimizations, but don't take advantage of > HTML's more basic capabilities (~1995 - now) that are applicable to HATEOAS. > > Using that basic idea, I tried to figure out how to take advantage of HTML > and Browser idioms in a REST API setting. > > The road to Real REST APIs: > http://www.jroller.com/Solomon/entry/the_road_to_real_rest > Proposal: REST/HATEOAS Java client: > http://www.jroller.com/Solomon/entry/proposal_rest_hateaos_java_client > REST - HATEOAS Client communication: > http://www.jroller.com/Solomon/entry/rest_hateoas_client_communication > > I'm hoping to get feedback from this illustrious crowd on the following: > > 1) does the idea fully implement REST, including HATEOAS > 2) is this idea implementable > 3) future direction for the idea > > Any feedback (even "it sucks, here's why...") would be appreciated. Note > that it is a blog, and not a polished article... I don't mean to provide > flaim-bait, but it still happens :) > > -Solomon >
I was thinking about this a lot more last night. I'm not sure you need this type of markup language. Computers aren't humans. While a human can look at a web page and figure out how to enter information on the form there, a piece of code can't. For code, everything needs to be mostly predetermined and the interactions agreed upon beforehand.

I guess what I'm saying is that an atom:link should be good enough. The href defines the location on the network. The media type cements the contract.

A schema for the form would be good though, I think (Could just be HTML form description or some subset of that). Append the schema ID to the form when you send it, just like we do for XML.

Content-Type: application/x-www-form-urlencoded

form-schema-id=http://...
name=value
name2=value
name3=value

Or create a new mime type that allows you to specify the form schema id as an attribute:

Content-Type: application/apiml;form-schema=http://...

name=value
name2=value
name3=value

Solomon Duskis wrote:
>
> Hi guys,
>
> I put together a series of blog entries with an idea on how to construct
> a HATEOAS REST API. The core of the idea is that the current REST APIs
> have some AJAXy (2006 - 2009) types of optimizations, but don't take
> advantage of HTML's more basic capabilities (~1995 - now) that are
> applicable to HATEOAS.
>
> Using that basic idea, I tried to figure out how to take advantage of
> HTML and Browser idioms in a REST API setting.
> > The road to Real REST APIs: > http://www.jroller.com/Solomon/entry/the_road_to_real_rest > <http://www.jroller.com/Solomon/entry/the_road_to_real_rest> > Proposal: REST/HATEOAS Java client: > http://www.jroller.com/Solomon/entry/proposal_rest_hateaos_java_client > <http://www.jroller.com/Solomon/entry/proposal_rest_hateaos_java_client> > REST - HATEOAS Client communication: > http://www.jroller.com/Solomon/entry/rest_hateoas_client_communication > <http://www.jroller.com/Solomon/entry/rest_hateoas_client_communication> > > I'm hoping to get feedback from this illustrious crowd on the following: > > 1) does the idea fully implement REST, including HATEOAS > 2) is this idea implementable > 3) future direction for the idea > > Any feedback (even "it sucks, here's why...") would be appreciated. > Note that it is a blog, and not a polished article... I don't mean to > provide flaim-bait, but it still happens :) > > -Solomon > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Fri, Mar 27, 2009 at 11:22 AM, Bill Burke <bburke@...> wrote:
> I was thinking about this a lot more last night. I'm not sure you need
> this type of markup language. Computers aren't humans. While a human can
> look at a web page and figure out how to enter information on the form
> there, a piece of code can't. For code, everything needs to be mostly
> predetermined and the interactions agreed upon beforehand.
>
Specific interactions do need some predetermined flows, but the specifics of
the communication (i.e. here's the URL, and here's the headers for
content-type...) don't need to be.
Also, there are plenty of companies that built empires based on knowing how
to crawl HTML. In other words, with a ML, a piece of code can figure out
how to do some pretty interesting aggregation. For example, let's say an
enterprise has a whole bunch of semi-interconnected APIs based on this ML.
I can conceive of a central API aggregator that crawls those APIs and
creates a dynamic registry of available functionality. There can be a whole
bunch of interesting applications once we "engineer for serendipity."
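The aggregator imagined above is essentially a link crawler. Here is a toy sketch, with an in-memory dict of documents standing in for HTTP GETs; the link relations and URIs are invented for illustration.

```python
# Toy sketch of the "central API aggregator": follow links from a single
# entry URI and build a registry of discovered functionality. The docs,
# rels, and URIs are invented; a real crawler would GET each URI and
# parse the hypermedia format for its links.

docs = {
    "/": [("orders", "/orders"), ("customers", "/customers")],
    "/orders": [("search", "/orders/search")],
    "/customers": [],
    "/orders/search": [],
}

def crawl(start):
    registry, todo, seen = {}, [start], set()
    while todo:
        uri = todo.pop()
        if uri in seen:
            continue
        seen.add(uri)
        for rel, href in docs.get(uri, []):
            registry[rel] = href   # rel -> where that functionality lives
            todo.append(href)
    return registry

crawl("/")
# -> {"orders": "/orders", "customers": "/customers", "search": "/orders/search"}
```

The point of the sketch: the client starts with one URI and everything else in the registry is learned from links, which is exactly the HATEOAS property under discussion.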
> I guess what I'm saying is that an atom:link should be good enough. The
> href defines the location on the network. The media type cements the
> contract.
>
> A schema for the form would be good though, I think (Could just be HTML
> form description or some subset of that). Append the schema ID to the form
> when you send it, just like we do for XML.
>
I agree with this... forms are good. However, I think that API ML will need
a re-imagining of HTML forms. It could benefit from fields that describe
how to fill in a URL template (<form action="/account/{accountId}" <select
name="accountId" destination="urltemplate">...), or even HTTP headers (<form
..><select name="contentType" destination="header"
destination-detail="Content-Type"><option name="xml"
value="application/xml">)...
>
> Content-Type: application/x-www-form-urlencoded
>
> form-schema-id=http://...
> name=value
> name2=value
> name3=value
>
> Or create a new mime type that allows you to specify the form schema id as
> an attribute:
>
>
> Content-Type: application/apiml;form-schema=http://...
>
> name=value
> name2=value
> name3=value
>
>
> Solomon Duskis wrote:
>> [...]
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
Solomon Duskis wrote:
>
> On Fri, Mar 27, 2009 at 11:22 AM, Bill Burke <bburke@...> wrote:
>
>> I was thinking about this a lot more last night. I'm not sure you
>> need this type of markup language. Computers aren't humans. While
>> a human can look at a web page and figure out how to enter
>> information on the form there, a piece of code can't. For code,
>> everything needs to be mostly predetermined and the interactions
>> agreed upon beforehand.
>
> Specific interactions do need some predetermined flows, but the
> specifics of the communication (i.e. here's the URL, and here's the
> headers for content-type...) don't need to be.
>
> Also, there are plenty of companies that built empires based on knowing
> how to crawl HTML. In other words, with a ML, a piece of code can
> figure out how to do some pretty interesting aggregation. For example,
> let's say an enterprise has a whole bunch of semi-interconnected APIs
> based on this ML. I can conceive of a central API aggregator that
> crawls those APIs and creates a dynamic registry of available
> functionality. There can be a whole bunch of interesting applications
> once we "engineer for serendipity."

And I'm saying that serendipity exists in the media type already. You don't need to send it with the message. When you send around XML documents, do you embed the schema? Or do you just link to it? Or define a new mime type? Answer? You link to the schema in your message and maybe also define a new mime type.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
I'll definitely take a look at CCXML and SCXML and the technologies that use them... but I don't think that they are RESTful. They are useful, but don't meet the Hypermedia As The Engine Of Application State (HATEOAS) constraint. Roy Fielding has a long post as to what that means, but I think of it in terms of a browser user clicking on a link to move from one page/state to another state.

It seems like CCXML is oriented at describing the communication process: start the call, dial, connect... end the call. If so, then yes, there are states, but it's not "application state." I would categorize that type of state as "communication state."

SCXML seems to be a generic state machine. Generic state machines usually aren't used to capture "application state." They are usually used to capture workflow and "resource state"; for example, a mortgage loan is in the "initialized" state, "processing" state or "approved" state based on some interaction with the state machine. SCXML may be used to capture application state, but it doesn't seem to be hypertext driven.

I definitely could be wrong about all of this :)

-Solomon

On Thu, Mar 26, 2009 at 11:45 PM, wahbedahbe <andrew.wahbe@...> wrote: > Solomon, > I noticed the following comment in the first article: > > *Website architectures are the only examples of architectures that I've > seen that fully implement REST characteristics*. > > I strongly suggest looking at CCXML <http://www.w3.org/TR/ccxml/> for a > good example of a markup language designed for machine to machine RESTful > interactions. CCXML is meant to control telephony resources to implement > call control applications. If you're not a telephony person, the general > state machine model is being distilled into and improved in SCXML <http://www.w3.org/TR/scxml/>. > > The key take away from these languages though is to model your client as a > set of APIs accessible via the markup language. 
You GET an initial document > that drives the client via those APIs. Part of the API allows the client to > make HTTP requests to change resources and/or transition to new documents. > > In this model, you have a client that consists of a markup interpreter > driving an underlying "platform". The platform has no dependencies on the > markup language, let alone any specific entities in those documents. i.e. > you never ever have client platform code that walks the document looking for > key pieces of data (which is where most folks end up). Your markup just > "runs" in the interpreter and invokes the platform. The platform can > generate events up into the interpreter which are typically surfaced as > events in the markup. > > The result of this model is that the client is completely decoupled from > the server which, after all, is a key benefit of REST. Essentially, what I'm > describing above is a general client design that is consistent with the > HATEOAS constraint. There may be other designs that do this, but this one > has worked well for me. You can think of an HTML web browser working this > way. The platform is the renderer, chrome etc. User input generates DOM > events in the markup. And so on. > > Hope this helps. > > Andrew Wahbe > > > > --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > > > > Hi guys, > > > > I put together a series of blog entries with an idea on how to construct > a > > HATEOAS REST API. The core of the idea is that the current REST APIs have > > some AJAXy (2006 - 2009) types of optimizations, but don't take advantage > of > > HTML's more basic capabilities (~1995 - now) that are applicable to > HATEOAS. > > > > Using that basic idea, I tried to figure out how to take advantage of > HTML > > and Browser idioms in a REST API setting. 
> > > > The road to Real REST APIs: > > http://www.jroller.com/Solomon/entry/the_road_to_real_rest > > Proposal: REST/HATEOAS Java client: > > http://www.jroller.com/Solomon/entry/proposal_rest_hateaos_java_client > > REST - HATEOAS Client communication: > > http://www.jroller.com/Solomon/entry/rest_hateoas_client_communication > > > > I'm hoping to get feedback from this illustrious crowd on the following: > > > > 1) does the idea fully implement REST, including HATEOAS > > 2) is this idea implementable > > 3) future direction for the idea > > > > Any feedback (even "it sucks, here's why...") would be appreciated. Note > > that it is a blog, and not a polished article... I don't mean to provide > > flaim-bait, but it still happens :) > > > > -Solomon > > > >
I'd like to take a step back... Media types that capture information specific to a "domain object" are a premature optimization. If you have a "movie" media type, and an "actor" media type, you've literally optimized the HATEOAS out of it (pardon my French). You might be able to get from a "movie" to its "actors" and back again. However, how can you get from a "movie" or an "actor" to the "movie search" or the "My Queue" application states? That kind of problem becomes so unwieldy across a set of individualized media types that HATEOAS gets ignored.

In a generic media type, like my proposed "API-ML", you can also specify a movie and actor. However, you'd lose the localized performance of a more targeted media type. On the other hand, using a more generic markup language allows you to add a hypertext reference from the movie resource to an "actor" resource/application state or just as easily to the "movie search" and "My Queue" application states.

IMHO, a generic API ML will inherently reduce performance for individual requests, but will optimize "conversations."

-Solomon

On Fri, Mar 27, 2009 at 4:02 PM, Bill Burke <bburke@...> wrote: > > > Solomon Duskis wrote: > >> >> >> On Fri, Mar 27, 2009 at 11:22 AM, Bill Burke <bburke@...> wrote: >> >> I was thinking about this a lot more last night. I'm not sure you >> need this type of markup language. Computers aren't humans. While >> a human can look at a web page and figure out how to enter >> information on the form there, a piece of code can't. For code, >> everything needs to be mostly predetermined and the interactions >> agreed upon beforehand. >> >> >> Specific interactions do need some predetermined flows, but the specifics >> of the communication (i.e. here's the URL, and here's the headers for >> content-type...) don't need to be. >> >> Also, there are plenty of companies that built empires based on knowing >> how to crawl HTML. 
In other words, with a ML, a piece of code can figure >> out how to do some pretty interesting aggregation. For example, let's say >> an enterprise has a whole bunch of semi-interconnected APIs based on this >> ML. I can conceive of a central API aggregator that crawls those APIs and >> creates a dynamic registry of available functionality. There can be a whole >> bunch of interesting applications once we "engineer for serendipity." >> >> > And I'm saying that serendipity exists in the media type already. You don't > need to send it with the message. When you send around XML documents, do you > embed the schema? Or do you just link to it? Or define a new mime type? > Answer? You link to the schema in your message and maybe also define a new > mime type. > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
John Panzer wrote: > > Atom has defined both feed and entry resources. A feed document can > aggregate multiple entries together -- indeed this is the purpose of > its existence (to aggregate, order, and potentially filter). A GET > on an individual entry resource gives the entry's Atom representation > of course. A GET on a related feed gets that entry, plus its > 'sibling' entries. This is all based on the definition of the feed > itself of course. > > Question: Is this fundamentally different from the DELETE case > below? If so, how? > Yes, because GET doesn't have side effects, DELETE does. When I GET a collection, I receive links to the individual members, so if my client wants to delete all of those members, it has the URIs over which to iterate a bunch of DELETE requests. -Eric > > >>> No. If the client is creating a collection for the purpose of > >>> having the deletion of that collection delete member resources of > >>> the collection, then DELETE has the semantics of DELETE for member > >>> resources, while having the semantics of BDELETE for collection > >>> resources, but in REST you can't assign multiple semantics to a > >>> single method, because then you do not have "a consistent set of > >>> semantics for all resources". > >>> > >> Deleting a bag of bagels has the same semantics as deleting a > >> bagel; it simply doesn’t matter that the bagels are individually > >> addressable as well as being part of a bag. > >> > >> > > > > No, throwing a bag of bagels in the trash isn't the same semantic as > > eating a bagel. If DELETE had batch-delete semantics, there > > wouldn't be any need for the BDELETE method. > > > > -Eric > >
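Eric's client-side loop can be sketched directly: GET the collection, harvest the member links the server provided, and issue one DELETE per member. A minimal sketch; the feed content, the URIs, and the use of Atom "edit" links are illustrative assumptions, not anyone's actual API:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def member_delete_requests(feed_xml):
    """Given an Atom feed representation, return the (method, URI) pairs
    a client would issue to delete every member -- following links the
    server supplied rather than constructing URIs itself."""
    feed = ET.fromstring(feed_xml)
    requests = []
    for entry in feed.findall(ATOM + "entry"):
        for link in entry.findall(ATOM + "link"):
            if link.get("rel") == "edit":
                requests.append(("DELETE", link.get("href")))
    return requests

# A hypothetical two-member collection:
feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><link rel="edit" href="http://example.org/bagels/1"/></entry>
  <entry><link rel="edit" href="http://example.org/bagels/2"/></entry>
</feed>"""

print(member_delete_requests(feed))
```

Each DELETE here targets exactly one resource, so the uniform interface is preserved; only atomicity is lost, which is Eric's stated exception.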
Erik Hetzner wrote: > > In retrospect I shouldn’t have chosen a culinary metaphor. Instead of > a bag of bagels, imagine a resource structured like a set of nesting > dolls, [1] each of which is individually addressable; and yet, the > semantics of deleting one are identical - whether or not the doll > contains other dolls is irrelevant. > Except nowhere in REST are URIs defined to be hierarchical. If URIs were constrained to be hierarchical, then your new metaphor would be the guiding principle. REST does define URIs to be opaque, so the fact that these return 404... http://example.org/foo http://example.org/foo/ ...says absolutely nothing about the status of: http://example.org/foo/bar So, nesting dolls don't really apply here, also because a resource can be a member of multiple collections. > > If you don’t accept this, ok. But I have yet to read an argument why > this is wrong, besides an assertion that they have different semantics > - which is wrong in the case of nested resources, in my opinion - and > arguments about caching, which are irrelevant to the question of > whether or not this can be modeled in a RESTful system. > I haven't said it was a wrong thing to do, I have said to be careful not to violate the uniform interface constraint. If your API only accepts DELETE on collections, but disallows DELETE on individual members, then you have consistent semantics. They're the semantics of BDELETE, implied rather than spelled out in the entity, so it's stretching DELETE -- but not violating REST, IMO, provided that DELETE doesn't mean anything else on any other resources. Or, just violate the uniform interface constraint for this behavior, nothing wrong with a hybrid API provided it's recognized for what it is. 
Except for the special case of atomic batch-deletion, I just don't understand why it's so undesirable to follow the REST approach here: The REST approach to having a client request the deletion of all members of a collection is for the server to instruct the client to call the DELETE method of each member resource in turn. This is possible using XForms 1.1, or with XHR, either one obeying the HEAS constraint. The case this doesn't cover, atomic batch-deletion, is a good example use-case for Code on Demand. -Eric
Jo Størset wrote: > > I agree that the case of creating your own BDELETE or using POST as > a BDELETE wouldn't be that restful, since it implies having a > "method endpoint" at the server side. > There's a nuance here that people are missing; if I ever figure out a way to explain it in a concise fashion understandable to folks of all abilities and experience and languages, that doesn't get me yelled at, then I suppose I'll be able to write that elusive "REST made easy" book... > > But DELETE on two addresses identifying two of my resources, means > just that in HTTP. What that DELETE means in my resource model on > the server is not up to you to decide, but you can be told it through > our common out-of-band understanding of the resource representations > and interface. Of course, for this to work we both need to understand > and implement the meaning of the representation types and the > constrained interface. > Of course you can have one DELETE request remove more than one resource, in HTTP and even in REST. The topic of this thread is how to have the *client* orchestrate that request. Which is of course possible in a variety of ways, including having the server just "interpret it that way" when certain collections are deleted. It just isn't RESTful, but if your API is mostly REST but needs this sort of thing for ease-of-use or pragmatic reasons anyway, that's just dandy. Nothing wrong with a hybrid API. But a hybrid API is the best you can do, if you are relying on a common out-of-band understanding of any sort, to change DELETE into BDELETE for some resources but not others. If the intention of the API is to allow the client to choose more than one resource to be deleted in a single request, instead of instructing the client to perform multiple DELETE requests, then it just isn't REST. Deleting other resources as a side effect is one thing; having the client dictate that side effect is another. 
> > If you don't give the bag of bagels an address, then you will find > you have a problem trying to delete it through "restful" http. So if > it's significant in your resource model, give it an address (or > more). It is possible to model this restfully over http; removing > resources at other addresses as a response to DELETE is *NOT* in > itself "unrestful" use of http. Don't mix the two cases together. > So long as the deletion of those other resources is a side-effect, and not somehow part of the client request. If the documentation for your API is describing how the client can create a new resource for the purpose of batch-deleting its members with a single request, then it's this out-of-band knowledge describing the interface, which is not using self-descriptive messages or HEAS. This couples together the evolution of client and server. -Eric
Let's try to get back to the fundamentals here. Does anyone seriously disagree with this statement of mine? The REST approach to having a client request the deletion of all members of a collection is for the server to instruct the client to call the DELETE method of each member resource in turn. This is possible using XForms 1.1, or with XHR, either one of which obeys the HEAS constraint. The case this doesn't cover, atomic batch-deletion, is a good example use-case for Code on Demand. -Eric
Solomon, yes I'm definitely aware of the HATEOAS constraint, and though it may be a bit buried in the specs, that concept is there. In CCXML, you use <fetch> and <goto> to transition from document to document. So the web of linked CCXML documents is a state machine as described in Fielding's thesis, and then each document contains a mini state machine as well. SCXML has these transitions as well but with a twist. In SCXML they work like a gosub rather than a goto. This is done by using the src attribute on <state> (see section 3.11) or by using <invoke>. This twist on the model is interesting and has its ups and downs but I don't think it's entirely un-RESTful though I concede that it's up for debate. I think the take-away from understanding these formats should be that different kinds of markup are possible that are better suited for machine-to-machine interactions. I've been trying to encourage folks to understand these formats to have other examples to draw from when thinking about REST. People tend to not look beyond HTML and often end up where Bill did in his response to this thread saying that: "Computers aren't humans. While a human can look at a web page and figure out how to enter information on the form there, a piece of code can't. For code, everything needs to be mostly predetermined and the interactions agreed upon beforehand." No offense to Bill, but that completely misses the point of REST. One of the biggest benefits you will get from a RESTful design is the decoupling of the client and server. This is why the HATEOAS constraint is so important. CCXML is a counter-example to Bill's assertion (an assertion I've heard over and over again on this list). There's no coupling between a CCXML client and server beyond HTTP, CCXML and the initial URL of the application. A CCXML client can construct a POST using the information in a CCXML document. It's RESTful machine-to-machine interaction and is definitely worth understanding. 
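Andrew's description of the <fetch>/<goto> model, a web of linked documents that together form one large state machine, can be caricatured in a few lines. This is only a toy model of the idea, not CCXML itself; the document URIs, events, and transition tables are all made up:

```python
# Toy model of CCXML-style document transitions: each "document" is a
# small state machine whose transitions either stay local or name the
# URI of the next document to fetch (the <goto>-like case). The overall
# application state machine is the web of linked documents.
documents = {
    "http://example.org/start.ccxml": {
        "incoming_call": ("answered", None),
        "answered": ("done", "http://example.org/next.ccxml"),
    },
    "http://example.org/next.ccxml": {
        "connected": ("done", None),
    },
}

def run(start_uri, events):
    """Drive the client from an initial URI; only links inside the
    documents determine where the client ends up."""
    uri = start_uri
    state = None
    for event in events:
        state, next_uri = documents[uri][event]
        if next_uri:  # transition to another document, like <goto>
            uri, state = next_uri, None
    return uri, state

print(run("http://example.org/start.ccxml", ["incoming_call", "answered"]))
```

The client here knows only the initial URI and the (hypothetical) document format, which is Andrew's point about decoupling.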
Anyways, thanks for taking the time to look at these specs. I've pointed them out before on this list but got no response. Hopefully, they give you some good insights. Andrew --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > > I'll definitely take a look at CCXML and SCXML and the technologies that use > them... but I don't think that they are RESTful. They are useful, but > don't meet the Hypermedia As The Engine Of Application State (HATEOAS): > > REST has a constraint of Hypermedia As The Engine Of Application State > (HATEOAS). Roy Fielding has a long post as to what that means, but I think > of it in terms of a browser user clicking on a link to move from one > page/state to another state. > > It seems like CCXML's oriented at describing the communication process: > start the call, dial, connect... end the call. If so, then yes, there are > states, but it's not "application state." I would categorize that type of > state as "communication state" > > SCXML seems to be a generic state machine. Generic state machines usually > aren't used to capture "application state." They are usually used to > capture work flow and "resource state"; for example a mortgage loan is in > the "initialized" state, "processing" state or "approved" state based on > some interaction with the state machine. SCXML may be used to capture > application state, but it doesn't seem to be hypertext driven. > > I definitely could be wrong about all of this :) > > -Solomon > >
Eric J. Bowman wrote: > > Let's try to get back to the fundamentals here. Does anyone seriously > disagree with this statement of mine? > Mike A. did, but he didn't post it to the list... Allow me to rephrase: The REST approach to having a client request the deletion of one or more resources, is for the server to instruct the client, using hypermedia, how to call the DELETE method for each of the desired URIs. The case this doesn't cover, atomic batch-deletion, is a good example use-case for Code on Demand. Mike points out that my phrasing ignored the possibility of URI templates, JSON etc., or concurrent DELETE requests from clients. -Eric
We may be talking past each other; what would you expect if you applied DELETE to the collection URI rather than GET? On 3/29/09, Eric J. Bowman <eric@...> wrote: > John Panzer wrote: > >> >> Atom has defined both feed and entry resources. A feed document can >> aggregate multiple entries together -- indeed this is the purpose of >> its existence (to aggregate, order, and potentially filter). A GET >> on an individual entry resource gives the entry's Atom representation >> of course. A GET on a related feed gets that entry, plus its >> 'sibling' entries. This is all based on the definition of the feed >> itself of course. >> >> Question: Is this fundamentally different from the DELETE case >> below? If so, how? >> > > Yes, because GET doesn't have side effects, DELETE does. When I GET a > collection, I receive links to the individual members, so if my client > wants to delete all of those members, it has the URIs over which to > iterate a bunch of DELETE requests. > > -Eric > >> >> >>> No. If the client is creating a collection for the purpose of >> >>> having the deletion of that collection delete member resources of >> >>> the collection, then DELETE has the semantics of DELETE for member >> >>> resources, while having the semantics of BDELETE for collection >> >>> resources, but in REST you can't assign multiple semantics to a >> >>> single method, because then you do not have "a consistent set of >> >>> semantics for all resources". >> >>> >> >> Deleting a bag of bagels has the same semantics as deleting a >> >> bagel; it simply doesn’t matter that the bagels are individually >> >> addressable as well as being part of a bag. >> >> >> >> >> > >> > No, throwing a bag of bagels in the trash isn't the same semantic as >> > eating a bagel. If DELETE had batch-delete semantics, there >> > wouldn't be any need for the BDELETE method. >> > >> > -Eric >> > >
On Mar 29, 2009, at 6:13 PM, Eric J. Bowman wrote: > Erik Hetzner wrote: > > > > > In retrospect I shouldn’t have chosen a culinary metaphor. > Instead of > > a bag of bagels, imagine a resource structured like a set of nesting > > dolls, [1] each of which is individually addressable; and yet, the > > semantics of deleting one are identical - whether or not the doll > > contains other dolls is irrelevant. > > > > Except nowhere in REST are URIs defined to be hierarchical. > REST does not define URIs (styles do not define an architecture). The URI spec does, and it reserves "/" to mean hierarchical. ....Roy
On Mar 29, 2009, at 6:27 PM, Eric J. Bowman wrote: > Let's try to get back to the fundamentals here. Does anyone seriously > disagree with this statement of mine? > > The REST approach to having a client request the deletion of all > members > of a collection, is for the server to instruct the client to call the > DELETE method of each member resource in turn. > No, that is only one of many ways. The most common is to define a resource that has the semantics of all members of the collection and apply DELETE to that one resource. The simplest is to just delete the collection. ....Roy
John Panzer wrote: > > We may be talking past each other; what would you expect if you > applied DELETE to the collection URI rather than GET? > I wouldn't expect anything. Atom Protocol leaves the deletion of collections undefined, which means it could go either way. Since members can belong to multiple collections, it seems wiser to me, to not delete individual members when a collection is deleted. Unless my application logic constrains resources to only be members of one collection. -Eric
On Tue, Mar 31, 2009 at 1:26 AM, Eric J. Bowman <eric@...> wrote: > John Panzer wrote: > >> >> We may be talking past each other; what would you expect if you >> applied DELETE to the collection URI rather than GET? >> > > I wouldn't expect anything. Atom Protocol leaves the deletion of > collections undefined, which means it could go either way. Since > members can belong to multiple collections, it seems wiser to me, to > not delete individual members when a collection is deleted. Unless my > application logic constrains resources to only be members of one > collection. > You raise a valuable point that bears more consideration. There was an interesting discussion thread on rest-discuss that's related [1]. How do we deal with fully dependent members of a collection (i.e., members that don't have any meaning/existence outside the collection, and thus can be destroyed when the collection is destroyed) versus self-standing members that happen to be aggregated in a collection (in which case deletion should "undo" the aggregation but NOT delete the individual members)? --peter keane [1] http://tech.groups.yahoo.com/group/rest-discuss/message/11383 > -Eric >
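Peter's distinction between fully dependent members and mere aggregations can be sketched as server-side logic. A minimal in-memory model, with every class name and the `owns_members` flag invented for illustration:

```python
# Hypothetical server-side model of the two collection flavors: one that
# *owns* its members (deleting the box destroys the chocolates) and one
# that merely aggregates independent resources (deleting the road sign
# leaves the towns standing).
class Collection:
    def __init__(self, owns_members):
        self.owns_members = owns_members
        self.member_uris = set()

class Store:
    def __init__(self):
        self.resources = {}    # uri -> representation
        self.collections = {}  # uri -> Collection

    def delete_collection(self, uri):
        coll = self.collections.pop(uri)
        if coll.owns_members:  # whole-part: members die with the collection
            for member in coll.member_uris:
                self.resources.pop(member, None)
        # otherwise only the aggregation disappears; members survive

store = Store()
store.resources["/doc/1"] = "..."
box = Collection(owns_members=False)   # a pure aggregation
box.member_uris.add("/doc/1")
store.collections["/box"] = box
store.delete_collection("/box")
print("/doc/1" in store.resources)  # True: the member survives
```

Which flavor a given collection is remains resource semantics, which is why the thread keeps circling back to how the client learns it.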
Bill de hOra wrote: > > Suppose I had a collection resource and a document resource. > > /collection/foo > /document/bar > > and I want to add that document to the collection. What idioms are > people using to add (and remove) items to the collection, or in > general tell a server to relate two resources? > I consider this a "tagging" problem. Assuming /document/bar is Atom, I would GET it, and PUT it back with the following line added: <category term='foo'/> Actually, I use PATCH and 'application/atomcat+xml' to add and remove tags, but PUT can also be used to remove <category/> tags. The 'scheme' attribute contains the /collection/ path, while the /document/ path is implicit in the request URI. The server logic is written such that /collection/foo is a stored search for documents containing <category term='foo'/>. This search is re-executed and given a new ETag as required by the server, which detects re-tagging in PUT (or PATCH) requests. Since no resources are being created or deleted in tagging operations, I wouldn't use POST or DELETE at all. But this solution is still REST, and Atom Protocol (except for the PATCH bit). -Eric
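Eric's tagging idiom, GET the entry and PUT it back with a <category> added, can be sketched as plain document manipulation. The helper and the entry below are illustrative assumptions, not his actual API:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

def add_category(entry_xml, term, scheme=None):
    """Return the entry with a <category> element appended -- the
    document a client would PUT back (or express as a PATCH) to 'tag'
    the resource into a collection."""
    entry = ET.fromstring(entry_xml)
    cat = ET.SubElement(entry, "{%s}category" % ATOM)
    cat.set("term", term)
    if scheme:
        cat.set("scheme", scheme)
    return ET.tostring(entry, encoding="unicode")

entry = '<entry xmlns="http://www.w3.org/2005/Atom"><title>bar</title></entry>'
tagged = add_category(entry, "foo", scheme="/collection/")
print('term="foo"' in tagged)  # True
```

Removing a tag is the mirror image: parse, drop the matching <category>, and PUT (or PATCH) the result back.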
That was pretty much my thought as well back when this thread was alive. But what about when /document/bar is not under your control (i.e., you can't get/put it)? This is a similar issue that the activities feed folks are addressing right now. Likewise, the OAI-ORE effort was all based on creating aggregations of web resources. Brings up interesting issues, I think. Your approach (stored search for documents w/ a particular category) gets at something I've often said, which is that we need a good standard query mechanism for doing queries with "filter by category" or some such. I actually like what Google Base specifies in this area. (I know that OpenSearch is another effort in this area). --peter On Tue, Mar 31, 2009 at 10:26 AM, Eric J. Bowman <eric@...> wrote: > Bill de hOra wrote: > >> >> Suppose I had a collection resource and a document resource. >> >> /collection/foo >> /document/bar >> >> and I want to add that document to the collection. What idioms are >> people using to add (and remove) items to the collection, or in >> general tell a server to relate two resources? >> > > I consider this a "tagging" problem. Assuming /document/bar is Atom, I > would GET it, and PUT it back with the following line added: > > <category term='foo'/> > > Actually, I use PATCH and 'application/atomcat+xml' to add and remove > tags, but PUT can also be used to remove <category/> tags. The 'scheme' > attribute contains the /collection/ path, while the /document/ path is > implicit in the request URI. > > The server logic is written such that /collection/foo is a stored search > for documents containing <category term='foo'/>. This search is > re-executed and given a new ETag as required by the server, which detects > re-tagging in PUT (or PATCH) requests. > > Since no resources are being created or deleted in tagging operations, > I wouldn't use POST or DELETE at all. But this solution is still REST, > and Atom Protocol (except for the PATCH bit). > > -Eric >
The NASA Earth Science Data Systems Standards Process Group (SPG) is
seeking someone to speak about REST in the context of Earth science
web services on July 7 in Santa Barbara, CA. Is there anyone in the
Southern California area (local is better, since there's no travel
budget) who would be interested in helping NASA's future data
architectures be fully informed by REST and webarch? Please contact
Allan Doyle <adoyle@...> if you're interested in
speaking.
Wiki page: http://wiki.esipfed.org/index.php/SPG_Agenda_July_7_2009
From the wiki:
One role of the NASA Earth Science Data Systems Standards Process
Group (SPG) http://www.esdswg.net/spg is to develop a growing list of
stable, operationally ready standards; as well as a body of technical
notes related to implementation of standards, including specifications
and practices that could develop into standards.
The SPG is often called upon within NASA for information and advice
regarding specs, standards, and practices that are likely to have a
large impact on NASA's internal data management processes as well as
on NASA's role as a partner to international activities such as CEOS
and GEO/GEOSS.
The SPG traditionally chooses a topic of interest as a theme for a
technical information exchange session at each of its meetings.
On Tuesday, July 7, 2009, at the University of California, Santa
Barbara, the SPG will host a technical session to investigate the
topic of web services as it relates to Earth science data management.
Our goal is to investigate and provide answers to these questions:
1. What is the current state of the art in web services in this
context? Are there specifications, standards, or practices about which
the SPG should be seeking out RFC submissions? State of the art might
imply that there is not yet widespread operational use, but that such
use is likely to occur in the future.
2. What is being used in production environments? For those web
service specifications and practices that are in widespread,
operational use, which ones should be entered into the SPG process?
3. What experiments and trials are being done using new ideas or
new ways of using existing ideas? Are there activities that could be
documented as tech notes so that potential adopters can be made aware
of them?
Current leading-edge activities in this area suggest that existing OGC
specifications and Service Oriented Architectures based on UDDI, WSDL,
SOAP, etc. are mismatched with emerging architectural patterns such as
REST. How can the SPG help discern a pathway that will enable
construction of robust, interoperable data systems?
We would like to invite speakers knowledgeable about this topic in
this context to present their work during a technical session,
followed by a round table discussion answering the questions posed
above.
[edit] Speakers
Please contact Allan Doyle <adoyle@...> if you're
interested in speaking. We'll update the list below as people are added.
* Allan Doyle, NASA SPG - session leader, and introduction
* Karl Benedict - Director, Earth Data Analysis Center, UNM
--
Sean Gillies
Software Engineer
Institute for the Study of the Ancient World
New York University
wahbedahbe wrote: > "Computers aren't humans. While a human can look at a web page > and figure out how to enter information on the form there, a > piece of code can't. For code, everything needs to be mostly > predetermined and the interactions agreed upon beforehand." > > No offense to Bill, but that completely misses the point of REST. One of > the biggest benefits you will get from a RESTful design is the > decoupling of the client and server. This is why the HATEOAS constraint > is so important. > (I want to argue my point so that a) I'm not misunderstood b) to make sure I have a point ;) ) What HATEOAS decouples from a machine perspective (NOT A HUMAN ONE!) is the relationship to the endpoint. Specifically, where it lives and to a lesser extent, what message you are exchanging. A service cannot change its acceptable media types without breaking existing clients. So, IMO, the only thing HATEOAS really ends up decoupling from a machine-to-machine perspective is relationship location. This is still much better than relying on a directory or naming service because you can define a conversation contract within the schema of the media type. This was the point I was making to Solomon. It's the schema that defines the contract. The message itself is not self-describing. The message's reference to its schema is. Back to Solomon's API-ML example, the machine client already has to know the format of the form it will transmit as it traverses relationships, so the self-describing part of his message is irrelevant. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Mon, Mar 30, 2009 at 11:26 PM, Eric J. Bowman <eric@...>wrote: > John Panzer wrote: > > > > > We may be talking past each other; what would you expect if you > > applied DELETE to the collection URI rather than GET? > > > > I wouldn't expect anything. Atom Protocol leaves the deletion of > collections undefined, which means it could go either way. Since > members can belong to multiple collections, it seems wiser to me, to > not delete individual members when a collection is deleted. Unless my > application logic constrains resources to only be members of one > collection. Still talking past each other :). My question was not about AtomPub, but about REST; if one were defining an AtomPub-like protocol using the REST architectural style, and decided to define DELETE in this way, would it be a violation of REST's uniform interface constraint? My opinion is no, it's fine. -John > > > -Eric >
For me the "common sense" answer is that the collection is a resource and a DELETE to that resource does not entail deletion of resources referenced in the collection's representation. Indeed that would be a bizarre state of affairs. For REST in general, IMO, it's undefined as REST semantics don't include either quantifiers or iterators. Nor does HTTP, which is why things like feed formats had to be invented. Bill Peter Keane wrote: > > > On Tue, Mar 31, 2009 at 1:26 AM, Eric J. Bowman <eric@... > <mailto:eric%40bisonsystems.net>> wrote: > > John Panzer wrote: > > > >> > >> We may be talking past each other; what would you expect if you > >> applied DELETE to the collection URI rather than GET? > >> > > > > I wouldn't expect anything. Atom Protocol leaves the deletion of > > collections undefined, which means it could go either way. Since > > members can belong to multiple collections, it seems wiser to me, to > > not delete individual members when a collection is deleted. Unless my > > application logic constrains resources to only be members of one > > collection. > > > > You raise a valuable point that bears more consideration. There was > an interesting discussion thread on rest-discuss that's related [1]. > How do we deal with fully dependent members of a collection (i.e., > don't have any meaning/existence outside the collection, and thus can > be destroyed when the collection is destroyed) versus self-standing > members that happen to be aggregated in a collection (in which case > deletion should "undo" the aggregation but NOT delete the individual > members). > > --peter keane > > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/11383 > <http://tech.groups.yahoo.com/group/rest-discuss/message/11383> > > > -Eric > > > >
Hi Bill, > For me the "common sense" answer is that the collection is a resource > and a DELETE to that resource does not entail deletion of resources > referenced in the collection's representation. Indeed that would be a > bizarre state of affairs. That depends. I see two cases which I'll try to illustrate by (dumb) analogies: 1. Road signs - I delete a road sign, the towns which it references do not get deleted themselves *. 2. Boxes of chocolates - if I delete a box of chocolates, I expect the chocolates themselves to be destroyed. So it's really up to the semantics of the resource I want to delete, no? Jim * Offer not applicable to Slough.
On Tue, Mar 31, 2009 at 2:19 PM, Jim Webber <jim@...> wrote: > Hi Bill, > > > For me the "common sense" answer is that the collection is a resource > > and a DELETE to that resource does not entail deletion of resources > > referenced in the collection's representation. Indeed that would be a > > bizarre state of affairs. > > That depends. I see two cases which I'll try to illustrate by (dumb) > analogies: > > 1. Road signs - I delete a road sign, the towns which it references do > not get deleted themselves *. > 2. Boxes of chocolates - if I delete a box of chocolates, I expect the > chocolates themselves to be destroyed. > > So it's really up to the semantics of the resource I want to delete, no? It could well be, but now your clients have to know about different types of resources and how to map URLs to semantics, which is a different architecture style than REST. If you treat all resources uniformly, since everyone understands DELETE to affect a single resource, the only thing getting deleted is that single resource. The last set of semantics is the hypermedia, where you have a larger vocabulary to communicate more intents. You can differentiate between hide, delete and archive, deal with individual items and collections, etc. In my experience, it's best to never let the hypermedia alter the protocol semantics, so only ever use DELETE to remove a single resource. Assaf > > > Jim > > > * Offer not applicable to Slough. > > > ------------------------------------ > > Yahoo! Groups Links > > > >
> From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@...m] On Behalf Of Jim Webber > Sent: Tuesday, March 31, 2009 5:20 PM > To: Rest List > Subject: Re: [rest-discuss] An approach to deleting multiple resources > use one DELETE > > Hi Bill, > > > For me the "common sense" answer is that the collection is a resource > > and a DELETE to that resource does not entail deletion of resources > > referenced in the collection's representation. Indeed that would be a > > bizarre state of affairs. > > That depends. I see two cases which I'll try to illustrate by (dumb) > analogies: > > 1. Road signs - I delete a road sign, the towns which it references do > not get deleted themselves *. > 2. Boxes of chocolates - if I delete a box of chocolates, I expect the > chocolates themselves to be destroyed. > > So it's really up to the semantics of the resource I want to delete, > no? Looking at Wikipedia reveals: <http://en.wikipedia.org/wiki/REST#RESTful_Web_services> When the resource is a collection URI, DELETE says: "Not generally used. Meaning defined as delete the entire collection", Jim's (2). However, the statement before the tables says: "The following table shows how the HTTP verbs are *typically* used to implement a web service". So that leaves some wiggle room. Jim's two use cases have merit. In (1) there is an associative relationship between a road sign and towns that wish to have their name appear on it. In (2) there is a whole-part relationship between the box and the individual chocolates. Andy.
On Tue, Mar 31, 2009 at 4:42 PM, Houghton,Andrew <houghtoa@...> wrote: >> From: rest-discuss@yahoogroups.com [mailto:rest- >> discuss@yahoogroups.com] On Behalf Of Jim Webber >> Sent: Tuesday, March 31, 2009 5:20 PM >> To: Rest List >> Subject: Re: [rest-discuss] An approach to deleting multiple resources >> use one DELETE >> >> Hi Bill, >> >> > For me the "common sense" answer is that the collection is a resource >> > and a DELETE to that resource does not entail deletion of resources >> > referenced in the collection's representation. Indeed that would be a >> > bizarre state of affairs. >> >> That depends. I see two cases which I'll try to illustrate by (dumb) >> analogies: >> >> 1. Road signs - I delete a road sign, the towns which it references do >> not get deleted themselves *. >> 2. Boxes of chocolates - if I delete a box of chocolates, I expect the >> chocolates themselves to be destroyed. >> >> So it's really up to the semantics of the resource I want to delete, >> no? > > Looking at Wikipedia reveals: > > <http://en.wikipedia.org/wiki/REST#RESTful_Web_services> > > When the resource is a collection URI, DELETE says: "Not generally used. Meaning defined as delete the entire collection", Jim's (2). However, the statement before the tables says: "The following table shows how the HTTP verbs are *typically* used to implement a web service". So that leaves some wiggle room. > > Jim's two use cases have merit. In (1) there is an associative relationship between a road sign and towns that wish to have their name appear on it. In (2) there is a whole-part relationship between the box and the individual chocolates. > In practice, I'd simply default to deleting only the collection resource (the "aggregation") and not individual items, leaving that to some non-REST backend garbage collection (in the case of the chocolates analogy), if necessary. It raises the obvious question of what is meant by a "collection": the actual set of things? (see Roy F. 
comment earlier in this thread). OR an aggregation of things (i.e., just pointers). I certainly tend to think of Atom/AtomPub collections as the latter. --peter > > Andy. > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Tue, Mar 31, 2009 at 2:35 PM, Assaf Arkin <assaf@...> wrote: > On Tue, Mar 31, 2009 at 2:19 PM, Jim Webber <jim@...> wrote: >> >> Hi Bill, >> >> > For me the "common sense" answer is that the collection is a resource >> > and a DELETE to that resource does not entail deletion of resources >> > referenced in the collection's representation. Indeed that would be a >> > bizarre state of affairs. >> >> That depends. I see two cases which I'll try to illustrate by (dumb) >> analogies: >> >> 1. Road signs - I delete a road sign, the towns which it references do >> not get deleted themselves *. >> 2. Boxes of chocolates - if I delete a box of chocolates, I expect the >> chocolates themselves to be destroyed. >> >> So it's really up to the semantics of the resource I want to delete, no? > > It could well be, but now your clients have to know about different types of > resources and how to map URLs to semantics, which is a different > architecture style than REST. > If you treat all resources uniformly, since everyone understands DELETE to > affect a single resource, the only thing getting deleted is that single > resource. If I'm using a REST application based on HATEOAS, where the server gave me the URI of the collection, and an OPTIONS call to that URI says it responds to DELETE, I think it's perfectly reasonable for a client to believe in what the server is declaring. Whether the resource being deleted is, in fact, a collection of other resources is not relevant. > The last set of semantics is the hypermedia, where you have a larger > vocabulary to communicate more intents. You can differentiate between hide, > delete and archive, deal with individual items and collections, etc. > In my experience, it's best to never let the hypermedia alter the protocol > semantics, so only ever use DELETE to remove a single resource. 
Unix shells have essentially always let you have your choice about deleting subdirectories: * "rmdir foo" -- delete subdirectory foo, but only if it is empty. * "rm -rf foo" -- delete subdirectory foo and all of its (potentially nested) contents. Both operations are semantically useful for this context in different use cases. Clearly the second is a "nice to have" because you can implement recursive loops to accomplish the same purpose, but if the client knows what it wants to do (delete this resource, whatever that happens to mean), and the server gives the client a URI to the collection that accepts DELETE, it should darn well do the deed if it is asked to. > Assaf Craig >> >> Jim >> >> >> * Offer not applicable to Slough. 
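Jim's two analogies map neatly onto the rmdir / rm -rf split Craig mentions. A toy sketch of the distinction, using an in-memory dict as a stand-in for a server's resource store (the URIs and names here are illustrative, not from any real API):

```python
class Conflict(Exception):
    """Raised when a plain delete hits a non-empty collection."""

def delete(store, uri, recursive=False):
    # Entries whose URI extends `uri` with "/" are members of the
    # collection. A plain delete refuses if any exist, like `rmdir`;
    # recursive=True sweeps them away too, like `rm -rf`.
    children = [u for u in store if u.startswith(uri + "/")]
    if children and not recursive:
        raise Conflict("collection %s is not empty" % uri)
    for u in children:
        del store[u]
    del store[uri]

store = {"/vms": "collection", "/vms/1": "vm one", "/vms/2": "vm two"}
try:
    delete(store, "/vms")              # rmdir-style: refused, members exist
except Conflict:
    pass
delete(store, "/vms", recursive=True)  # rm -rf-style: members go too
```

Which of the two a DELETE on a collection URI should mean is exactly the question under debate; the code only shows that both are coherent choices.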
On Tue, Mar 31, 2009 at 3:37 PM, Craig McClanahan <craigmcc@...> wrote: > On Tue, Mar 31, 2009 at 2:35 PM, Assaf Arkin <assaf@...> wrote: > > On Tue, Mar 31, 2009 at 2:19 PM, Jim Webber <jim@...> wrote: > >> > >> Hi Bill, > >> > >> > For me the "common sense" answer is that the collection is a resource > >> > and a DELETE to that resource does not entail deletion of resources > >> > referenced in the collection's representation. Indeed that would be a > >> > bizarre state of affairs. > >> > >> That depends. I see two cases which I'll try to illustrate by (dumb) > >> analogies: > >> > >> 1. Road signs - I delete a road sign, the towns which it references do > >> not get deleted themselves *. > >> 2. Boxes of chocolates - if I delete a box of chocolates, I expect the > >> chocolates themselves to be destroyed. > >> > >> So it's really up to the semantics of the resource I want to delete, no? > > > > It could well be, but now your clients have to know about different types > of > > resources and how to map URLs to semantics, which is a different > > architecture style than REST. > > If you treat all resources uniformly, since everyone understands DELETE > to > > affect a single resource, the only thing getting deleted is that single > > resource. > > If I'm using a REST application based on HATEOAS, where the server > gave me the URI of the collection, and an OPTIONS call to that URI > says it responds to DELETE, I think it's perfectly reasonable for a > client to believe in what the server is declaring. Whether the > resource being deleted is, in fact, a collection of other resources is > not relevant. a) server gives you a representation of a "nuke-collection" action in which it tells you which URI to use to operate on a resource. b) more precisely it tells you to use DELETE, but the semantics of delete are extended to delete a lot of related resources. c) it doesn't tell you to use DELETE, but you know to OPTION and that DELETE signifies this particular semantic. 
Each step adds more coupling between your client applications and the servers because it requires more understanding of special cases. I'm generally leaning towards a). Assaf > > > The last set of semantics is the hypermedia, where you have a larger > > vocabulary to communicate more intents. You can differentiate between > hide, > > delete and archive, deal with individual items and collections, etc. > > In my experience, it's best to never let the hypermedia alter the > protocol > > semantics, so only ever use DELETE to remove a single resource. > > Unix shells have essentially always let you have your choice about > deleting subdirectories: > > * "rmdir foo" -- delete subdirectory foo, but only if it is empty. > > * "rm -rf foo" -- delete subdirectory foo and all of its (potentially > nested) contents. > > Both operations are semantically useful for this context in different > use cases. Clearly the second is a "nice to have" because you can > implement recursive loops to accomplish the same purpose, but if the > client knows what it wants to do (delete this resource, whatever that > happens to mean), and the server gives the client a URI to the > collection that accepts DELETE, it should darn well do the deed if it > is asked to. If the spec for DELETE says that applying it on foo also applies to foo/bar and foo/baz then yes (the spec for rm spells that directly). If the spec doesn't say that, then you have your understanding of the protocol and I have mine, and we're no longer talking the same language. Assaf > > > > Assaf > > Craig > > >> > >> Jim > >> > >> > >> * Offer not applicable to Slough. 
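Assaf's option a) can be made concrete: the representation itself carries a link whose relation names the destructive semantics, so the client needs to understand only the "nuke-collection" relation, never a special-cased DELETE. A hypothetical sketch (the rel name, URIs, and JSON-ish shape are invented for illustration):

```python
# Hypothetical collection representation: link relations carry the
# semantics, so the client discovers the destructive transition from
# the representation instead of extending its notion of DELETE.
collection = {
    "uri": "/vms",
    "items": ["/vms/1", "/vms/2"],
    "links": [
        {"rel": "self", "href": "/vms", "method": "GET"},
        {"rel": "nuke-collection", "href": "/vms/nuke", "method": "POST"},
    ],
}

def find_action(representation, rel):
    """Return (method, href) for a link relation, or None when the
    server does not offer that transition in the current state."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["method"], link["href"]
    return None

print(find_action(collection, "nuke-collection"))  # ('POST', '/vms/nuke')
```

The client still has to know what "nuke-collection" means, but that knowledge lives in the media type / link-relation vocabulary rather than in URI conventions or altered protocol semantics.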
There seems to be a pretty big conceptual and practical barrier of entry to HATEOAS in machine-to-machine interaction. We simply don't have examples of HATEOAS done fully. Why is HATEOAS even worthwhile? What's the need for HATEOAS in machine-to-machine APIs? Existing SOAP and "REST" APIs don't really take advantage of HATEOAS for some reason. If you're observing all of the REST constraints but HATEOAS, you may not have a RESTful architecture, but you have something tremendously useful. The best that I can come up with is that HATEOAS will help long-term evolution of APIs, especially the type of intricate fine-grained APIs that enterprises favor. It will also help with "serendipity" - meaning that it's easier in the long run to reuse a distributed REST solution than other types of distributed solutions. While those are interesting benefits, they are really difficult to sell... I can't really sell "something good will happen later if you choose REST." Assuming that the practical barriers of entry are removed, what practical benefits will we see? -Solomon
On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...> wrote: > [snip] > Assuming that the practical barriers of entry are removed, what practical > benefits will we see? > I know exactly where you are coming from with these questions ... I felt the same way until recently. I've designed several REST APIs over the last couple of years, but up until the most recent one, I designed and documented them in the "typical" way, describing the URI structure of the application and letting the client figure out what to send when. My most recent effort is contributing to the design of the REST architecture for the Sun Cloud API[1] to control virtual machines and so on. In addition, I'm very focused on writing client language bindings for this API in multiple languages (Ruby, Python, Java) ... so I get a first-hand feel for programming to this API at a very low level. We started from the presumption that the service would publish only *one* well-known URI (returning a "cloud" representation containing representations for, and/or URI links to representations for, all the cloud resources that are accessible to the calling user). Every other URI in the entire system (including all those that do state changes) is discovered by examining these representations. Even in the early days, I can see some significant, practical, short term benefits we have gained from taking this approach: * REDUCED CLIENT CODING ERRORS. Looking back at all the REST client side interfaces that I, or people I work with, have built, about 90% of the bugs have been in the construction of the right URIs for the server. Typical mistakes are leaving out path segments, getting them in the wrong order, or forgetting to URL encode things. All this goes away when the server hands you exactly the right URI to use for every circumstance. * REDUCED INVALID STATE TRANSITION CALLS. 
When the client decides which URI to call and when, they run the risk of attempting to request state transitions that are not valid for the current state of the server side resource. An example from my problem domain ... it's not allowed to "start" a virtual machine (VM) until you have "deployed" it. The server knows about URIs to initiate each of the state changes (via a POST), but the representation of the VM lists only the URIs for state transitions that are valid from the current state. This makes it extremely easy for the client to understand that trying to start a VM that hasn't been deployed yet is not legal, because there will be no corresponding URI in the VM representation. * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS. At any given time, the client of any REST API is going to be programmed with *some* assumptions about what the system can do. But, if you document a restriction to "pay attention to only those aspects of the representation that you know about", plus a server side discipline to add things later that don't disrupt previous behavior, you can evolve APIs fairly quickly without breaking all clients, or having to support multiple versions of the API simultaneously on your server. You don't have to wait years for serendipity benefits :-). Especially compared to something like SOAP where the syntax of your representations is versioned (in the WSDL), so you have to mess with the clients on every single change. Having drunk the HATEOAS koolaid now, I would have a really hard time going back :-). Craig McClanahan [1] http://kenai.com/projects/suncloudapis/pages/Home
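The state-dependent links Craig describes can be sketched in a few lines: the representation carries only the transitions that are currently legal, so "can I start this VM?" reduces to "is there a start link?". The field and link names below are illustrative, not the actual Sun Cloud wire format:

```python
# Hypothetical VM representations in the spirit of the design above:
# the server includes links only for transitions legal in the current state.
deployed_vm = {
    "name": "web01",
    "model_status": "DEPLOYED",
    "links": {"start": "/vms/33333/start", "undeploy": "/vms/33333/undeploy"},
}
undeployed_vm = {
    "name": "web02",
    "model_status": "UNDEPLOYED",
    "links": {"deploy": "/vms/44444/deploy"},
}

def can(vm, transition):
    """The client never guesses URIs or re-implements the state machine:
    a transition is possible exactly when its link is present."""
    return transition in vm["links"]

print(can(deployed_vm, "start"))    # True
print(can(undeployed_vm, "start"))  # False: no link, so not legal yet
```

When the client wants to start a VM it POSTs to `vm["links"]["start"]`; if the link is absent, the transition simply is not offered, and no URI-construction bug can produce an invalid request.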
Missed the reply-to on this one. /Rickard
On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...> wrote: > There seems to be a pretty big conceptual and practical barrier of entry > to HATEOAS in machine-to-machine interaction. We simply don't have examples > of HATEOAS done fully. > Why is HATEOAS even worth while? > > What's the need for HATEOAS in machine-to-machine APIs? Existing SOAP and > "REST" APIs don't really take advantage of HATEOAS for some reason. If > you're observing all of the REST constraints but HATEOAS, you're may not > have a RESTful architecture, but you have something tremendously useful. > SOAP is stateless, but many services are not, so the WS-* architecture has to deal with some of the same problems REST deals with. You have WS-Addressing (and friends) that allow you to address a given state behind the service, and all sorts of WS-Addressing exchanges that are (to make a point) scripted HATEOAS. Some operations are only material at a given state, some states create more states (e.g. at some point an order is joined by shipment tracking) so you need to reason about these. Hence WS-BPEL and friends. It's a different approach but, broadly speaking, to the same problem HATEOAS solves: knowing what actions are relevant at any given state and how to perform them. Assaf > > > The best that I can come up with is that HATEOAS will help long-term > evolution of APIs, especially the type of intricate fine-grained APIs that > enterprises favor. It will also help with "serendipity" - meaning that it's > easier in the long run to reuse a distributed REST solution than other types > of distributed solutions. While those are interesting benefits, they are > really difficult to sell... I can't really sell "something good will happen > later if you choose REST." > > Assuming that the practical barriers of entry are removed, what practical > benefits will we see? > > -Solomon > > > >
Hi Craig, That is a great summary. The key point is leakage of business rules. In the absence of hyperlinks, the server will have to explain to the clients the rules under which a given transition is valid so that clients can initiate them. By providing hyperlinks, the server can hide those business rules from clients. The clients will still have to know how to make transitions, but not when and why. In other words, HATEOAS helps reduce abstraction leakage. Subbu On Mar 31, 2009, at 5:59 PM, Craig McClanahan wrote: > On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...> > wrote: > >> [snip] >> Assuming that the practical barriers of entry are removed, what >> practical >> benefits will we see? >> > > I know exactly where you are coming from with these questions ... I > felt the same way until recently. I've designed several REST APIs > over the last couple of years, but up until the most recent one, I > designed and documented them in the "typical" way, describing the URI > structure of the application and letting the client figure out what to > send when. My most recent effort is contributing to the design of the > REST architecture for the Sun Cloud API[1] to control virtual > machines and so on. In addition, I'm very focused on writing client > language bindings for this API in multiple languages (Ruby, Python, > Java) ... so I get a first hand feel for programming to this API at a > very low level. > > We started from the presumption that the service would publish only > *one* well-known URI (returning a "cloud" representation containing > representations for, and/or URI links to representations for, all the > cloud resources that are accessible to the calling user). Every other > URI in the entire system (including all those that do state changes) > are discovered by examining these representations. 
Even in the early > days, I can see some significant, practical, short term benefits we > have gained from taking this approach: > > * REDUCED CLIENT CODING ERRORS. Looking back at all the REST client > side interfaces > that I, or people I work with, have built, about 90% of the bugs > have been in the construction > of the right URIs for the server. Typical mistakes are leaving out > path segments, getting them > in the wrong order, or forgetting to URL encode things. All this > goes away when the server > hands you exactly the right URI to use for every circumstance. > > * REDUCED INVALID STATE TRANSITION CALLS. When the client decides > which URI to call and > when, they run the risk of attempting to request state transitions > that are not valid for the current > state of the server side resource. An example from my problem > domain ... it's not allowed to > "start" a virtual machine (VM) until you have "deployed" it. The > server knows about URIs to > initiate each of the state changes (via a POST), but the > representation of the VM lists only the > URIs for state transitions that are valid from the current state. > This makes it extremely easy > for the client to understand that trying to start a VM that hasn't > been deployed yet is not legal, > because there will be no corresponding URI in the VM representation. > > * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS. > At any given time, the client of any REST API is going to be > programmed with *some* assumptions > about what the system can do. But, if you document a restriction to > "pay attention to only those > aspects of the representation that you know about", plus a server > side discipline to add things later > that don't disrupt previous behavior, you can evolve APIs fairly > quickly without breaking all clients, > or having to support multiple versions of the API simultaneously on > your server. You don't have to > wait years for serendipity benefits :-). 
Especially compared to > something like SOAP where the > syntax of your representations is versioned (in the WSDL), so you > have to mess with the clients > on every single change. > > Having drunk the HATEOAS koolaid now, I would have a really hard time > going back :-). > > Craig McClanahan > > [1] http://kenai.com/projects/suncloudapis/pages/Home --- http://subbu.org
The last point really hits home for me. If I understand it correctly, as a client consuming an API that adheres to what Craig is saying, I can rely on the fact that the server might change a given URI, say due to a bug fix or a newly deployed version, while my client keeps working without breaking. I would guess the server implementation would document this fact, especially in the case of a newer version that changes a URI, but I find this particularly beneficial to clients for exactly the reason Craig gives: less worry about my client breaking due to a server URI change. As long as the expected functionality and response remain the same, we're good.
I never considered the 2nd point. Very interesting indeed. Craig did help me with something like this... pagination. In a search engine app, for example, a consumer could get results 1-100, 101-200, etc. back. By returning the proper URI to get the next and/or previous series of results, I myself do not need to figure out the values to send in the request... I can simply pluck the URI the server returns for the next/previous, and use it with assurance.
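The pagination pattern described above can be sketched as a client that only ever follows the server-supplied "next" link, never computing offsets itself (URIs and page shape are invented for illustration):

```python
# Toy paginated search results keyed by URI. Each page carries the hits
# plus a link to the next page (or None at the end), so the client
# walks links instead of constructing ?page=N query strings.
pages = {
    "/search?q=rest": {"hits": [1, 2, 3], "next": "/search?q=rest&page=2"},
    "/search?q=rest&page=2": {"hits": [4, 5], "next": None},
}

def all_hits(fetch, first_uri):
    """Collect hits from every page, following only server-given links."""
    uri, hits = first_uri, []
    while uri is not None:
        page = fetch(uri)
        hits.extend(page["hits"])
        uri = page["next"]  # server-supplied URI, never constructed
    return hits

print(all_hits(pages.get, "/search?q=rest"))  # [1, 2, 3, 4, 5]
```

Here `fetch` is a stand-in for an HTTP GET; in a real client the same loop would issue requests and parse representations, but the control flow is identical.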
________________________________
From: Craig McClanahan <craigmcc@...>
To: Solomon Duskis <sduskis@...>
Cc: Rest List <rest-discuss@yahoogroups.com>
Sent: Tuesday, March 31, 2009 5:59:03 PM
Subject: Re: [rest-discuss] Why HATEOAS?
On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...> wrote:
> [snip]
> Assuming that the practical barriers of entry are removed, what practical
> benefits will we see?
>
I know exactly where you are coming from with these questions ... I
felt the same way until recently. I've designed several REST APIs
over the last couple of years, but up until the most recent one, I
designed and documented them in the "typical" way, describing the URI
structure of the application and letting the client figure out what to
send when. My most recent effort is contributing to the design of the
REST architecture for the Sun Cloud API[1] to control virtual
machines and so on. In addition, I'm very focused on writing client
language bindings for this API in multiple languages (Ruby, Python,
Java) ... so I get a first hand feel for programming to this API at a
very low level.
We started from the presumption that the service would publish only
*one* well-known URI (returning a "cloud" representation containing
representations for, and/or URI links to representations for, all the
cloud resources that are accessible to the calling user). Every other
URI in the entire system (including all those that do state changes)
are discovered by examining these representations. Even in the early
days, I can see some significant, practical, short term benefits we
have gained from taking this approach:
* REDUCED CLIENT CODING ERRORS. Looking back at all the REST client
side interfaces
that I, or people I work with, have built, about 90% of the bugs
have been in the construction
of the right URIs for the server. Typical mistakes are leaving out
path segments, getting them
in the wrong order, or forgetting to URL encode things. All this
goes away when the server
hands you exactly the right URI to use for every circumstance.
* REDUCED INVALID STATE TRANSITION CALLS. When the client decides
which URI to call and
when, they run the risk of attempting to request state transitions
that are not valid for the current
state of the server side resource. An example from my problem
domain ... it's not allowed to
"start" a virtual machine (VM) until you have "deployed" it. The
server knows about URIs to
initiate each of the state changes (via a POST), but the
representation of the VM lists only the
URIs for state transitions that are valid from the current state.
This makes it extremely easy
for the client to understand that trying to start a VM that hasn't
been deployed yet is not legal,
because there will be no corresponding URI in the VM representation.
* FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS.
At any given time, the client of any REST API is going to be
programmed with *some* assumptions
about what the system can do. But, if you document a restriction to
"pay attention to only those
aspects of the representation that you know about", plus a server
side discipline to add things later
that don't disrupt previous behavior, you can evolve APIs fairly
quickly without breaking all clients,
or having to support multiple versions of the API simultaneously on
your server. You don't have to
wait years for serendipity benefits :-). Especially compared to
something like SOAP where the
syntax of your representations is versioned (in the WSDL), so you
have to mess with the clients
on every single change.
Having drunk the HATEOAS koolaid now, I would have a really hard time
going back :-).
Craig McClanahan
[1] http://kenai.com/projects/suncloudapis/pages/Home
Excellent explanation, you should publish that somewhere for easy reference. I think this will give me the final argument to convince my boss to give me the extra-time i need to fully implement hateoas in our infrastructure... On Apr 1, 2009 1:59am, Craig McClanahan <craigmcc@...> wrote: > On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis sduskis@...> wrote: > > [snip] > > Assuming that the practical barriers of entry are removed, what > practical > > benefits will we see? > > > I know exactly where you are coming from with these questions ... I > felt the same way until recently. I've designed several REST APIs > over the last couple of years, but up until the most recent one, I > designed and documented them in the "typical" way, describing the URI > structure of the application and letting the client figure out what to > send when. My most recent effort is contributing to the design of the > REST architecture for the Sun Cloud API[1] to control virtual > machines and so on. In addition, I'm very focused on writing client > language bindings for this API in multiple languages (Ruby, Python, > Java) ... so I get a first hand feel for programming to this API at a > very low level. > We started from the presumption that the service would publish only > *one* well-known URI (returning a "cloud" representation containing > representations for, and/or URI links to representations for, all the > cloud resources that are accessible to the calling user). Every other > URI in the entire system (including all those that do state changes) > are discovered by examining these representations. Even in the early > days, I can see some significant, practical, short term benefits we > have gained from taking this approach: > * REDUCED CLIENT CODING ERRORS. Looking back at all the REST client > side interfaces > that I, or people I work with, have built, about 90% of the bugs > have been in the construction > of the right URIs for the server. 
Typical mistakes are leaving out > path segments, getting them > in the wrong order, or forgetting to URL encode things. All this > goes away when the server > hands you exactly the right URI to use for every circumstance. > * REDUCED INVALID STATE TRANSITION CALLS. When the client decides > which URI to call and > when, they run the risk of attempting to request state transitions > that are not valid for the current > state of the server side resource. An example from my problem > domain ... it's not allowed to > "start" a virtual machine (VM) until you have "deployed" it. The > server knows about URIs to > initiate each of the state changes (via a POST), but the > representation of the VM lists only the > URIs for state transitions that are valid from the current state. > This makes it extremely easy > for the client to understand that trying to start a VM that hasn't > been deployed yet is not legal, > because there will be no corresponding URI in the VM representation. > * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS. > At any given time, the client of any REST API is going to be > programmed with *some* assumptions > about what the system can do. But, if you document a restriction to > "pay attention to only those > aspects of the representation that you know about", plus a server > side discipline to add things later > that don't disrupt previous behavior, you can evolve APIs fairly > quickly without breaking all clients, > or having to support multiple versions of the API simultaneously on > your server. You don't have to > wait years for serendipity benefits :-). Especially compared to > something like SOAP where the > syntax of your representations is versioned (in the WSDL), so you > have to mess with the clients > on every single change. > Having drunk the HATEOAS koolaid now, I would have a really hard time > going back :-). > Craig McClanahan > [1] http://kenai.com/projects/suncloudapis/pages/Home >
On Tue, 2009-03-31 at 17:59 -0700, Craig McClanahan wrote:
> On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...>
> wrote:
>
> > [snip]
> > Assuming that the practical barriers of entry are removed, what
> practical
> > benefits will we see?
> >
>
> I know exactly where you are coming from with these questions ...
[snip]
Key advantages:
> * REDUCED CLIENT CODING ERRORS.
>
> * REDUCED INVALID STATE TRANSITION CALLS.
>
> * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS.
> Having drunk the HATEOAS koolaid now, I would have a really hard time
> going back :-).
Hi Craig,
This was a great post. I'm looking at doing something similar for an
application as well, but, having looked at the API for the Sun Cloud, I
was planning on taking it a bit further.
One thing that I see missing is "full disclosure" of the operations
(verbs) to be used as well as differentiation between actions vs.
information.
Don't get me wrong, I think the API you have is pretty good! :)
However, the only way that I could think of doing what I'm talking about
was to define some kind of envelope, or at least a series of elements
that were influenced by or imported directly the XHTML forms (and/or
possibly XForms) elements to identify what actions were possible for a
given resource. That way, you'd have the full HATEOAS in the message
and the clients wouldn't have to know anything except how to interpret
the markup. I guess I should also say that I'm looking at XML
representations here rather than JSON.
I was planning on posting some thoughts on this anyway, but the timing
of this post was too good to pass up.
What I was thinking was something like:
<ActionEnvelope>
<Header>
<ActionList>
<Action id="action1" href="uri" method="POST">Human readable description of the action here</Action>
<Action id="delete" href="uri" method="DELETE">Delete this resource</Action>
...
</ActionList>
</Header>
<Body>
<!-- any content can go here, and client processing will be based on
either the elements or the namespace URI(s) used in the root child
element -->
</Body>
</ActionEnvelope>
Now, before everyone gets all fussy and says it's too much like SOAP, it
truly isn't. The only thing in common is that it uses an envelope.
The other thing to note is that the total transitions available to the
client are the sum of any in-lined (like FORM submissions, regular
hyperlink traversal, etc.) and then any of the other, "meta" actions
possible for the system as a whole defined in the envelope's header.
I went through several iterations of putting them in the "real"
resource vs. in the header, but this is where I'm thinking at the
moment, because it allows you to easily process the resource for both
human and machine interaction (the action list becomes a menu, for
example, if the ultimate user agent wants (X)HTML -- this can be
accomplished a number of different ways).
I was wondering if you guys went through this line of thinking with your
API design and discarded it, or if it was deemed either unnecessary or
too complicated.
Of course, with this approach your automated user agent still needs to
understand the semantics of the action id's, but this would be published
as part of the API specification, separate from the specification for
the underlying content schema(s), and the inputs required would be fully
supplied after making the request defined by the action.
This isn't terribly efficient, because an editing operation for the
resource might look like:
Step 1) Get the resource URI
Step 2) Process the resource XML, recording the actions
Step 3) If an action with ID "edit" exists in the header, but no form
exists in the body, make request for "edit" resource
Step 4) Process the resource XML looking for "resource editing" mark-up
(defined by the API spec, probably a normal FORM in the envelope body)
Step 5) Supply available form values to be changed (also prevents
changing of read-only resource properties)
Step 6) Submit FORM
Step 7) Process HTTP server response
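Steps 2 and 4 above, processing the resource XML and recording the actions, might look like this with Python's standard ElementTree. The envelope is the hypothetical markup proposed above; the ids, URIs, and methods are illustrative:

```python
import xml.etree.ElementTree as ET

# Hypothetical ActionEnvelope as proposed: the client learns every
# available action (id, method, URI) from the header, nothing out-of-band.
doc = """\
<ActionEnvelope>
  <Header>
    <ActionList>
      <Action id="edit" href="/vms/33333/edit" method="GET">Edit this resource</Action>
      <Action id="delete" href="/vms/33333" method="DELETE">Delete this resource</Action>
    </ActionList>
  </Header>
  <Body/>
</ActionEnvelope>
"""

def actions(xml_text):
    """Map action id -> (method, href) from the envelope header."""
    root = ET.fromstring(xml_text)
    return {
        a.get("id"): (a.get("method"), a.get("href"))
        for a in root.findall("./Header/ActionList/Action")
    }

acts = actions(doc)
print(acts["edit"])      # ('GET', '/vms/33333/edit')
print("edit" in acts)    # True: step 3's check, so fetch the edit resource
```

Step 3's decision then reduces to a membership test on the returned map, and the user agent never needs to know URI structure, only the published action ids.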
Granted, this is certainly not as efficient as:
PUT /vms/33333
Host: example.com
Authorization: Basic xxxxxxxxxxxxxxxxxxx
Accept: application/vnd.com.sun.cloud.VM+json
Content-Length: nnn
Content-Type: application/vnd.com.sun.cloud.VM+json
X-Cloud-Client-Specification-Version: 0.1
{
"description" : "This is the new description"
}
But how does the user agent know it can do this from the original
resource?
HTTP/1.1 200 OK
Content-Type: application/vnd.com.sun.cloud.VM+json
Content-Length: nnn
{
"name" : "web01",
"uri" : "http://example.com/vms/33333"
"run_status" : "RUNNING",
"model_status" : "DEPLOYED",
"description" : "This is the old description"
...
"back_up" : "http://example.com/back-up?vm=33333"
"attach" : "http://example.com/attach?vm=33333",
"detach" : "http://example.com/detach-ip?vm=33333",
...
}
I realize the proposal above isn't perfect either, but it's really still
in the embryonic phases at the moment. However, I plan on actually
working through much of the detail over the next few months, so any
feedback (good, bad or otherwise) is welcome.
The Sun Cloud API is one of the more interesting ones that I've seen
recently, and I'm sure there's lots to learn from it.
Nice work.
ast
--
Andrew S. Townley <ast@...>
http://atownley.org
Excellent points. I'd like to add that following links instead of
constructing URIs also enables an evolutionary change of distribution
of resources across multiple servers or domains – something I've found
extremely valuable in scaling a system from deployment to various
(stages of) production environments.
Stefan
On 01.04.2009, at 02:59, Craig McClanahan wrote:
> On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...>
> wrote:
>
> > [snip]
> > Assuming that the practical barriers of entry are removed, what
> practical
> > benefits will we see?
> >
>
> I know exactly where you are coming from with these questions ... I
> felt the same way until recently. I've designed several REST APIs
> over the last couple of years, but up until the most recent one, I
> designed and documented them in the "typical" way, describing the URI
> structure of the application and letting the client figure out what to
> send when. My most recent effort is contributing to the design of the
> REST architecture for the Sun Cloud API[1] to control virtual
> machines and so on. In addition, I'm very focused on writing client
> language bindings for this API in multiple languages (Ruby, Python,
> Java) ... so I get a first hand feel for programming to this API at a
> very low level.
>
> We started from the presumption that the service would publish only
> *one* well-known URI (returning a "cloud" representation containing
> representations for, and/or URI links to representations for, all the
> cloud resources that are accessible to the calling user). Every other
> URI in the entire system (including all those that do state changes)
> is discovered by examining these representations. Even in the early
> days, I can see some significant, practical, short term benefits we
> have gained from taking this approach:
>
> * REDUCED CLIENT CODING ERRORS. Looking back at all the REST client
> side interfaces
> that I, or people I work with, have built, about 90% of the bugs
> have been in the construction
> of the right URIs for the server. Typical mistakes are leaving out
> path segments, getting them
> in the wrong order, or forgetting to URL encode things. All this
> goes away when the server
> hands you exactly the right URI to use for every circumstance.
>
> * REDUCED INVALID STATE TRANSITION CALLS. When the client decides
> which URI to call and
> when, they run the risk of attempting to request state transitions
> that are not valid for the current
> state of the server side resource. An example from my problem
> domain ... it's not allowed to
> "start" a virtual machine (VM) until you have "deployed" it. The
> server knows about URIs to
> initiate each of the state changes (via a POST), but the
> representation of the VM lists only the
> URIs for state transitions that are valid from the current state.
> This makes it extremely easy
> for the client to understand that trying to start a VM that hasn't
> been deployed yet is not legal,
> because there will be no corresponding URI in the VM representation.
>
> * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS.
> At any given time, the client of any REST API is going to be
> programmed with *some* assumptions
> about what the system can do. But, if you document a restriction to
> "pay attention to only those
> aspects of the representation that you know about", plus a server
> side discipline to add things later
> that don't disrupt previous behavior, you can evolve APIs fairly
> quickly without breaking all clients,
> or having to support multiple versions of the API simultaneously on
> your server. You don't have to
> wait years for serendipity benefits :-). Especially compared to
> something like SOAP where the
> syntax of your representations is versioned (in the WSDL), so you
> have to mess with the clients
> on every single change.
>
> Having drunk the HATEOAS koolaid now, I would have a really hard time
> going back :-).
>
> Craig McClanahan
>
> [1] http://kenai.com/projects/suncloudapis/pages/Home
>
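The single-well-known-URI approach Craig describes can be sketched roughly like this (Python; the "cloud"/"vms" field names are invented for illustration, not the actual Sun Cloud representation):

```python
def vm_index(cloud):
    """Given the entry-point ("cloud") representation, build a
    name -> URI index.  Every URI comes from the server; the client
    performs no URI construction of its own."""
    return {vm["name"]: vm["uri"] for vm in cloud.get("vms", [])}

# Hypothetical entry-point representation, as fetched from the one
# published URI:
cloud = {
    "vms": [
        {"name": "web01", "uri": "http://example.com/vms/33333"},
        {"name": "db01",  "uri": "http://example.com/vms/44444"},
    ]
}
index = vm_index(cloud)
```

Because the client only ever stores URIs the server handed out, the whole class of "built the wrong URI" bugs Craig mentions disappears, and the server is free to move resources without breaking anyone.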
Craig McClanahan wrote:
>
> * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS.
> At any given time, the client of any REST API is going to be
> programmed with *some* assumptions about what the system can do.
> But, if you document a restriction to "pay attention to only those
> aspects of the representation that you know about", plus a server
> side discipline to add things later that don't disrupt previous
> behavior, you can evolve APIs fairly quickly without breaking all
> clients, or having to support multiple versions of the API
> simultaneously on your server. You don't have to wait years for
> serendipity benefits :-). Especially compared to something like SOAP
> where the syntax of your representations is versioned (in the WSDL),
> so you have to mess with the clients on every single change.
>

How would you initially define and then evolve your XML schema in such
an environment? I know Atom allows arbitrary attributes and elements in
its schema so that it can easily evolve or allow custom data to be
appended. What about validation though? Too bad XML Schema isn't
polymorphic.

Another thought I had on HATEOAS was: what about making the links a part
of your schema? I.e., specifying in your schema the exact relationships
and types (but not URIs) that will be made available. If you combine
this with HTTP content negotiation, the client can guarantee a specific
version of its conversation or business process. It also allows the
server to tell the client it doesn't support that type of interaction
anymore.

BTW, thanks a lot for the explanation. This will help me greatly when
explaining HATEOAS to colleagues, users, and customers.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
--- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote:
> If you're observing all of the REST constraints but HATEOAS, you may
> not have a RESTful architecture, but you have something tremendously
> useful.

Actually, I'd like to turn this question around.

I've always been confused by what folks who design systems that comply
with all of REST except HATEOAS think they are getting out of it. It
makes no sense to me. The only benefit I can think of is that
intermediaries have some insight into what is going on. From a
practical perspective this means you get caching (yes, theoretically it
can be more than caching, but most systems stop there). Is that it?

What do folks think? What are the benefits of a REST - HATEOAS based
architecture?
Good summary. A perspective that helped me understand HATEOAS is to
think of a system as a combination of multiple state machines. These
state machines are distributed and flexible. Hypermedia is the
representation of such distributed state machines. Hypermedia makes
asynchronous conversations among multiple parties possible and easy.

Cheers,
Dong

On Tue, Mar 31, 2009 at 6:59 PM, Craig McClanahan <craigmcc@...> wrote:
> On Tue, Mar 31, 2009 at 5:01 PM, Solomon Duskis <sduskis@...> wrote:
>
>> [snip]
>> Assuming that the practical barriers of entry are removed, what
>> practical benefits will we see?
>
> I know exactly where you are coming from with these questions ... I
> felt the same way until recently. I've designed several REST APIs
> over the last couple of years, but up until the most recent one, I
> designed and documented them in the "typical" way, describing the URI
> structure of the application and letting the client figure out what
> to send when. My most recent effort is contributing to the design of
> the REST architecture for the Sun Cloud API[1] to control virtual
> machines and so on. In addition, I'm very focused on writing client
> language bindings for this API in multiple languages (Ruby, Python,
> Java) ... so I get a first hand feel for programming to this API at a
> very low level.
>
> We started from the presumption that the service would publish only
> *one* well-known URI (returning a "cloud" representation containing
> representations for, and/or URI links to representations for, all the
> cloud resources that are accessible to the calling user). Every
> other URI in the entire system (including all those that do state
> changes) is discovered by examining these representations. Even in
> the early days, I can see some significant, practical, short term
> benefits we have gained from taking this approach:
>
> * REDUCED CLIENT CODING ERRORS. Looking back at all the REST client
> side interfaces that I, or people I work with, have built, about 90%
> of the bugs have been in the construction of the right URIs for the
> server. Typical mistakes are leaving out path segments, getting them
> in the wrong order, or forgetting to URL encode things. All this
> goes away when the server hands you exactly the right URI to use for
> every circumstance.
>
> * REDUCED INVALID STATE TRANSITION CALLS. When the client decides
> which URI to call and when, they run the risk of attempting to
> request state transitions that are not valid for the current state
> of the server side resource. An example from my problem domain ...
> it's not allowed to "start" a virtual machine (VM) until you have
> "deployed" it. The server knows about URIs to initiate each of the
> state changes (via a POST), but the representation of the VM lists
> only the URIs for state transitions that are valid from the current
> state. This makes it extremely easy for the client to understand
> that trying to start a VM that hasn't been deployed yet is not
> legal, because there will be no corresponding URI in the VM
> representation.
>
> * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS.
> At any given time, the client of any REST API is going to be
> programmed with *some* assumptions about what the system can do.
> But, if you document a restriction to "pay attention to only those
> aspects of the representation that you know about", plus a server
> side discipline to add things later that don't disrupt previous
> behavior, you can evolve APIs fairly quickly without breaking all
> clients, or having to support multiple versions of the API
> simultaneously on your server. You don't have to wait years for
> serendipity benefits :-). Especially compared to something like SOAP
> where the syntax of your representations is versioned (in the WSDL),
> so you have to mess with the clients on every single change.
>
> Having drunk the HATEOAS koolaid now, I would have a really hard time
> going back :-).
>
> Craig McClanahan
>
> [1] http://kenai.com/projects/suncloudapis/pages/Home

--
http://dongnotes.blogspot.com/
--- In rest-discuss@yahoogroups.com, "wahbedahbe" <andrew.wahbe@...> wrote:
>
> --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@> wrote:
> > If you're observing all of the REST constraints but HATEOAS, you
> > may not have a RESTful architecture, but you have something
> > tremendously useful.
>
> Actually, I'd like to turn this question around.
>
> I've always been confused by what folks who design systems that
> comply with all of REST except HATEOAS think they are getting out of
> it. It makes no sense to me. The only benefit I can think of is that
> intermediaries have some insight into what is going on. From a
> practical perspective this means you get caching (yes, theoretically
> it can be more than caching but most systems stop there). Is that it?
>
> What do folks think? What are the benefits of a REST - HATEOAS based
> architecture?

Wouldn't this have to be compared with a totally non-REST solution like
SOAP-based web services? If yes, then quick benefits that come to mind
are (1) a much simpler and cleaner API and (2) easier integration. The
point (IMO) is that you can't really get the full benefits of a
REST-based solution (which implicitly includes HATEOAS) if all the
principles/properties are not being followed/used, but there are
incremental benefits as more and more of the properties are included in
the design.

Eb
wahbedahbe wrote:
> What do folks think? What are the benefits of a REST - HATEOAS based
> architecture?

A message-based rather than RPC-based architecture. HTTP content
negotiation. All the HTTP caching semantics that come as a result of a
constrained interface. All pretty huge. HATEOAS is only a piece of the
pie IMO.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
I think there are multiple levels of HATEOAS: One is simple linking of
information - easy to do, very obvious benefits (of course one could
use a different name for this). The next level is to drive the app
state through links, which is quite a bit harder, yet yields more
significant benefits.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 01.04.2009, at 17:00, wahbedahbe wrote:
> --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...>
> wrote:
> > If
> > you're observing all of the REST constraints but HATEOAS, you're
> may not
> > have a RESTful architecture, but you have something tremendously
> useful.
> >
>
> Actually, I'd like to turn this question around.
>
> I've always been confused by what folks who design systems that
> comply with all of REST except HATEOAS think they are getting out of
> it. It makes no sense to me. The only benefit I can think of is that
> intermediaries have some insight into what is going on. From a
> practical perspective this means you get caching (yes, theoretically
> it can be more than caching but most systems stop there). Is that it?
>
> What do folks think? What are the benefits of a REST - HATEOAS based
> architecture?
>
>
On Wed, Apr 1, 2009 at 2:11 AM, <amsmota@...> wrote:
> Excellent explanation, you should publish that somewhere for easy reference.
> I think this will give me the final argument to convince my boss to give me
> the extra-time i need to fully implement hateoas in our infrastructure...
Good idea ... should have done that last night:
http://blogs.sun.com/craigmcc/entry/why_hateoas
Craig
On Wed, Apr 1, 2009 at 6:40 AM, Bill Burke <bburke@...> wrote:
> Craig McClanahan wrote:
>>
>> * FINE GRAINED EVOLUTION WITHOUT (NECESSARILY) BREAKING OLD CLIENTS.
>> At any given time, the client of any REST API is going to be
>> programmed with *some* assumptions about what the system can do.
>> But, if you document a restriction to "pay attention to only those
>> aspects of the representation that you know about", plus a server
>> side discipline to add things later that don't disrupt previous
>> behavior, you can evolve APIs fairly quickly without breaking all
>> clients, or having to support multiple versions of the API
>> simultaneously on your server. You don't have to wait years for
>> serendipity benefits :-). Especially compared to something like
>> SOAP where the syntax of your representations is versioned (in the
>> WSDL), so you have to mess with the clients on every single change.
>
> How would you initially define and then evolve your XML schema in
> such an environment? I know Atom allows arbitrary attributes and
> elements in its schema so that it can easily evolve or allow custom
> data to be appended. What about validation though? Too bad XML
> Schema isn't polymorphic.

The snarky answer would be "what schema? we don't need no stinkin'
schema" :-). If you are using XML message formats, though, schemas are
pretty useful ... not just for validation, but also for code generation
of client and server side stubs (such as with Java's JAXB).

The situation I would focus on is a client that is programmed to handle
version "M" of the schema, and you want to update it for version "N".
If you can limit yourself to the following changes:

* Any new elements must be optional (minOccurs="0")
* No existing required elements can be modified to be optional
* Server side processing accepts this representation and takes the
missing optional element from an old client (who obviously won't be
sending it) to have the same semantic meaning as "assume a default
value"

If you can do this, and if you're willing to *not* embed a version
number in your schema identifier (which is probably too radical for
many people -- hence your quite accurate complaints about
polymorphism), you can cover a pretty surprising percentage of the
typical evolution scenarios that I have seen.

On the other hand, there are going to be some kinds of changes where
this doesn't work. But embedding links in the HATEOAS manner can still
help you. If the server knows what version the client is programmed
for (in the Sun Cloud API, we allow but do not require the client to
specify this in an HTTP header), it can send back different URIs (to
representations based on different versions of the schema). The server
has to be willing to support both, but you can deal with that on the
server end in a bunch of different ways (server pool "A" supports
version "M" and server pool "B" serves version "N", or write server
code that understands both formats, or ...) without the client having
to worry about changing their URI generation logic to match the (often
changed) rules for version "N".

> Another thought I had on HATEOAS was: what about making the links a
> part of your schema? I.e., specifying in your schema the exact
> relationships and types (but not URIs) that will be made available.
> If you combine this with HTTP content negotiation, the client can
> guarantee a specific version of its conversation or business
> process. It also allows the server to tell the client it doesn't
> support that type of interaction anymore.

Atom goes in this direction with the <link> element (which I would
evaluate as a candidate representation of relationships any time I was
looking at an XML representation, simply because it is pretty familiar
to people), where you can optionally specify things like the media type
of the response you can expect. But doesn't the OPTIONS command tell
you what verbs a particular URI supports?

Craig

> BTW, thanks a lot for the explanation. This will help me greatly when
> explaining HATEOAS to colleagues, users, and customers.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
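Craig's evolution rules amount to a "tolerant reader" on the client side. A sketch of what that discipline looks like in code (Python; the field names and defaults are assumptions for illustration, not the actual Sun Cloud schema):

```python
def read_vm(doc):
    """Parse a VM representation while tolerating schema evolution:
    ignore fields we don't know about, and treat optional fields an
    older server omits as having a default value."""
    defaults = {
        "description": "",        # optional field, defaulted when absent
        "run_status": "UNKNOWN",  # optional field, defaulted when absent
    }
    vm = {"name": doc["name"]}    # assumed required in every version
    for field, default in defaults.items():
        vm[field] = doc.get(field, default)
    return vm                     # extra fields in doc are silently ignored

# A version-"N" server may send fields a version-"M" client has never
# heard of; the client's view is unaffected:
old_client_view = read_vm({"name": "web01", "new_field_in_version_N": 42})
```

Both halves of the contract matter: the client ignores unknowns, and the server never turns an optional field into a required one.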
Snipping and interspersing a few comments:
On Wed, Apr 1, 2009 at 2:51 AM, Andrew S. Townley <ast@...> wrote:
> This was a great post. I'm looking at doing something similar for an
> application as well, but, having looked at the API for the Sun Cloud, I
> was planning on taking it a bit further.
>
> One thing that I see missing is "full disclosure" of the operations
> (verbs) to be used as well as differentiation between actions vs.
> information.
In the specification, this is described on the various pages like
<http://kenai.com/projects/suncloudapis/pages/CloudAPIVMRequests>,
which describes the set of operations that a VM representation (or,
more properly, a URI included in a VM representation) supports. On the
wire, you can use the HTTP OPTIONS command to ask the server what verbs
are supported by that URI. For example, the URI you get for the "Attach
VM to Public Address or VNet" operation will tell you that it only
supports POST.
What we are not including in the representations, at least right now,
is media-type-related restrictions. Partly, that is because many of
the operations are in fact polymorphic (what happens depends on what
media type you send in a request), and partly (at least in my view)
because client applications using an API are going to have *some*
semantic understanding of what is going on, so they will already be
"hard coding", in a sense, which representations to send, and don't
necessarily need to be told. And just knowing the media type still
doesn't help you understand which fields have which impacts. This is
certainly a design principle about which people will have different
opinions, but it's the way we have gone so far.
>
> Don't get me wrong, I think the API you have is pretty good! :)
>
> However, the only way that I could think of doing what I'm talking about
> was to define some kind of envelope, or at least a series of elements
> that were influenced by or imported directly the XHTML forms (and/or
> possibly XForms) elements to identify what actions were possible for a
> given resource. That way, you'd have the full HATEOAS in the message
> and the clients wouldn't have to know anything except how to interpret
> the markup. I guess I should also say that I'm looking at XML
> representations here rather than JSON.
XML versus JSON shouldn't really matter all that much. Indeed, I've
seen lots of APIs that support both syntaxes (especially easy to do in
Java if you're using JAX-RS, but not that difficult in other
environments).
>
> I was planning on posting some thoughts on this anyway, but the timing
> of this post was too good to pass up.
>
> What I was thinking was something like:
>
> <ActionEnvelope>
> <Header>
> <ActionList>
> <Action id="action1" href="uri" method="POST">Human readable description of the action here</Action>
> <Action id="delete" href="uri" method="DELETE">Delete this resource</Action>
> ...
> </ActionList>
> </Header>
> <Body>
> <!-- any content can go here, and client processing will be based on
> either the elements or the namespace URI(s) used in the root child
> element -->
> </Body>
> </ActionEnvelope>
>
> Now, before everyone gets all fussy and says it's too much like SOAP, it
> truly isn't. The only thing in common is that it uses an envelope.
A couple of thoughts and questions:
* Why call out DELETE as a separate action? I'd tend to accept a DELETE
back to the URI that got me this representation in the first place if I wanted
to support that semantic.
* It seems like you are focusing on an application environment where
the client is a browser, and therefore potentially limited to
"form-like" behaviors. This leads you to a distinction between an
"edit" view versus a "read" view of a resource. My preference is to
assume that the client just wants the data, and is totally in charge
of formatting (you can synthesize a <form> or an XForm in JavaScript),
so I shouldn't make model-versus-view distinctions in the
representations.
>
> The other thing to note is that the total transitions available to the
> client are the sum of any in-lined (like FORM submissions, regular
> hyperlink traversal, etc.) and then any of the other, "meta" actions
> possible for the system as a whole defined in the envelope's header.
>
> I went through several iterations of putting them in in the "real"
> resource vs. in the header, but this is where I'm thinking at the
> moment, because it allows you to easily process the resource for both
> human and machine interaction (the action list becomes a menu, for
> example, if the ultimate user agent wants (X)HTML -- this can be
> accomplished a number of different ways).
>
> I was wondering if you guys went through this line of thinking with your
> API design and discarded it, or if it was deemed either unnecessary or
> too complicated.
>
> Of course, with this approach your automated user agent still needs to
> understand the semantics of the action id's, but this would be published
> as part of the API specification, separate from the specification for
> the underlying content schema(s), and the inputs required would be fully
> supplied after making the request defined by the action.
>
> This isn't terribly efficient, because an editing operation for the
> resource might look like:
>
> Step 1) Get the resource URI
>
> Step 2) Process the resource XML, recording the actions
>
> Step 3) If an action with ID "edit" exists in the header, but no form
> exists in the body, make request for "edit" resource
>
> Step 4) Process the resource XML looking for "resource editing" mark-up
> (defined by the API spec, probably a normal FORM in the envelope body)
>
> Step 5) Supply available form values to be changed (also prevents
> changing of read-only resource properties)
>
> Step 6) Submit FORM
>
> Step 7) Process HTTP server response
>
> Granted, this is certainly not as efficient as:
>
> PUT /vms/33333
> Host: example.com
> Authorization: Basic xxxxxxxxxxxxxxxxxxx
> Accept: application/vnd.com.sun.cloud.VM+json
> Content-Length: nnn
> Content-Type: application/vnd.com.sun.cloud.VM+json
> X-Cloud-Client-Specification-Version: 0.1
>
> {
> "description" : "This is the new description"
> }
>
> But how does the user agent know it can do this from the original
> resource?
Turn that question around. With your approach, how does the client
know what values are valid in any of the input fields? Or what is
going to happen to the state of the system when you send in a POST or
a PUT or a DELETE? My feeling is that the person developing the
client application is going to have to understand this kind of
semantics anyway, so let's skip the extra round trips, and all the
extra server side logic to create "forms" -- even if the client really
is an application that doesn't need such a thing.
The other thing I'm doing, which is not obvious in the specification,
is writing client language bindings for this API (Java, Ruby, Python
to start). You don't have to use them, but they will make life simpler
for you. In each language, a VM representation is described as a
class VM with attributes/properties for all the fields, plus public
methods like attach() and detach() that trigger the POSTs to the
appropriate URIs, with the appropriately formatted representations. A
client application that leverages a binding like this gets a nice O-O
view of the world, and all the stuff we RESTafarians love to argue
about is hidden inside a black box :-).
I'll be talking more about client bindings once we're ready to publish
these as concrete examples ... there are some really interesting
decisions in how to represent a REST web service programmatically.
But I can tell you that the HATEOAS approach has made writing these
clients quite a lot easier.
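A rough sketch of what such a binding might look like (Python; the class and method names follow the description above, the HTTP layer is injected to keep the example self-contained, and the real bindings had not yet been published at the time of this thread):

```python
class VM:
    """O-O view over a VM representation.  State-transition methods
    simply follow the URIs the server embedded; if a transition is not
    advertised, it is not valid from the current state."""

    def __init__(self, representation, post):
        self._rep = representation
        self._post = post                      # injected POST callable
        self.name = representation.get("name")
        self.run_status = representation.get("run_status")

    def _follow(self, rel):
        uri = self._rep.get(rel)
        if uri is None:
            raise RuntimeError(f"'{rel}' is not valid from the current state")
        return self._post(uri)

    def attach(self):
        return self._follow("attach")

    def detach(self):
        return self._follow("detach")

# Usage with a stub POST function; everything is driven by embedded URIs:
vm = VM({"name": "web01",
         "attach": "http://example.com/attach?vm=33333"},
        post=lambda uri: uri)
```

The application sees plain attributes and `attach()`/`detach()` methods, while the URI handling stays hidden inside the binding, which is exactly the "black box" Craig alludes to.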
>
> HTTP/1.1 200 OK
> Content-Type: application/vnd.com.sun.cloud.VM+json
> Content-Length: nnn
>
> {
> "name" : "web01",
> "uri" : "http://example.com/vms/33333",
> "run_status" : "RUNNING",
> "model_status" : "DEPLOYED",
> "description" : "This is the old description",
> ...
> "back_up" : "http://example.com/back-up?vm=33333",
> "attach" : "http://example.com/attach?vm=33333",
> "detach" : "http://example.com/detach-ip?vm=33333",
> ...
> }
>
> > I realize the proposal above isn't perfect either, but it's really still
> in the embryonic phases at the moment. However, I plan on actually
> working through much of the detail over the next few months, so any
> feedback (good, bad or otherwise) is welcome.
>
> The Sun Cloud API is one of the more interesting ones that I've seen
> recently, and I'm sure there's lots to learn from it.
>
> Nice work.
Thanks. This API is still evolving, by the way, so feel free to
provide any direct feedback on the related wiki (free registration
required).
>
> ast
Craig
> --
> Andrew S. Townley <ast@...>
> http://atownley.org
>
>
Craig McClanahan wrote:
>
> Atom goes towards this direction with the <link> element (which I
> would evaluate as a candidate representation of relationships any time
> I was looking at an XML representation, simply because it is pretty
> familiar to people), where you can optionally specify things like the
> media type of the response you can expect. But doesn't the OPTIONS
> command tell you what verbs a particular URI supports?
>

Not talking about verbs, but links/relationships. Define/require your
atom links in your schema so that the client is assured that the links
will be there in the document because of validation.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Peter Keane wrote:
>
> That was pretty much my thought as well back when this thread was
> alive. But what about when /document/foo is not under your control
> (i.e., you can't get/put it)? This is a similar issue that the
> activities feed folks are addressing right now. Likewise, the OAI-ORE
> effort was all based on creating aggregations of web resources.
> Brings up interesting issues, I think.
>

I don't quite understand the question. I have to POST to get
/document/bar? I approach this as the "mashup" problem. I have a
mishmash of remote input, but I want my output in Atom and I want to
use my own tags. This is what led me to XML Databases and XQuery.
Most XML DBs use XPATH to locate cells.

So, create an XML DB cell at /document/bar containing some XQuery code
that accesses the remote document and transforms it to Atom, sans
category tags, which are under my site's control. I can't edit the
remote data, but I can manipulate its metadata as if it were part of
my mashup site, give it a local comment thread, etc.

Even if I'm making a POST to retrieve remote data, that's opaque to
the client making the GET request on my site, so my API is still REST.

>
> Your approach (stored search for documents w/ a particular category)
> gets at something I've often said, which is that we need a good
> standard query mechanism for doing queries with "filter by category"
> or some such. I actually like what Google Base specifies in this
> area. (I know that opensearch is another effort in this area).
>

I've been looking at OpenSearch, but haven't integrated it yet. Seems
to make sense for adding to archive feeds, as if they were searches
for "every post in a given month" etc. Just add a few elements to my
existing Atom output, looks like...

-Eric
On Fri, Apr 3, 2009 at 10:07 AM, Eric J. Bowman <eric@...> wrote:
> Peter Keane wrote:
>
>> That was pretty much my thought as well back when this thread was
>> alive. But what about when /document/foo is not under your control
>> (i.e., you can't get/put it)? This is a similar issue that the
>> activities feed folks are addressing right now. Likewise, the OAI-ORE
>> effort was all based on creating aggregations of web resources.
>> Brings up interesting issues, I think.
>
> I don't quite understand the question. I have to POST to get
> /document/bar? I approach this as the "mashup" problem. I have a
> mishmash of remote input, but I want my output in Atom and I want to
> use my own tags. This is what led me to XML Databases and XQuery.
> Most XML DBs use XPATH to locate cells.

By "you can't get/put" I just meant that you cannot update or change
it -- it is someone else's resource. So you have to just "point" to
it. Actually, I suppose you might very well use POST to add the
"pointer" resource (which will probably be an atom entry with the
content@src the URL for /document/foo) to your collection.

It gets a bit recursive in activity feeds: "Joe agreed with the thumbs
up that Sally gave to John's review of the last episode of 'Life on
Mars'" Each of those "activities" being represented by an entry in an
activity feed.

--peter

> So, create an XML DB cell at /document/bar containing some XQuery code
> that accesses the remote document and transforms it to Atom, sans
> category tags, which are under my site's control. I can't edit the
> remote data, but I can manipulate its metadata as if it were part of
> my mashup site, give it a local comment thread, etc.
>
> Even if I'm making a POST to retrieve remote data, that's opaque to
> the client making the GET request on my site, so my API is still REST.
>
>> Your approach (stored search for documents w/ a particular category)
>> gets at something I've often said, which is that we need a good
>> standard query mechanism for doing queries with "filter by category"
>> or some such. I actually like what Google Base specifies in this
>> area. (I know that opensearch is another effort in this area).
>
> I've been looking at OpenSearch, but haven't integrated it yet. Seems
> to make sense for adding to archive feeds, as if they were searches
> for "every post in a given month" etc. Just add a few elements to my
> existing Atom output, looks like...
>
> -Eric
>
John Panzer wrote:
>
> > I wouldn't expect anything. Atom Protocol leaves the deletion of
> > collections undefined, which means it could go either way. Since
> > members can belong to multiple collections, it seems wiser to me, to
> > not delete individual members when a collection is deleted. Unless
> > my application logic constrains resources to only be members of one
> > collection.
>
> Still talking past each other :). My question was not about
> AtomPub, but about REST; if one were defining an AtomPub-like
> protocol using the REST architectural style, and decided to define
> DELETE in this way, would it be a violation of REST's uniform
> interface constraint? My opinion is no, it's fine.
>

Then take out the sentence about AtomPub, the answer still stands. I
don't know why the expected behavior of a DELETE request on a URI
would ever consider side effects on other resources. The expectation
is that the requested URI is deleted, nothing else.

Trying my earlier explanation another way, let's say that I have
assigned the DELETE method to delete one and only one resource, be it
member or collection. Now, I want my API to also allow the deletion
of a collection to mean delete all members of the collection.

To maintain a uniform, generic interface, I would hijack FTP's MDELETE
method to mean delete all members of a named collection. It's only
allowed on collections, not members. I now have the expected behavior
of DELETE along with its visibility to intermediaries, without
muddling its semantics to also sometimes mean bulk delete.

As to MDELETE, in a REST API as opposed to FTP, it would only accept
the target URI, not a list of URIs.

-Eric
Eric J. Bowman wrote:
> John Panzer wrote:
>
>>> I wouldn't expect anything. Atom Protocol leaves the deletion of
>>> collections undefined, which means it could go either way. Since
>>> members can belong to multiple collections, it seems wiser to me, to
>>> not delete individual members when a collection is deleted. Unless
>>> my application logic constrains resources to only be members of one
>>> collection.
>>>
>> Still talking past each other :). My question was not about
>> AtomPub, but about REST; if one were defining an AtomPub-like
>> protocol using the REST architectural style, and decided to define
>> DELETE in this way, would it be a violation of REST's uniform
>> interface constraint? My opinion is no, it's fine.
>>
>
> Then take out the sentence about AtomPub, the answer still stands. I
> don't know why the expected behavior of a DELETE request on a URI would
> ever consider side effects on other resources. The expectation is that
> the requested URI is deleted, nothing else.
>

Unless the semantics of the resource in question include the deletion
of "attached" or "subordinate" or what have you resources upon
deletion of the primary resource.

> Trying my earlier explanation another way, let's say that I have
> assigned the DELETE method to delete one and only one resource, be it
> member or collection. Now, I want my API to also allow the deletion of
> a collection to mean delete all members of the collection.
>

In this case, I agree that you would contradict yourself if you first
defined DELETE to mean delete exactly one resource, but then turned
around and said that deletion of the collection also deleted
subordinate resources. So, as the doctor would say, "don't do that
then." (Or are you saying that REST dictates that you must assign the
DELETE method to delete one and only one resource?)

> To maintain a uniform, generic interface, I would hijack FTP's MDELETE
> method to mean delete all members of a named collection. It's only
> allowed on collections, not members. I now have the expected behavior
> of DELETE along with its visibility to intermediaries, without muddling
> its semantics to also sometimes mean bulk delete.
>
> As to MDELETE, in a REST API as opposed to FTP, it would only accept
> the target URI, not a list of URIs.
>

I agree that this would be perfectly RESTful. It would also be
RESTful to define a class of resources (identified via a MIME type of
course) that aggregated other resources, and which guaranteed that
they would go away when the primary resource goes away. (This might
or might not be a good design -- it's hard to argue in the abstract --
but it wouldn't violate REST.)

Cheers,
John
Apologies for the delayed reply. Was away from email for a bit.
On Wed, 2009-04-01 at 11:12 -0700, Craig McClanahan wrote:
> Snipping and interspersing a few comments:
>
> On Wed, Apr 1, 2009 at 2:51 AM, Andrew S. Townley <ast@...> wrote:
> > This was a great post. I'm looking at doing something similar for an
> > application as well, but, having looked at the API for the Sun Cloud, I
> > was planning on taking it a bit further.
> >
> > One thing that I see missing is "full disclosure" of the operations
> > (verbs) to be used as well as differentiation between actions vs.
> > information.
>
> In the specification, this is described on the various pages like
> <http://kenai.com/projects/suncloudapis/pages/CloudAPIVMRequests>,
> which describe the set of operations that a VM representation (or,
> more properly, a URI included in a VM representation) supports. On
> the wire, you can use the HTTP OPTIONS command to ask the server what
> verbs are supported by that URI. For example, the URI you get for the "Attach
> VM to Public Address or VNet" will tell you that it only supports a
> POST.
That's cool. However, that's not exactly what I meant by "verbs" in the
above. To me, what we're talking about with REST systems is that the
representations transferred between the client and the server are
pictures of the *application* state, not of the resource state. This is
my understanding of Roy's thesis.
In the degenerate case, the application in question is an HTTP server,
and what it's doing is really pretty simple and defined solely by the
bounds of GET, POST, PUT, DELETE and friends. What I'm talking about
are more complex hypermedia applications which are built according to
the REST architectural style and just happen to be using HTTP to
transfer these representations between the client and the server(s) in
question that comprise the overall application implementation.
In this case, the representations of the application state need to be
more complicated, because the application is more complicated. Of
course, the difference here between browser+human user agents and
automated agents actually doesn't matter in the abstract. Where it
differs is in the concrete implementations of how those application
states are represented to the user agent.
For a browser+human user agent, you can get by with "simpler" (X)HTML
representations of the hypermedia aspects triggering the various state
transitions as well as potentially a few less complicated resource types
like PDFs, images, office documents, etc. that may be part of the
overall application interaction scenarios.
However, in the case of the automated user agent, you can't make those
assumptions because the user agent needs to "understand" what each of
the given application state representations from the server "means", so
that it can do the right thing to accomplish its mission.
You only have two options for doing this:
1) Come up with abstraction(s) to describe these application states with
fixed semantics and then express your application's business logic in
terms of these abstractions, or
2) Implement specific assumptions about the application into your user
agent, and implement your business logic based on those assumptions.
I'm not saying that there is a one-size-fits-all solution. What I'm
trying to define with what I outlined earlier is a way to allow you to
do #1 as efficiently as possible in the case where you want the same
client business logic to support as many server implementations as
possible. As you say below, all this overhead where you're primarily in
control of both ends is unnecessary complexity in most cases.
> What we are not including in the representations, at least right now,
> is media type related restrictions. Partly, that is because many of
> the operations are in fact polymorphic (what happens depends on what
> media type you send in a request), and partly (at least in my view) is
> that client applications using an API are going to have *some*
> semantic understanding of what is going on, so they will be "hard
> coding" in a sense which representations to send already, so they
> don't necessarily need to be told. And, just knowing the media type
> still doesn't help you understand which fields have which impacts.
> This is certainly a design principle around which people will have
> different opinions, but it's the way we have gone so far.
Fair enough, and as I said above, it may be the right approach for your
particular application. It is much more closely conforming to option #2
above, but as long as you're clear (and your users are clear), then you
can effectively plan for the evolution of the system on both ends.
> >
> > Don't get me wrong, I think the API you have is pretty good! :)
> >
> > However, the only way that I could think of doing what I'm talking about
> > was to define some kind of envelope, or at least a series of elements
> > that were influenced by or imported directly the XHTML forms (and/or
> > possibly XForms) elements to identify what actions were possible for a
> > given resource. That way, you'd have the full HATEOAS in the message
> > and the clients wouldn't have to know anything except how to interpret
> > the markup. I guess I should also say that I'm looking at XML
> > representations here rather than JSON.
>
> XML versus JSON shouldn't really matter all that much. Indeed, I've
> seen lots of APIs that support both syntaxes (especially easy to do in
> Java if you're using JAX-RS, but not that difficult in other
> environments).
I didn't figure it would, but I just wanted to be clear.
> > I was planning on posting some thoughts on this anyway, but the timing
> > of this post was too good to pass up.
> >
> > What I was thinking was something like:
> >
> > <ActionEnvelope>
> > <Header>
> > <ActionList>
> > <Action id="action1" href="uri" method="POST">Human readable description of the action here</Action>
> > <Action id="delete" href="uri" method="DELETE">Delete this resource</Action>
> > ...
> > </ActionList>
> > </Header>
> > <Body>
> > <!-- any content can go here, and client processing will be based on
> > either the elements or the namespace URI(s) used in the root child
> > element -->
> > </Body>
> > </ActionEnvelope>
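One measure of how little the client needs to know: the action list in an envelope like the one above can be turned into a lookup table in a few lines. A sketch against this hypothetical envelope format (the ids, hrefs, and descriptions below are made up):

```python
import xml.etree.ElementTree as ET

ENVELOPE = """\
<ActionEnvelope>
  <Header>
    <ActionList>
      <Action id="action1" href="/things/1/frobnicate" method="POST">Frobnicate this resource</Action>
      <Action id="delete" href="/things/1" method="DELETE">Delete this resource</Action>
    </ActionList>
  </Header>
  <Body>
    <!-- application content -->
  </Body>
</ActionEnvelope>
"""

def parse_actions(envelope_xml: str) -> dict:
    """Map each action id to its (method, href) pair."""
    root = ET.fromstring(envelope_xml)
    return {
        a.get("id"): (a.get("method"), a.get("href"))
        for a in root.iterfind("./Header/ActionList/Action")
    }

actions = parse_actions(ENVELOPE)
# A client that understands the "delete" semantic can now issue a
# DELETE to actions["delete"][1] without hard-coding any URI.
```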
> >
> > Now, before everyone gets all fussy and says it's too much like SOAP, it
> > truly isn't. The only thing in common is that it uses an envelope.
>
> A couple of thoughts and questions:
>
> * Why call out DELETE as a separate action? I'd tend to accept a DELETE
> back to the URI that got me this representation in the first place if I wanted
> to support that semantic.
Because in the case of scenario #1 above, you're not talking about HTTP
as the application, you're talking about something different. Even if
you leverage the HTTP verbs and map those to your application as closely
as possible, DELETE might not be supported in your particular
environment. You might have to use POST, or you might have to send the
request to an entirely different URI than the one you used to request
the representation.
That's not pure HTTP, but I believe it's still pure REST.
I truly don't see a tight coupling between REST and HTTP, even though
using HTTP in practice to deliver REST systems makes a whole lot of
sense.
To one of your other points about minimizing the number of invalid state
transition requests, it also serves to put all of the valid state
transition information explicitly in the hypermedia representation. You
could do an HTTP OPTIONS request on the resource, but that might not
actually tell you what *application* transitions were available in each
case. It certainly SHOULD do it, but there might not be a good mapping
between application state transitions and HTTP verbs. This approach
ensures a clear separation between the two, using the hypermedia
representation.
> * It seems like you are focusing on an application environment where the
> client is a browser, and therefore potentially limited to "form
> like" behaviors.
> This leads you to a distinction between the "edit" view and the "read" view of
> a resource. My preference is to assume that the client just wants the data,
> and is totally in charge of formatting (you can synthesize a <form> or an
> XForm in javascript), so I shouldn't make model-versus-view distinctions
> in the representations.
As I hope is clear now, we're talking about following an "equal
opportunity" principle as far as clients are concerned. Based on the
thinking and research I've done to date on REST and hypermedia-based
systems, I think the issues are the same, it's just that the differences
are collapsed due to the human element in browser-based interactions.
This doesn't mean you're limited to using forms, but if it's the right
tool for the job, there's no reason to re-invent the wheel. That's the
other good thing about XML-based hypermedia: it gives you the ability
to selectively layer in the functionality you need.
From an API perspective, you call out the application state transitions
as part of the specification and how the client/user agent is supposed
to detect and understand the semantics of said transitions. Then, for
each particular state, you describe the format of the hypermedia the
client is likely to receive.
It still has the opportunity to either fully or partially understand
aspects of the hypermedia representations provided, with either graceful
or un-graceful functionality degradation, depending on the complexity of
the user agent and the needs of the application.
Alternatively, you invert the approach and implement common behavior
based on the client "detecting" the state of the application from the
representation. Instead of knowing ahead of time that it's supposed
to be able to do "create -> view -> edit | delete", the client uses
the representation, an understanding of the specific semantics
associated with particular action ids (view, edit, delete, search,
etc.), and its "understanding" of particular representation content
formats to interact successfully with the system from any of a number
of possible starting application states, based on what its business
logic says it's trying to accomplish. This would be my real goal,
actually.
From a user agent perspective, I don't want to have to provide a
javascript environment for every user agent. From an application
perspective, I see "view representation" and "edit representation" as
being two separate states, potentially with two different representation
formats and even data. For example, you might show relationships
between resources in a view representation, but if it doesn't make sense
to edit these in an edit representation, you wouldn't include them.
Only the data the given user agent could actually change would be
supplied.
The other reason this is where my thinking is right now is that
browser-based hypermedia interactions are the only ones really proven to
fully implement the REST style to date. Other systems implement aspects
of it, or they follow specific semantic mappings between application
state transitions and HTTP state transitions or verbs, but they don't go
"whole hog".
The semweb folks are trying one approach to specifying this, but I think
it doesn't really need to be that complicated to provide practical
solutions to scenario #1 above without trying to eat the whole "W3C
Semantic Web vision" elephant at once.
HTTP is a means to an end in my view. It isn't (and shouldn't be) the
limiting design constraint for RESTful systems. That's the role HATEOAS
plays, not HTTP.
> >
> > The other thing to note is that the total transitions available to the
> > client are the sum of any in-lined (like FORM submissions, regular
> > hyperlink traversal, etc.) and then any of the other, "meta" actions
> > possible for the system as a whole defined in the envelope's header.
> >
> > I went through several iterations of putting them in in the "real"
> > resource vs. in the header, but this is where I'm thinking at the
> > moment, because it allows you to easily process the resource for both
> > human and machine interaction (the action list becomes a menu, for
> > example, if the ultimate user agent wants (X)HTML -- this can be
> > accomplished a number of different ways).
> >
> > I was wondering if you guys went through this line of thinking with your
> > API design and discarded it, or if it was deemed either unnecessary or
> > too complicated.
> >
> > Of course, with this approach your automated user agent still needs to
> > understand the semantics of the action id's, but this would be published
> > as part of the API specification, separate from the specification for
> > the underlying content schema(s), and the inputs required would be fully
> > supplied after making the request defined by the action.
> >
> > This isn't terribly efficient, because an editing operation for the
> > resource might look like:
> >
> > Step 1) Get the resource URI
> >
> > Step 2) Process the resource XML, recording the actions
> >
> > Step 3) If an action with ID "edit" exists in the header, but no form
> > exists in the body, make request for "edit" resource
> >
> > Step 4) Process the resource XML looking for "resource editing" mark-up
> > (defined by the API spec, probably a normal FORM in the envelope body)
> >
> > Step 5) Supply available form values to be changed (also prevents
> > changing of read-only resource properties)
> >
> > Step 6) Submit FORM
> >
> > Step 7) Process HTTP server response
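The seven steps above can be sketched as client logic in Python. Everything here is a hypothetical stand-in: the element names, and the fetch/submit helpers that abstract the real HTTP GET and form-submission layer:

```python
import xml.etree.ElementTree as ET

def edit_resource(fetch, submit, uri, changes):
    """Walk the edit workflow: fetch -> find 'edit' action -> fetch the
    edit form -> merge changes -> submit. 'fetch' and 'submit' stand in
    for a real HTTP layer (GET and form-POST respectively)."""
    # Steps 1-2: get the resource and record its actions.
    doc = ET.fromstring(fetch(uri))
    actions = {a.get("id"): a for a in doc.iterfind(".//Action")}
    # Step 3: if no form is inlined in the body, follow the "edit" action.
    if doc.find(".//form") is None:
        if "edit" not in actions:
            raise RuntimeError("resource is not editable")
        doc = ET.fromstring(fetch(actions["edit"].get("href")))
    # Step 4: look for the resource-editing markup.
    form = doc.find(".//form")
    # Step 5: supply values only for fields the form exposes
    # (read-only properties simply have no field to change).
    fields = {i.get("name"): i.get("value") for i in form.iterfind("input")}
    fields.update({k: v for k, v in changes.items() if k in fields})
    # Steps 6-7: submit the form; the caller processes the response.
    return submit(form.get("action"), fields)
```

The point of the sketch is the shape of the interaction, not the markup details: the client never constructs a URI or guesses a writable field; both come from the hypermedia.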
> >
> > Granted, this is certainly not as efficient as:
> >
> > PUT /vms/33333
> > Host: example.com
> > Authorization: Basic xxxxxxxxxxxxxxxxxxx
> > Accept: application/vnd.com.sun.cloud.VM+json
> > Content-Length: nnn
> > Content-Type: application/vnd.com.sun.cloud.VM+json
> > X-Cloud-Client-Specification-Version: 0.1
> >
> > {
> > "description" : "This is the new description"
> > }
> >
> > But how does the user agent know it can do this from the original
> > resource?
>
> Turn that question around. With your approach, how does the client
> know what values are valid in any of the input fields? Or what is
> going to happen to the state of the system when you send in a POST or
> a PUT or a DELETE? My feeling is that the person developing the
> client application is going to have to understand this kind of
> semantics anyway, so let's skip the extra round trips, and all the
> extra server side logic to create "forms" -- even if the client really
> is an application that doesn't need such a thing.
I'm not 100% convinced that the user agent needs to fully understand
what's going to happen to the state of the system for each transition.
All it should need to know is how to use the available state transitions
from this particular state representation to accomplish what it's trying
to do. It is likely only concerned with a subset of the overall
application state and available transitions, again, depending on what
it's trying to do and how complex the system actually is.
Again, I'm not saying that what I'm proposing is a one-size-fits-all
solution. However, at this stage, I do believe that the client only
needs to know:
a) the available state transitions (based on what it "sees" in the given
representation of the current application state)
b) what each of those state transitions "mean" (their semantics) in
terms of both generic application behavior (CRUD operations, for
example) as well as application-specific behavior (start/stop servers,
etc.) in the context of the job it's trying to do
c) how to recognize appropriate inputs and data provided by the system
and map those to information it holds locally
b & c are the key aspects here. Suppose your user agent is built to
understand that, in general, when it sees an HTML form, the ID values
of the input elements correspond to properties of an internal object
it maintains, keyed either on the base URI (minus any query
parameters) of the resource being edited or on some other identifier
present in the hypermedia. The user agent can then auto-populate the
form fields in much the same way that modern browsers do.
Browsers are also able to do this successfully, for the most part,
based simply on the information provided by the individual HTML
elements, without any knowledge of the particular application state.
This "caveman mentality", e.g. "me see form field id; me have data
matching form field id; me populate field", actually seems to work
reasonably well.
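That caveman matching rule is only a few lines of code. A sketch, where the field names and the local-object store are made up for illustration:

```python
def populate(form_fields, local_data):
    """Caveman rule: me see form field id; me have data matching form
    field id; me populate field. Fields without a local match keep
    their server-supplied value."""
    return {
        field_id: local_data.get(field_id, default)
        for field_id, default in form_fields.items()
    }
```

So given form fields {"firstName": "", "lastName": "", "token": "abc"} and local data for the first two, the agent fills those in and leaves the server-supplied "token" untouched, with no knowledge of what the form is for.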
If your application is written in such a way that it can leverage this
behavior, then I think you've greatly simplified your individual
application clients' business logic definition. Of course, it means
that you might need a more complex interaction library than just using
HTTP, but I don't really see that as an issue.
I want to get to the point where I have the ability to define "simple"
clients based on an understanding of a small set of semantic actions
that can easily be applied across a number of different systems broadly
performing the same task, but with different implementation specifics.
Why, yes, this is a bit of semweb stuff, but rather than saying "how do
I understand the semantics of every possible application interface and
negotiate a way to interact with it," I'm trying to take the approach of
"let's (try) to define a smaller set of semantics that can be broadly
applied and push the burden of mapping to these back to the application
rather than making the client 'smart enough' to figure it out."
I think there's a sweet spot of application types and interactions where
this would be immensely useful. You can argue that you can already do
this kind of mapping with HTTP APIs defined in terms of the HTTP verbs,
and that's true in some cases. However, I don't believe that HTTP was
designed to support this particular scenario. Of course, many people
have proven that it can work, but I think there might be a better way.
Presently, I think that "better way" is based on leveraging hypermedia a
lot more than I see in most RESTful systems for automated interaction.
I also realize that there's a ton of implicit assumptions in this
approach as well, but I am trying to make a concerted effort to keep the
interaction assumptions orthogonal to the application assumptions.
> The other thing I'm doing, which is not obvious in the specification,
> is writing client language bindings for this API (Java, Ruby, Python
> to start). You don't have to use them, but it will make life simpler
> for you. In each language, a VM representation is described as a
> class VM with attributes/properties for all the fields, plus public
> methods like attach() and detach() that trigger the POSTs to the
> appropriate URIs, with the appropriately formatted representations. A
> client application that leverages a binding like this gets a nice O-O
> view of the world, and all the stuff we RESTafarians love to argue
> about is hidden inside a black box :-).
>
> I'll be talking more about client bindings once we're ready to publish
> these as concrete examples ... there are some really interesting
> decisions in how to represent a REST web service programmatically.
> But I can tell you that the HATEOAS approach has made writing these
> clients quite a lot easier.
I'm sure that it has, and I think from a pragmatic perspective, you're
taking the right approach to ensure that you've both provided convenient
interaction mechanisms for existing popular environments and allowed you
(or someone else) to implement new ones as needed since you've an open
'on-the-wire' protocol and data formats that leverage functionality
present in just about every modern language environment (an HTTP client
implementation).
> > I realize the proposal above isn't perfect either, but it's really still
> > in the embryonic phases at the moment. However, I plan on actually
> > working through much of the detail over the next few months, so any
> > feedback (good, bad or otherwise) is welcome.
> >
> > The Sun Cloud API is one of the more interesting ones that I've seen
> > recently, and I'm sure there's lots to learn from it.
> >
> > Nice work.
>
> Thanks. This API is still evolving, by the way, so feel free to
> provide any direct feedback on the related wiki (free registration
> required).
If I have anything specifically related to the API, I will certainly
take this approach.
>
> >
> > ast
>
> Craig
>
ast
--
Andrew S. Townley <ast@...>
http://atownley.org
John Panzer wrote:
>
> (Or are you saying that REST dictates that you must assign
> the DELETE method to delete one and only one resource?)
>

No, I'm saying to choose a meaning for DELETE and stick with it. If
you want DELETE to delete a collection and all its members, then
deleting members directly should be disallowed. This would be a
uniform interface, but not, IMO, a generic one. Or, don't allow
DELETE on a collection resource, unless its members have already been
deleted. Or, use DELETE in a generic-interface fashion and assign
some other method (EXPUNGE, MDELETE, BDELETE, RMD) the task of
deleting all members when a collection is deleted.

>
> > To maintain a uniform, generic interface, I would hijack FTP's
> > MDELETE method to mean delete all members of a named collection.
> > It's only allowed on collections, not members. I now have the
> > expected behavior of DELETE along with its visibility to
> > intermediaries, without muddling its semantics to also sometimes
> > mean bulk delete.
> >
> > As to MDELETE, in a REST API as opposed to FTP, it would only accept
> > the target URI, not a list of URIs.
>
> I agree that this would be perfectly RESTful. It would also be
> RESTful to define a class of resources (identified via a MIME type of
> course) that aggregated other resources, and which guaranteed that
> they would go away when the primary resource goes away. (This might
> or might not be a good design -- it's hard to argue in the abstract
> -- but it wouldn't violate REST.)
>

I disagree about using media types to change method semantics for a
class of resources -- the goal is "a consistent set of semantics for
all resources". Media types aren't meant as contracts. The protocol
defines the method semantics for all media types; in HTTP, DELETE
isn't guaranteed even if the response indicates success, regardless of
media type or API design.
If an API defines MDELETE to delete all members of a collection, then the MDELETE method should be restricted to only those resources where it makes sense. Say, for example, a 'trashcan' resource would 'Allow: GET, POST, MDELETE' but would respond '405 Method Not Allowed' to a DELETE request. An archive collection used as the basis of site navigation, e.g. /weblog/2009/april, would 'Allow: GET, POST' while a collection that's a stored search for a tag would 'Allow: GET, DELETE', while a temporary archive created for the purpose of batch deletion would 'Allow: GET, DELETE, MDELETE' where members are first removed via MDELETE, then the collection resource itself is removed via DELETE. (If a DELETE is tried on the last resource class before its members have been either DELETEd or MDELETEd, the response would be 409 Conflict.) Those different classes of collection resource have the same method semantics regardless of media type, which could be all the same, or different for each class, doesn't matter -- it's orthogonal to the protocol behavior of such an API. What methods may be applied to what resources is defined in documentation, and made visible through OPTIONS requests and Allow headers, not media types. Particularly since those collection resources may already have multiple media types, say 'text/html' and 'application/xhtml+xml' and 'application/atom+xml' configured with content negotiation. To single some of those out as MDELETE-able collections by adding a parameter is something I've seen suggested often, but it's really overloading the purpose of media types and making for much trickier implementation than OPTIONS + Allow. -Eric
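[Editorial note: the per-class Allow rules described in the post above can be sketched as a small dispatch table. This is a minimal illustration, not a real framework: the resource-class names and the `dispatch` helper are hypothetical, and an actual server would derive the class from the request URI.]

```python
# Hypothetical resource classes and their allowed methods, following the
# examples in the post (trashcan, navigation archive, stored search,
# batch-deletion collection). The method semantics stay uniform; only
# which methods apply varies per resource class.
ALLOWED = {
    "trashcan":         {"GET", "POST", "MDELETE"},
    "archive":          {"GET", "POST"},
    "stored_search":    {"GET", "DELETE"},
    "batch_collection": {"GET", "DELETE", "MDELETE"},
}

def dispatch(resource_class, method, members_remaining=0):
    """Return (status, headers) for a request against a resource class."""
    allow = ALLOWED[resource_class]
    if method == "OPTIONS":
        # OPTIONS + Allow makes the per-resource interface visible.
        return 200, {"Allow": ", ".join(sorted(allow | {"OPTIONS"}))}
    if method not in allow:
        return 405, {"Allow": ", ".join(sorted(allow | {"OPTIONS"}))}
    if method == "DELETE" and members_remaining > 0:
        # DELETE removes only the collection resource itself; members
        # must first be removed via MDELETE, else 409 Conflict.
        return 409, {}
    return 200, {}
```

For instance, `dispatch("trashcan", "DELETE")` yields a 405 with the Allow header listing the methods that are permitted, exactly the visibility the post argues media types can't provide.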
Sebastien Lambla wrote: > > > The expectation is that > > the requested URI is deleted, nothing else. > > Could you point me to some references where this expectation is > documented? > I'd start with RFC 959 (FTP), which has both DELETE and REMOVE DIRECTORY methods, making clear the meaning of the DELETE method's original definition, "causes the file specified... to be deleted" as literally meaning "the file" not "the collection of files." So I wouldn't "expect" DELETE to cause the deletion of more than one member resource, in a generic interface. HTTP, from its inception right on up through RFC 2616bis says, "DELETE... requests that the origin server delete the resource identified by the Request-URI," leaving the door wide open to use it on collections without defining what that behavior would be. Although this is not allowed in FTP, HTTP isn't the same paradigm -- HTTP isn't a filesystem, and filesystems don't allow the late binding of representation to resource. HTTP has been extended over the years to include both WebDAV and Atom Protocol. While it leaves the issue of deleting a collection unaddressed, AtomPub does constrain DELETE to the semantics of FTP DELETE -- confirming my expectation that DELETE should behave as FTP DELETE in a generic interface. But then there's WebDAV... http://tools.ietf.org/html/rfc4918#section-9.6.1 Ugh. Then again, that makes perfect sense if your goal is to use HTTP for filesystem operations. This use of DELETE works, because WebDAV isn't concerned with the late binding of representation to resource. But, "expected" behavior is a subjective notion. IMHO, the "common case of the Web" contains far more applications which allow member resources to belong to multiple collections, than there are Web applications implementing a filesystem paradigm over HTTP. So let's take a look at another, unrelated protocol... 
IMAP: http://tools.ietf.org/html/rfc3501#section-6.3.4 While DELETE is used to delete mailboxes, this is only allowed if the mailbox is empty. To remove all member resources, a flag is set on each one and the EXPUNGE method is called. I suppose my definition of expected behavior, is the difference between a generic interface and a uniform interface. You can have a uniform interface where DELETE removes all members of a collection, but I wouldn't call that a generic use of DELETE, since so many other methods have been defined to accomplish that specific task, not to mention protocols like FTP and IMAP clearly disallowing DELETE's use for removing all members of a collection. So, in a generic sense, I don't expect DELETE to remove member resources, because in the common case of the Web (as opposed to that of a filesystem), members may belong to more than one collection, while protocols either leave the deletion of collections undefined or assign this task to some other method. The generic semantics of DELETE are defined by FTP, IMHO. -Eric
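[Editorial note: the IMAP flag-then-EXPUNGE pattern described in the post above can be sketched against a toy in-memory mailbox. The helpers are illustrative stand-ins, not RFC 3501 wire commands, and the "empty before DELETE" rule follows the post's reading of the protocol.]

```python
# Toy mailbox: message removal is a two-step dance -- flag, then expunge.
mailbox = {"msgs": {1: {}, 2: {}, 3: {}}, "flagged": set()}

def flag(msg_id):
    # Mark a member for deletion without removing it yet.
    mailbox["flagged"].add(msg_id)

def expunge():
    # A distinct method (not DELETE) removes all flagged members.
    for msg_id in mailbox["flagged"]:
        del mailbox["msgs"][msg_id]
    mailbox["flagged"].clear()

def delete_mailbox():
    # Per the post's reading: DELETE only succeeds on an empty mailbox.
    return "OK" if not mailbox["msgs"] else "NO"
```

The point of the shape: DELETE keeps its generic one-resource meaning, and bulk removal is visibly a different operation.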
Jim Webber wrote: > > > Hi Bill, Hi Jim!, > > For me the "common sense" answer is that the collection is a resource > > and a DELETE to that resource does not entail deletion of resources > > referenced in the collection's representation. Indeed that would be a > > bizarre state of affairs. > > That depends. I see two cases which I'll try to illustrate by (dumb) > analogies: Not naive; other interesting deletion analogies are manufactured objects like cars, guns and aeroplanes, as they have parts that might get deleted or get recycled, it depends. > 1. Road signs - I delete a road sign, the towns which it references do > not get deleted themselves *. Semantics of associativity, or maybe indexicals, not sure about that one. > 2. Boxes of chocolates - if I delete a box of chocolates, I expect the > chocolates themselves to be destroyed. Semantics of composition, but which are contingent on the chocolate being in the box, so it's hard to know the consequences of a uniform delete method. > So it's really up to the semantics of the resource I want to delete, no? I think it's up to the semantics that describe the relationship between the resources (I'm thinking of 'semantics' in a formal sense). This is a hard aspect to model well, and I'm fairly sure neither REST nor HTTP covers it. Put another way - maybe we could use RDF/OWL or some such to describe part/whole or composite semantics for some resources, but clients and servers will need to understand that to understand what a DELETE entails, eg if only to manage caches - Mike Amundsen pointed this out a while back wrt partial updates (which seem to be an inverse of the delete problem we're talking about here). Bill
wahbedahbe wrote: > Actually, I'd like to turn this question around. > > I've always been confused by what folks who design systems that comply > with all of REST except HATEOAS think they are getting out of it. It > makes no sense to me. The only benefit I can think of is that > intermediaries have some insight into what is going on. From a practical > perspective this means you get caching (yes, theoretically it can be > more than caching but most systems stop there). Is that it? To turn this round again - I would not agree that all REST benefits are derived from links in content. > What do folks think? What are the benefits of a REST - HATEOAS based > architecture? Each principle and constraint adopted gets you some benefit, and those are well documented. So the answer seems to be the benefits except those derived from links in content. Bill
Andrew S. Townley wrote: > Alternatively, you invert the approach and implement common behavior > based on the clients "detecting" the state of the application from the > representation. Sorry to pick out one tiny piece of your excellent post...But... IMO, there are very very few applications/clients that can approach integration in this manner. In production systems, things have to be well planned out and predictable or it will just be a disaster. Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Ok, but I'm more wondering about the specific gains folks are seeing in practice in the systems they are building. The reason I'm curious is because there are a lot of frameworks like Rails which claim "RESTfulness" but seem to just deliver REST - HATEOAS (well at least on the "machine to machine" ActiveResource side of things when I last looked at it). Lots of folks seem to think this is really great and is light years better than RPC but I don't really understand why. Also, things like the idempotency of PUT and DELETE have never yielded any practical benefits to me (though I get how they can in _theory_) so I'm also really curious to know how people are making practical use of them in the systems they are building. I have personally seen huge gains with "full" REST in systems I've built -- chiefly in decoupling clients and servers (a lot of the stuff Craig McClanahan brings up in this thread) -- and so I really "get" that. REST - HATEOAS -- not so much. On another note: I think HATEOAS is much more than "links in content" unless your client is something like a spider. What's your take on the discussion here: http://www.intertwingly.net/blog/2008/03/23/Connecting Andrew Wahbe --- In rest-discuss@yahoogroups.com, Bill de hOra <bill@...> wrote: > > wahbedahbe wrote: > > > Actually, I'd like to turn this question around. > > > > I've always been confused by what folks who design systems that comply > > with all of REST except HATEOAS think they are getting out of it. It > > makes no sense to me. The only benefit I can think of is that > > intermediaries have some insight into what is going on. From a practical > > perspective this means you get caching (yes, theoretically it can be > > more than caching but most systems stop there). Is that it? > > To turn this round again - I would not agree that all REST benefits are > derived from links in content. > > > > What do folks think? What are the benefits of a REST - HATEOAS based > > architecture? 
> > Each principle and constraint adopted gets you some benefit, and those > are well documented. So the answer seems to the benefits except those > derived from links in content. > Bill >
On Mon, Apr 6, 2009 at 8:26 AM, wahbedahbe <andrew.wahbe@...> wrote: > Ok, but I'm more wondering about the specific gains folks are seeing in > practice in the systems they are building. The reason I'm curious is because > there are a lot of frameworks like Rails which claim "RESTfulness" but seem > to just deliver REST - HATEOAS (well at least on the "machine to machine" > ActiveResource side of things when I last looked at it). Lots of folks seem > to think this is really great and is light years better than RPC but I don't > really understand why. Let's say you're writing a new database server based on a new theory for massively scaling megadata to the cloud. To get taken seriously you'll need connectors for Java, .Net and C, with wrappers for PHP, Ruby and Python. That's a lot of connectivity overhead. Or you can do CRUD over HTTP. You can just adapt a client library from another database, or use one of the many rapid-client devkits out there. You get the four basic operations, authentication, encryption, load balancing and caching out of the box. Or think about it a different way. I'm writing a script to add user accounts from a CSV file onto a remote server; I wrote the back-end so I'm in control of the protocol. I can get something up and running with ActiveResource in a few minutes. It won't have the survivability of HATEOAS, but it only takes a few minutes to build and only takes a few minutes to fix. If you work with HTTP day in and day out, I can see why it would be significantly better to use HTTP over IIOP or RMI or insert-other-binary-format. Assaf > > Also, things like the idempotency of PUT and DELETE have never yielded any > practical benefits to me (though I get how they can in _theory_) so I'm also > really curious to know how people are making practical use of them in the > systems they are building. 
> > I have personally seen huge gains with "full" REST in systems I've built -- > chiefly in decoupling clients and servers (a lot of the stuff Craig > McClanahan brings up in this thread) -- and so I really "get" that. REST - > HATEOAS -- not so much. > > On another note: I think HATEOAS is much more than "links in content" > unless your client is something like a spider. What's your take on the > discussion here: > http://www.intertwingly.net/blog/2008/03/23/Connecting > > Andrew Wahbe > > --- In rest-discuss@yahoogroups.com, Bill de hOra <bill@...> wrote: > > > > wahbedahbe wrote: > > > > > Actually, I'd like to turn this question around. > > > > > > I've always been confused by what folks who design systems that comply > > > with all of REST except HATEOAS think they are getting out of it. It > > > makes no sense to me. The only benefit I can think of is that > > > intermediaries have some insight into what is going on. From a > practical > > > perspective this means you get caching (yes, theoretically it can be > > > more than caching but most systems stop there). Is that it? > > > > To turn this round again - I would not agree that all REST benefits are > > derived from links in content. > > > > > > > What do folks think? What are the benefits of a REST - HATEOAS based > > > architecture? > > > > Each principle and constraint adopted gets you some benefit, and those > > are well documented. So the answer seems to the benefits except those > > derived from links in content. > > Bill > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
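[Editorial note: on the idempotency question raised in this subthread, one concrete payoff shows up in client retry logic: an idempotent request can be blindly resent after a network failure, while a lost POST response is ambiguous. A minimal sketch, with `send` standing in for a hypothetical transport that raises IOError on failure:]

```python
# Retry only methods HTTP defines as idempotent -- repeating them
# cannot change the outcome beyond the first success.
IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS"}

def send_with_retry(send, method, uri, body=None, attempts=3):
    """send(method, uri, body) is a hypothetical transport callable."""
    last_error = None
    for _ in range(attempts):
        try:
            return send(method, uri, body)
        except IOError as e:
            last_error = e
            if method not in IDEMPOTENT:
                break  # a lost POST response is ambiguous -- don't retry
    raise last_error
```

A client built this way needs no application-specific recovery code for PUT and DELETE, which is one practical, if undramatic, benefit of the constraint.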
At Sat, 4 Apr 2009 08:11:20 -0600, Eric J. Bowman wrote: > No, I'm saying to choose a meaning for DELETE and stick with it. If > you want DELETE to delete a collection and all its members, then > deleting members directly should be disallowed. This would be a > uniform interface, but not, IMO, a generic one. > > Or, don't allow DELETE on a collection resource, unless its members > have already been deleted. Or, use DELETE in a generic-interface > fashion and assign some other method (EXPUNGE, MDELETE, BDELETE, RMD) > the task of deleting all members when a collection is deleted. > > […] The web doesn’t have application boundaries, it is a global hypertext system. Either delete has a certain semantics globally, or it doesn’t have those semantics. You can’t ‘choose’ one semantic meaning for your application. I could be wrong about this, but I haven’t heard any arguments to convince me otherwise. best, Erik Hetzner
Erik Hetzner wrote: > > > No, I'm saying to choose a meaning for DELETE and stick with it. If > > you want DELETE to delete a collection and all its members, then > > deleting members directly should be disallowed. This would be a > > uniform interface, but not, IMO, a generic one. > > > > Or, don't allow DELETE on a collection resource, unless its members > > have already been deleted. Or, use DELETE in a generic-interface > > fashion and assign some other method (EXPUNGE, MDELETE, BDELETE, > > RMD) the task of deleting all members when a collection is deleted. > > > > […] > > The web doesn’t have application boundaries, it is a global hypertext > system. Either delete has a certain semantics globally, or it doesn’t > have those semantics. You can’t ‘choose’ one semantic meaning for your > application. > Sure you can, and in fact you should, according to Roy: " ...The main reason for my lack of specificity is because the methods defined by HTTP are part of the Web’s architecture definition, not the REST architectural style. Specific method definitions (aside from the retrieval:resource duality of GET) simply don’t matter to the REST architectural style, so it is difficult to have a style discussion about them. The only thing REST requires of methods is that they be uniformly defined for all resources (i.e., so that intermediaries don’t have to know the resource type in order to understand the meaning of the request). As long as the method is being used according to its own definition, REST doesn’t have much to say about it. " http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post The problem with PUT is it can mean either create or update in HTTP. Same problem with DELETE, in HTTP it means different things to different specs for different resource types. In a REST API, methods are constrained to be used according to their own definitions, and applied the same way across all resources. 
(Roy has also said that REST doesn't discourage the coining of new methods; one that occurs to me in light of recent discussions is REBOOT, since that operation doesn't really fit with any existing method). For example, Sun's new Cloud API fails the "consistent set of semantics for all resources" test in two ways, where PUT is concerned -- PUT means "partial update" for particular content-types. First, media types don't define method semantics; doing this inherently means that your methods aren't uniformly defined for all resources. Second, as has been discussed here regarding many claims to RESTful APIs, PUT is not defined to mean "partial update". That's PATCH's job. No intermediary would assume "partial update" for PUT unless it had knowledge of the specific media type attempting to redefine the semantics of PUT. In HTTP, DELETE can be used as batch-delete as well as individual-delete. In my REST API example, I've constrained DELETE to mean the same thing to all resources, while adding MDELETE to cover the batch-delete case. That's a uniform interface. Using DELETE in any ol' way it's defined, varying by resource type, is not. This goes back to the point I was making before, about Atom Protocol constraining PUT to only mean update, instead of both update and create, and assigning create to POST. Any API that wants to be RESTful needs to constrain the semantics of any method implemented, to one and only one action, otherwise methods aren't uniformly defined for all resources. -Eric
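[Editorial note: the PUT-versus-PATCH distinction drawn in the post above can be shown with a toy in-memory store. This is illustrative only; a real PATCH would carry a diff media type, and the person resource echoes the example from the start of the thread.]

```python
# Toy resource store keyed by URI; representations are plain dicts.
store = {"/person/101": {"firstName": "TONINHO", "lastName": "METRALHA"}}

def put(uri, representation):
    # PUT replaces the stored state wholesale -- fields absent from
    # the request are gone afterwards.
    store[uri] = dict(representation)

def patch(uri, changes):
    # PATCH applies changes to the existing state; untouched fields
    # survive. That is the "partial update" job PUT shouldn't do.
    store[uri].update(changes)
```

Sending a partial representation via `put` silently drops `lastName`; only `patch` preserves it, which is why redefining PUT as partial update surprises any intermediary or client holding the protocol's definition.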
At Mon, 6 Apr 2009 12:16:26 -0600, Eric J. Bowman wrote: > > Erik Hetzner wrote: > > > The web doesn’t have application boundaries, it is a global hypertext > > system. Either delete has a certain semantics globally, or it doesn’t > > have those semantics. You can’t ‘choose’ one semantic meaning for your > > application. > > Sure you can, and in fact you should, according to Roy: > > " > ...The main reason for my lack of specificity is because the methods > defined by HTTP are part of the Web’s architecture definition, not the > REST architectural style. Specific method definitions (aside from the > retrieval:resource duality of GET) simply don’t matter to the REST > architectural style, so it is difficult to have a style discussion > about them. The only thing REST requires of methods is that they be > uniformly defined for all resources (i.e., so that intermediaries don’t > have to know the resource type in order to understand the meaning of > the request). As long as the method is being used according to its own > definition, REST doesn’t have much to say about it. > " If you are interested in developing an application that uses HTTP but is *not part of the web*, I suppose you can further constrain DELETE to mean whatever you want. Otherwise you need to re-read what you just quoted: > […] The only thing REST requires of methods is that they be uniformly > defined for all resources […] ‘Uniformly defined for all resources’ means *all* resources on the web, e.g. anything addressable using a URI which is part of the web’s global hypertext infrastructure. There are no application boundaries in terms of method semantics. DELETE means the same thing everywhere. It can’t mean one thing for one URI which happens to be part of one application and another thing for another URI which happens to be part of another. The semantics must be general enough that it means the same thing for all URIs. best, Erik Hetzner
Erik Hetzner wrote: > > Otherwise you need to re-read what you just quoted: > > > […] The only thing REST requires of methods is that they be > > uniformly defined for all resources […] > > ‘Uniformly defined for all resources’ means *all* resources on the > web, e.g. anything addressable using a URI which is part of the web’s > global hypertext infrastructure. > I don't think so. When designing an API, my concern is only those resources I control. Since methods like PUT and DELETE have variable semantics, it's impossible for all resources on the Web to have uniformly-defined semantics. REST may be seen as an architectural style consisting of constraints imposed upon the Web architectural style, i.e. HTTP is not REST. > > There are no application boundaries > in terms of method semantics. DELETE means the same thing everywhere. > It can’t mean one thing for one URI which happens to be part of one > application and another thing for another URI which happens to be part > of another. The semantics must be general enough that it means the > same thing for all URIs. > Strongly disagree. In HTTP, PUT may mean either create or replace. In Atom Protocol, PUT is constrained to mean replace. In Protocol XYZ, PUT is constrained to mean create. Both protocols use HTTP, both protocols are RESTful, but PUT has different semantics within each application boundary. PUT cannot possibly mean the same thing for all URIs, since PUT means two different things. We'll never get DELETE to mean the same thing everywhere, even within the HTTP protocol (see again the WebDAV definition of DELETE compared to the Atom Protocol definition of DELETE). The best we can do is agree that it has a generic-interface meaning, and design our APIs accordingly, regardless of what the RFCs involved allow. The only method which fits your criteria, that it must unambiguously mean the same thing for all URIs everywhere, is GET. -Eric
wahbedahbe wrote: > > > Ok, but I'm more wondering about the specific gains folks are seeing in > practice in the systems they are building. The reason I'm curious is > because there are a lot of frameworks like Rails which claim > "RESTfulness" but seem to just deliver REST - HATEOAS (well at least on > the "machine to machine" ActiveResource side of things when I last > looked at it). Lots of folks seem to think this is really great and is > light years better than RPC but I don't really understand why. > > Also, things like the idempotency of PUT and DELETE have never yielded > any practical benefits to me (though I get how they can in _theory_) so > I'm also really curious to know how people are making practical use of > them in the systems they are building. > > I have personally seen huge gains with "full" REST in systems I've built > -- chiefly in decoupling clients and servers (a lot of the stuff Craig > McClanahan brings up in this thread) -- and so I really "get" that. REST > - HATEOAS -- not so much. So for me, some practical things come to mind. - the methods give you high level support for potential operations/scaling pain. Just knowing a system could internally be partitioned at the http level into HEAD/OPTIONS/GET and PUT/POST/DELETE makes me sleep better at night. Much easier to do it at the load balancers than in application code imvho. - PUT and DELETE are useful to have as I don't have to disambiguate POST. I believe that when smart developers are encouraged to use the full method set from the get go, they will naturally use POST well and for dealing with the inevitable corner cases (also forms posting tends to get used well, which is a big thing for me). So I think having a method complement helps you fall into the pit of success. - URL construction (or the lack of). I was reviewing an API today and realized it could be geolocated by allowing a server to supply URLs to different domains/administrations. 
If the clients were putting the URLs together, that would not work. It also means basic stuff like media serving/cdns will work when you need them to. - Well known formats. Or at least well specified ones. You get so much futureproofing against versioning by making the media type explicit. I'm not a huge conneg fan (think it doesn't get used well), but the Accept header is a huge win if you're building something that has to evolve and support already deployed clients for years to come. - Caching, but this is well known. - Organisation of application v resource state. Giving non-domain codey type things URLs is big win. Jim Webber does a good job here explaining the practical benefits: http://www.infoq.com/articles/webber-rest-workflow. I don't know whether you can express the full BPM/BPEL/piCalculus thing via REST's notions of state, but I do suspect in many cases you don't need that level of expressive power. Ultimately what I get via REST is the notion of applying constraints to obtain systemic properties. The REST community have done a good job articulating what happens when you add and remove constraints. It's objective architectural/systems analysis, not the flimflam I see coming from EAI/SOA which tend to describe /desirable outcomes/ and not /how to obtain them/. You don't have to like REST as a style (I personally don't have much time for the current hype), but you can at least analyse the design. > On another note: I think HATEOAS is much more than "links in content" > unless your client is something like a spider. Granted, but 'lick' is a better acronym (links in content are king) than 'hateoas' ;) > What's your take on the > discussion here: > http://www.intertwingly.net/blog/2008/03/23/Connecting > <http://www.intertwingly.net/blog/2008/03/23/Connecting> I sympathise with Sam's view on things, but still find "connectedness" a bit abstract. 
So I distill it even further by asking/cajoling people to put links in content, to increase the likelihood that a format will be useful across as many clients as possible. Bill
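[Editorial note: "links in content" as the alternative to client-side URL construction can be sketched as follows. The `fetch` stub and document shapes are hypothetical stand-ins for an HTTP client and a link-bearing media type, reusing the person/account example from the top of the thread.]

```python
# Toy representations: each document carries its outbound links under
# named relations, so the client never assembles URIs from conventions.
DOCS = {
    "/rest/data/person/101": {
        "firstName": "TONINHO",
        "links": {"account": "/rest/data/bank/accounts/010123101"},
    },
    "/rest/data/bank/accounts/010123101": {"balance": 42},
}

def fetch(uri):
    # Stand-in for an HTTP GET returning a parsed representation.
    return DOCS[uri]

def follow(doc, rel):
    # The client knows only link relations, not URI structure -- the
    # server is free to move resources to other paths or domains.
    return fetch(doc["links"][rel])
```

A client written this way keeps working when the server relocates accounts to a CDN or another administration, which is exactly the geolocation point in the post above.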
At Mon, 6 Apr 2009 15:29:32 -0600, Eric J. Bowman wrote: > > Erik Hetzner wrote: > > ‘Uniformly defined for all resources’ means *all* resources on the > > web, e.g. anything addressable using a URI which is part of the > > web’s global hypertext infrastructure. > > I don't think so. When designing an API, my concern is only those > resources I control. Since methods like PUT and DELETE have variable > semantics, it's impossible for all resources on the Web to have > uniformly-defined semantics. REST may be seen as an architectural style > consisting of constraints imposed upon the Web architectural style, > i.e. HTTP is not REST. If you don’t think so, then you don’t think that your application is part of the web. Of course when designing a web API you can constrain the semantics as you like (as long as it is *more* specific than HTTP). For the web as a whole, HTTP methods have the semantics described by HTTP - no more, no less. > > There are no application boundaries in terms of method semantics. > > DELETE means the same thing everywhere. It can’t mean one thing > > for one URI which happens to be part of one application and > > another thing for another URI which happens to be part of another. > > The semantics must be general enough that it means the same thing > > for all URIs. > > Strongly disagree. In HTTP, PUT may mean either create or replace. In > Atom Protocol, PUT is constrained to mean replace. In Protocol XYZ, > PUT is constrained to mean create. Both protocols use HTTP, both > protocols are RESTful, but PUT has different semantics within each > application boundary. PUT cannot possibly mean the same thing for all > URIs, since PUT means two different things. So you are saying you do not believe in a uniform interface? We need to know which application a URI is part of in order to build caches? 
> We'll never get DELETE to mean the same thing everywhere, even within > the HTTP protocol (see again the WebDAV definition of DELETE compared > to the Atom Protocol definition of DELETE). The best we can do is > agree that it has a generic-interface meaning, and design our APIs > accordingly, regardless of what the RFCs involved allow. > > The only method which fits your criteria, that it must unambiguously > mean the same thing for all URIs everywhere, is GET. Why then does HTTP even try to define the meanings of methods? DELETE means what it says in RFC 2616 (or whatever the HTTP wg comes up with next). That is all that it means - on the web. What it means in your application is up to you - as long as it extends the meaning of what is in RFC 2616. But you cannot insist that people ‘choose’ one or the other *more* specific semantics of DELETE in their ‘application’ and stick to it. All that is needed to be part of the web - and to be RESTful! - is to stick to the semantics of DELETE as defined in RFC 2616. best, Erik Hetzner
Erik Hetzner wrote: > > > > ‘Uniformly defined for all resources’ means *all* resources on the > > > web, e.g. anything addressable using a URI which is part of the > > > web’s global hypertext infrastructure. > > > > I don't think so. When designing an API, my concern is only those > > resources I control. Since methods like PUT and DELETE have > > variable semantics, it's impossible for all resources on the Web to > > have uniformly-defined semantics. REST may be seen as an > > architectural style consisting of constraints imposed upon the Web > > architectural style, i.e. HTTP is not REST. > > If you don’t think so, then you don’t think that your application is > part of the web. > Howzat? My application has URIs and uses HTTP methods, like any Web app, but I'm applying a set of constraints to achieve desirable behaviors. By constraining DELETE to the deletion of individual resources, and constraining PUT to replacement and not creation, I achieve (part of) a uniform interface, without violating RFC 2616 or somehow taking my application off the Web. HTTP may or may not be used in a REST API. Obviously, to be part of the Web, the URIs must be public, and dereferenceable using HTTP. But, just having a URI-based HTTP application is not the same as having a REST application, since HTTP makes no mention of the uniform interface. > > Of course when designing a web API you can constrain the semantics as > you like (as long as it is *more* specific than HTTP). For the web as > a whole, HTTP methods have the semantics described by HTTP - no more, > no less. > By using PUT for both creation and replacement, I'm not violating Web architecture or RFC 2616, but I have failed to constrain my interface to be uniform. The "creation" semantics of PUT are only RESTful, IMO, if the URI is created on the server and sent to the client as the target for a PUT request. 
But, particularly if I've implemented Atom Protocol, it's better to use POST for creation and let the assigned URI be given to the client in response. I've never found constraining away the creation semantics of PUT to be a big loss. But, "create" and "overwrite" are clearly different semantics for the same method, and while this is perfectly acceptable in uniform-interface-agnostic HTTP, this is clearly verboten in REST. So pick one or the other meaning and stick with it for all resources within your application boundary, API, namespace, workspace, neck-of-the-woods or whatever else you want to call it. > > > > There are no application boundaries in terms of method semantics. > > > DELETE means the same thing everywhere. It can’t mean one thing > > > for one URI which happens to be part of one application and > > > another thing for another URI which happens to be part of another. > > > The semantics must be general enough that it means the same thing > > > for all URIs. > > > > Strongly disagree. In HTTP, PUT may mean either create or > > replace. In Atom Protocol, PUT is constrained to mean replace. In > > Protocol XYZ, PUT is constrained to mean create. Both protocols > > use HTTP, both > > protocols are RESTful, but PUT has different > > semantics within each application boundary. PUT cannot possibly > > mean the same thing for all > > URIs, since PUT means two different > > things. > > So you are saying you do not believe in a uniform interface? We need > to know which application a URI is part of in order to build caches? > Howzat? My API's application of a constraint to PUT, to mean either create or overwrite but not both, has no effect on how intermediaries handle PUT requests. My application of a constraint to PUT helps implement a uniform interface, a requirement of REST that isn't mentioned anywhere in HTTP, and has no bearing on the behavior of intermediaries -- merely leverages their behavior. 
HTTP never had a uniform interface as its goal, so we have more than one meaning for methods like PUT and DELETE. Choosing one and only one meaning for each method and sticking with it for all resources in your app is a key part of applying the uniform interface constraint to your use of HTTP. > > > We'll never get DELETE to mean the same thing everywhere, even > > within the HTTP protocol (see again the WebDAV definition of DELETE > > compared to the Atom Protocol definition of DELETE). The best we > > can do is agree that it has a generic-interface meaning, and design > > our APIs accordingly, regardless of what the RFCs involved allow. > > > > The only method which fits your criteria, that it must unambiguously > > mean the same thing for all URIs everywhere, is GET. > > Why then does HTTP even try to define the meanings of methods? > Separation of concerns, visibility, all the reasons one uses REST instead of POST-only WS-*/SOAP. HTTP isn't an API, it's a protocol that can be fashioned into any number of APIs, RESTful or not. The protocol gives pretty broad definitions; a RESTful API is built by applying certain constraints to a protocol in order to achieve a uniform interface. Atom Protocol's constraints on HTTP come closer to defining a uniform interface than other HTTP-derived protocols, but even it makes no mention of a uniform interface. Atom Protocol may be constrained to build many different uniform-interface APIs, with all the considerable variation allowed for within the REST architectural style. > > DELETE means what it says in RFC 2616 (or whatever the HTTP wg comes > up with next). That is all that it means - on the web. What it means > in your application is up to you - as long as it extends the meaning > of what is in RFC 2616. > DELETE could also mean what it means in WebDAV, which is completely different from what it means in RFC 2616, while both are HTTP. 
As I detailed in another message, DELETE has many different meanings, and from them we can deduce a "generic" meaning of deleting one and only one resource. Genericity, like visibility, varies within REST. The constraint is a uniform interface, not a generic interface. Non- generic uses of DELETE are certainly allowed, provided they basically mean "remove something". Like DELETE in WebDAV can be interpreted in the generic sense, or in the sense of batch deletion, while DELETE in FTP can only be interpreted in the generic sense. But I prefer to use a different method instead of stretching a method beyond its generic meaning, like MDELETE. This allows my consistent- across-all-resources, constrained use of DELETE to match its generic meaning. Like using DELETE as a batch delete method, using MDELETE also reduces visibility, but without the cost in interface genericity, or the non-uniformity of an interface which interprets DELETE in its generic sense for some resources, while being used as batch delete for other resources. > > But you cannot insist that people ‘choose’ one or the other *more* > specific semantics of DELETE in their ‘application’ and stick to it. > All that is needed to be part of the web - and to be RESTful! - is to > stick to the semantics of DELETE as defined in RFC 2616. > So, you're saying WebDAV isn't part of the Web? HTTP consists of more methods, and in fact different definitions of methods, than just RFC 2616. It's possible to build a WebDAV app that's RESTful, but doesn't use DELETE as defined in RFC 2616, or use PUT or POST, but the protocol is still HTTP. It's possible to repurpose a method from some other protocol and make it part of your RESTful HTTP-based API -- if it's serendipitously re-used by enough others then it will eventually become part of the evolving HTTP standards family. Or, just use some other protocol for part of your API, like FTP. REST is protocol-agnostic, and beyond GET, method-agnostic. 
So REST can't be simplified by just following RFC 2616. HTTP makes no attempt to describe a uniform interface, and wasn't designed with the implementation of a uniform interface in mind. REST describes a set of constraints which may be applied to HTTP, or FTP, or other protocols (none of which say anything more about a uniform interface than HTTP does), singularly or in combination, to achieve any number of variations on the notion of a uniform interface. There is no reference implementation of REST; REST isn't a protocol, and therefore doesn't define a uniform interface -- only the constraints which need to be applied to achieve one. Saying that "‘Uniformly defined for all resources’ means *all* resources on the web" implies that REST is a protocol defining a single globally uniform interface, i.e. it's a building code rather than just a style of house. What's required for a uniform interface is "a consistent set of semantics across all resources", and that means choosing which of the possible meanings of DELETE is the best fit for your API, which meaning of PUT to constrain your API to, and whether or not some other method is more suitable to your usage of POST, even though POST is allowed multiple meanings as a catch-all method. -Eric
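The "choose one meaning of PUT and stick with it" constraint discussed in this exchange can be sketched in a few lines. This is a minimal illustration only -- the in-memory store and handler names are invented, not any particular framework's API:

```python
# Sketch: constrain PUT to "replace only" (one of the two choices the
# thread describes), and let POST mint URIs for creation, Atom
# Protocol style. `store` stands in for the server's resource state.

store = {"/person/101": b"<person>...</person>"}

def handle_put(uri, body):
    # Uniform choice: PUT never creates. A PUT to a URI the server
    # has not minted is rejected rather than silently creating it.
    if uri not in store:
        return 404, b""
    store[uri] = body            # replace the current representation
    return 200, store[uri]

def handle_post(collection_uri, body):
    # The server assigns the new URI and reports it to the client
    # (this value would go in the Location header of a 201 response).
    new_uri = collection_uri + "/102"
    store[new_uri] = body
    return 201, new_uri
```

With this split, PUT has exactly one meaning for every resource in the application, which is the uniform-interface point being argued above.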
Eric -
Here is how I see it. I think (hope) others also see it this way.
The web defines the architecture of a global hypertext system. [*]
The web is defined by a number of RFCs (HTTP 1/1, URI generic syntax,
etc.) and other specs.
Roy Fielding’s dissertation defines an architectural style, REST.
(Here I get hazy because I have not studied this sort of thing
deeply).
The web, as defined in these RFCs, (generally) conforms to the
constraints laid out in Roy Fielding’s dissertation.
In other words:
REST (architectural style)
> the web (architecture)
> your web app (the implementation)
REST defines the general constraints that the web conforms to. The web
defines the way things actually get done.
For instance, REST says you should have a uniform interface. The web
defines a uniform interface.
If you accept this model of the web, and I do, the things that you are
saying about DELETE don’t make much sense.
best,
Erik Hetzner
* These statements are general; there are certainly problematic edge
cases where HTTP does not, in fact, align with the REST style.
Bill de hOra wrote: > - Well known formats. Or at least well specified ones. You get so much > futureproofing against versioning by making the media type explicit. I'm > not a huge conneg fan (think it doesn't get used well), but the Accept > header is a huge win if you're building something that has to evolve and > support already deployed clients for years to come. > Just curious. Why not a big conneg fan? Is it that you prefer existing, well defined formats? To me, conneg seems to be one of the most powerful features of HTTP. Since REST pushes complexity into the data format, conneg seems uber critical. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Mon, 2009-04-06 at 10:25 -0400, Bill Burke wrote: > > Andrew S. Townley wrote: > > Alternatively, you invert the approach and implement common behavior > > based on the clients "detecting" the state of the application from the > > representation. > > > Sorry to pick out one tiny piece of your excellent post...But... > > IMO, there are very very few applications/clients that can approach > integration in this manner. In production systems, things have to be > well planned out and predictable or it will just be a disaster. I certainly agree with you that there are very, very few apps that *do* approach integration in this manner, but I don't agree that you can't have apps that *can* approach integration in this manner, even in highly structured, regulated and mission-critical deployments. You bring up a great point: "if things aren't well planned out and predictable...things will be a disaster." However, if you stop and think about it, do you know why this is true? I've been doing both large & small system integration for over 10 years, and I've both lived through the reality you described, and also been trying to find ways to make systems less brittle and more resilient to change because there's two core axioms of large systems development: 1) Things are going to change between when you start the system and when you get it "finished", if ever.... 2) The system is likely to live far longer than you expect it to If your integration is based on lots of out-of-band shared knowledge about the system state transitions, you do need a lot of formal planning and predictability in the way they work, because you code the endpoints based on those assumptions. It's really a self-fulfilling prophecy, actually. 
However, if you take the approach that you need to identify and expect a particular number of states and transitions, each of which are specified well rather than working from an API reference manual and system user guide (or functional specification), then you can potentially have more scalable, flexible and long-lived systems. You can also end up with a big mess if you don't manage it properly... I'm not saying that you still won't have to go through a similar amount of organizational management, politics and pain to arrive at these "interface" definitions any less than you will for a more traditional integration approach, but it's the difference in perspective (and outputs) that matter. Again, I'm not saying that this approach makes sense for every system on the planet, but I think it's critical to start thinking differently about the way we design, implement and extend the large-scale, cross organisational (and multi-national, in some cases) systems used by every one of us as businesses and individuals (directly or indirectly) each day. Another key point to remember: I'm not talking about altering the mission profile of the particular application or system, I'm simply focused on taking a (perhaps radically) different approach to how those systems interact to deliver the system's mission profile and the corresponding long-term business objectives of those who built and operate it. Cheers, ast -- Andrew S. Townley <ast@...> http://atownley.org
Andrew S. Townley wrote: > On Mon, 2009-04-06 at 10:25 -0400, Bill Burke wrote: >> Andrew S. Townley wrote: >>> Alternatively, you invert the approach and implement common behavior >>> based on the clients "detecting" the state of the application from the >>> representation. >> >> Sorry to pick out one tiny piece of your excellent post...But... >> >> IMO, there are very very few applications/clients that can approach >> integration in this manner. In production systems, things have to be >> well planned out and predictable or it will just be a disaster. > > I certainly agree with you that there are very, very few apps that *do* > approach integration in this manner, but I don't agree that you can't > have apps that *can* approach integration in this manner, even in highly > structured, regulated and mission-critical deployments. > > You bring up a great point: "if things aren't well planned out and > predictable...things will be a disaster." However, if you stop and > think about it, do you know why this is true? > It's true because stable systems are well tested. You can't test variability. > I've been doing both large & small system integration for over 10 years, > and I've both lived through the reality you described, and also been > trying to find ways to make systems less brittle and more resilient to > change because there's two core axioms of large systems development: > > 1) Things are going to change between when you start the system and when > you get it "finished", if ever.... > > 2) The system is likely to live far longer than you expect it to > > If your integration is based on lots of out-of-band shared knowledge > about the system state transitions, you do need a lot of formal planning > and predictability in the way they work, because you code the endpoints > based on those assumptions. It's really a self-fulfilling prophecy, > actually. 
> > However, if you take the approach that you need to identify and expect a > particular number of states and transitions, each of which are specified > well rather than working from an API reference manual and system user > guide (or functional specification), then you can potentially have more > scalable, flexible and long-lived systems. You can also end up with a > big mess if you don't manage it properly... > FYI, I wasn't bashing HATEOAS. I think it is extremely useful to have relationship links embedded in your messages and to traverse these links. I just don't think it's realistic to think that a client is going to be able to make state transition decisions dynamically based on looking at a self-describing message. Machines aren't humans. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
With all the recent discussion of HATEOAS, are there any JSON-based examples/exemplars worth looking at and learning from? Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
See http://www.subbu.org/blog/2008/10/generalized-linking for one
possibility.
Subbu
On Apr 11, 2009, at 5:36 PM, Michael Schuerig wrote:
>
>
>
> With all the recent discussion of HATEOAS, are there any JSON-based
> examples/exemplars worth looking at and learning from?
>
> Michael
>
> --
> Michael Schuerig
> mailto:michael@...
> http://www.schuerig.de/michael/
>
>
On Sunday 12 April 2009, Subbu Allamaraju wrote: > See http://www.subbu.org/blog/2008/10/generalized-linking for one > possibility. Would it be fair to summarize this and the linked articles as "the experts still need to make up their minds and there's no clear way to go for practitioners yet"? I'm only being slightly facetious. While I find REST and its surroundings worthwhile and somewhat interesting, the main topics of my work and interests are elsewhere. As such, I leave the driving to the experts and my question was a kind of "are we there yet?". I'm not complaining if the work is going to take more time. I'm just trying to find out whether there already is something for non-experts to use in their work. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
Hi, this is my first post, so forgive me if it's too platform specific for the discussion. I'm heavily involved in reengineering a website with ASP.Net MVC, and in designing the architecture I'm trying to use as many RESTful principles as the framework will allow. As such I've started by re-engineering all controllers to be resource-based, and implementing default Golden Seven actions (Index, Show, Create, Add, Edit, Update, Delete) on controllers that every resource inherits from as the standard. This is where I felt the pain of my first tradeoff between graceful degradation of a web rendering and adherence to REST, since I can't take PUT or DELETE requests in my Update and Delete actions without using Ajax on the View itself. I have had to default to the Microsoft method of using POST in a bad way as the substitute for PUT and DELETE, but the intention is in a subsequent iteration to redo them using the proper PUT and DELETE methods implemented by Ajax, at least until HTML5 becomes the standard. (Thanks Alan Dean for that tweetful bit of info) I'm currently struggling with 'grokking' other REST concepts in preparation for a new iteration, since it's not enough for me to just understand, as I need to bring the rest of my development team on board with my proposals at every step (believe me, it was a challenge to get them to accept the simplicity of re-engineering our awfully overcomplex Controllers to be Resource-oriented, and use Golden Seven actions as default). Is a web resource available that members know about that helps someone like me understand the challenges involved in getting an ASP.NET web application (specifically an ASP.Net MVC app) conforming to the architectural style that is REST? I went through the mailing list archives looking for any past threads on this without success. -- Nissan Dookeran http://redditech.blogspot.com http://redditech.wordpress.com ---- "Find a problem. Figure out how to solve the problem. 
Find more people with the same problem and you have a business." (Gary Schoeniger, founder of the Entrepreneurial Learning Initiative) The Law of Motion & Responsibility: If you are neither learning nor contributing you are needed elsewhere.
The Sun Cloud APIs are RESTful and use JSON for their resource representations <http://kenai.com/projects/suncloudapis/pages/CloudAPISpecificationResourceModels>. They totally respect HATEOAS and are a great example from what I've seen so far. Rich On Sun, Apr 12, 2009 at 10:15 AM, Michael Schuerig <michael@...> wrote: > On Sunday 12 April 2009, Subbu Allamaraju wrote: >> See http://www.subbu.org/blog/2008/10/generalized-linking for one >> possibility. > > Would it be fair to summarize this and the linked articles as "the > experts still need to make up their minds and there's no clear way to go > for practitioners yet"? I'm only being slightly facetious. While I find > REST and its surroundings worthwhile and somewhat interesting, the main > topics of my work and interests are elsewhere. As such, I leave the > driving to the experts and my question was a kind of "are we there > yet?". I'm not complaining if the work is going to take more time. I'm > just trying to find out whether there already is something for non- > experts to use in their work. > > Michael > > -- > Michael Schuerig > mailto:michael@... > http://www.schuerig.de/michael/
Hi, Could you comment on the pros and cons of using one of the RESTful service implementations from the list below?

Restlet
Jersey
Spring MVC
Axis
any other

thanks
On Sunday 12 April 2009, Richard Wallace wrote: > The Sun Cloud APIs are RESTful and use JSON for their resource > represenations > <http://kenai.com/projects/suncloudapis/pages/CloudAPISpecificationRe >sourceModels>. They totally respect HATEOAS and are a great example > from what I've seen so far. They are RESTful, but about HATEOAS I'm not sure. I may just be betraying my ignorance here as I haven't even tried to follow the discussion. I formed my (mis-)understanding of HATEOAS mostly from (mis-)reading Richardson & Ruby's "RESTful Web Services". There I got the idea that HATEOAS implies something like "affordances for machines", in other words, a client program (i) doesn't have to guess which states there are to advance to from the current state and, once it has decided where to go, it (ii) doesn't have to tinker with bits and pieces of the target address to go there. A client program would be able to discern possible transitions from the resource representation itself, thus becoming more resilient to changes. The Sun Cloud APIs don't seem to follow this particular interpretation. As far as I can tell, it consists of hierarchically related resources, which are delivered in such a way that lower-level resources are physically contained within higher-level ones. I don't see how a client program would be able to do anything with this representation without a complete understanding of it. There would have to be built-in knowledge, that Virtual Data Centers have addresses and clusters as sub-resources; and as representations of these sub-resources are simply contained in the representation of the VDC, there is no linking, or hypermedia, involved. This way of modeling breaks for circular relations among resources, or if there are just too many to include or even link them individually. Again, I might be utterly confused about HATEOAS. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
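For contrast with the embedded-sub-resource modeling described above, here is a hypothetical link-based variant of such a representation, with a client that discovers transitions from the document itself rather than from built-in knowledge of the hierarchy. All field names and URIs are invented for illustration and are not taken from the Sun Cloud API:

```python
import json

# A "VDC"-like document where sub-resources are linked, not embedded,
# so the representation itself advertises the available transitions.
doc = json.loads("""
{
  "name": "my-vdc",
  "links": [
    {"rel": "self",      "href": "http://example.org/vdc/1"},
    {"rel": "clusters",  "href": "http://example.org/vdc/1/clusters"},
    {"rel": "addresses", "href": "http://example.org/vdc/1/addresses"}
  ]
}
""")

def follow(doc, rel):
    # The client never constructs or tinkers with URIs; it only
    # selects a link by its relation name.
    for link in doc["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None
```

A client written against this shape depends only on the `clusters` relation, not on the URI layout, which is the "affordances for machines" idea in the message above.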
Michael,
Agreed - that particular solution seems to lack HATEOAS.
However, I don't think that should reflect on JSON as a potential
hypermedia representation. The article Subbu provided
(http://www.subbu.org/blog/2008/10/generalized-linking) is a great
example of what is possible with JSON in that regard.
Kris Zyp's efforts on JSON referencing, whilst maybe not as extensible
as Subbu's proposal, have been implemented on the client side in the
Dojo toolkit
(http://www.sitepen.com/blog/2008/06/17/json-referencing-in-dojo/), and
they seem to provide an adequate hypermedia mechanism for 'basic' HATEOAS.
The most significant difference I see between Subbu and Kris' proposals
is the optional content negotiation properties - namely 'type' and
'hreflang' - I absolutely agree with Subbu that these properties are
important for extensibility, but clearly (since they are optional) they
are not 'crucial'.
Interestingly, other hypermedia formats like HTML provide no concrete
markup for content negotiation either, although they probably should in
my opinion! HTML5 is not adding markup for this at the time of writing :(
Subbu - one thing that struck me was that the 'title' and 'length'
properties seem like meta-data that could (should?) be acquired from
headers in a HEAD response from the URI being linked to, rather than being
burned into the referencing hyperlink itself.
I've come up with another possible alternative for greater
extensibility, which is to allow an arbitrary list of headers for a
hyperlink, rather than singling out content-type and language.
For example:
"link" : {
"rel" : "self",
"href" : "http://example.org/movie/11",
"headers" : {
"Accept" : "application/json",
"Accept-Language" : "en-gb",
"Accept-Encoding" : "compress, gzip",
"x-custom-header" : "foo"
}
}
The headers list could be treated as advisory in the sense that a client
could choose to ignore or override the headers specified in the link,
with the understanding that the server could opt not to respect them at
all (as per the HTTP RFC).
Any thoughts?
Regards,
Mike
http://twitter.com/mike__kelly
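One way a client might consume the link-with-headers object proposed above, treating the embedded headers as advisory defaults it may override. The merge policy here (client overrides win) is just one reading of "advisory", and the structure is the one from the message, not an established format:

```python
# A link object shaped like the proposal in the message above.
link = {
    "rel": "self",
    "href": "http://example.org/movie/11",
    "headers": {
        "Accept": "application/json",
        "Accept-Language": "en-gb",
        "Accept-Encoding": "compress, gzip",
        "x-custom-header": "foo",
    },
}

def build_request(link, overrides=None):
    # Start from the advisory headers carried in the link, then let
    # the client override (or, by omission, simply accept) them.
    headers = dict(link.get("headers", {}))
    headers.update(overrides or {})
    return link["href"], headers
```

For example, `build_request(link, {"Accept-Language": "de"})` keeps the advisory `Accept` but swaps the language, which matches the idea that the server may in turn ignore either.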
Michael Schuerig wrote:
> On Sunday 12 April 2009, Richard Wallace wrote:
>
>> The Sun Cloud APIs are RESTful and use JSON for their resource
>> represenations
>> <http://kenai.com/projects/suncloudapis/pages/CloudAPISpecificationRe
>> sourceModels>. They totally respect HATEOAS and are a great example
>> from what I've seen so far.
>>
>
> They are RESTful, but about HATEOAS I'm not sure. I may just be
> betraying my ignorance here as I haven't even tried to follow the
> discussion. I formed my (mis-)understanding of HATEOAS mostly from
> (mis-)reading Richardson & Ruby's "RESTful Web Services". There I got
> the idea that HATEOAS implies something like "affordances for machines",
> in other words, a client program (i) doesn't have to guess which states
> there are to advance to from the current state and, once it has decided
> where to go, it (ii) doesn't have to tinker with bits and pieces of the
> target address to go there. A client program would be able to discern
> possible transitions from the resource representation itself, thus
> becoming more resilient to changes.
>
> The Sun Cloud APIs don't seem to follow this particular interpretation.
> As far as I can tell, it consists of hierarchically related resources,
> which are delivered in such a way that lower-level resources are
> physically contained within higher-level ones. I don't see how a client
> program would be able to do anything with this representation without a
> complete understanding of it. There would have to be built-in knowledge,
> that Virtual Data Centers have addresses and clusters as sub-resources;
> and as representations of these sub-resources are simply contained in
> the representation of the VDC, there is no linking, or hypermedia,
> involved. This way of modeling breaks for circular relations among
> resources, or if there are just too many to include or even link them
> individually.
>
> Again, I might be utterly confused about HATEOAS.
>
> Michael
>
>
On Tuesday 14 April 2009, Mike Kelly wrote: > Michael, > > Agreed - that particular solution seems to lack HATEOAS. > > However, I don't think that should reflect on JSON as a potential > hypermedia representation. The article Subbu provided > (http://www.subbu.org/blog/2008/10/generalized-linking) is a great > example of what is possible with JSON in that regard. > > Kris Zyp's efforts on JSON referencing, whilst maybe not as > extensible as Subbu's proposal, have been implemented on the client > side in the Dojo toolkit > (http://www.sitepen.com/blog/2008/06/17/json-referencing-in-dojo/), > and they seem to provide an adequate hypermedia mechanism for 'basic' > HATEOAS. Yes, I know Kris's articles and, indeed, I'm using Dojo for a client accessing a RESTful service. However, unless I'm much mistaken, these proposals only address linking, which, of course, is essential for HATEOAS, but not all of it. > The most significant difference I see between Subbu and Kris' > proposals are the optional content negotation properties - namely > 'type' and 'hreflang' - I absolutely agree with Subbu that these > properties are important for extensbility, but clearly (since they > are optional) they are not 'crucial'. > > Interestingly; other hypermedia formats like HTML provide no concrete > markup for content negotation either, although they probably should > in my opinion! HTML5 is not adding markup for this at time of writing > :( IOW, the experts haven't decided on how things should be done, yet. That's fine, but as I'm looking for practically applicable guidelines and examples, I realized that I'm too early and just have to wait some more. As much as I'm interested in the results, I don't want to enter this particular discussion. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
Michael Schuerig wrote: > On Tuesday 14 April 2009, Mike Kelly wrote: > >> Michael, >> >> Agreed - that particular solution seems to lack HATEOAS. >> >> However, I don't think that should reflect on JSON as a potential >> hypermedia representation. The article Subbu provided >> (http://www.subbu.org/blog/2008/10/generalized-linking) is a great >> example of what is possible with JSON in that regard. >> >> Kris Zyp's efforts on JSON referencing, whilst maybe not as >> extensible as Subbu's proposal, have been implemented on the client >> side in the Dojo toolkit >> (http://www.sitepen.com/blog/2008/06/17/json-referencing-in-dojo/), >> and they seem to provide an adequate hypermedia mechanism for 'basic' >> HATEOAS. >> > > Yes, I know Kris's articles and, indeed, I'm using Dojo for a client > accessing a RESTful service. However, unless I'm much mistaken, these > proposals only address linking, which, of course, is essential for > HATEOAS, but not all of it. > > Hi Michael, Subbu's proposal addresses media-type and language based content negotiation, so it's more than just linking. Either way, I think it's fair to say that the uniform interface and URIs provide everything required to build a RESTful interface that leverages HATEOAS. Additional hyperlink markup is icing on the hypermedia cake that makes server-driven negotiation possible but, as I understand it, HTTP is fairly liberal in defining how negotiation should be approached (http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html) which is a good indication that HATEOAS can actually be agent or server driven (or both). The most common solution is to use dot notation tacked-on to the end of a URI (document.xml, document.json) and to perform agent-driven or transparent content negotiation - I'm not a particularly big fan of this approach because it dilutes the meaningfulness of URIs, and makes life more complicated for intemediaries like caches - e.g. 
how does a cache know whether or not a PUT to document.xml invalidates the cache for document.json? There are solutions to this problem by teaching intermediaries about your special URI patterns, but this is unnecessarily complicated and expensive when HTTP provides an adequate protocol level alternative. It's also not very uniform. Despite this, putting content negotiation in URIs and leaving it to agents is extremely common - so in some respect you could actually consider linking 'all of HATEOAS'. >> The most significant difference I see between Subbu and Kris' >> proposals are the optional content negotiation properties - namely >> 'type' and 'hreflang' - I absolutely agree with Subbu that these >> properties are important for extensibility, but clearly (since they >> are optional) they are not 'crucial'. >> >> Interestingly, other hypermedia formats like HTML provide no concrete >> markup for content negotiation either, although they probably should >> in my opinion! HTML5 is not adding markup for this at time of writing >> :( >> > > IOW, the experts haven't decided on how things should be done, yet. > That's fine, but as I'm looking for practically applicable guidelines > and examples, I realized that I'm too early and just have to wait some > more. As much as I'm interested in the results, I don't want to enter > this particular discussion. > > I only really mentioned HTML as an example that this ambiguity in hyperlink semantics exists in well established markup, not just in newbies like JSON. The cake's fine in both though so don't worry! :) So, practically, I think the best thing to do is pick a markup and client library that provide 'basic HATEOAS' and allow for backward compatible extension of the hyperlink markup. Whether Dojo and JSON referencing provide this, I'm not sure. Subbu's markup looked pretty good but I don't know of any existing clients for it. Cheers, Mike http://twitter.com/mike__kelly
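The protocol-level alternative mentioned above (one URI, server-driven negotiation on the Accept header) might be sketched like this. The handler shape is illustrative only, and the matching is deliberately naive, ignoring the q-value rules of RFC 2616:

```python
# One URI, multiple representations; the server picks one based on
# the Accept header instead of encoding the format in the URI.
REPRESENTATIONS = {
    "application/json": '{"id": 11}',
    "application/xml": "<doc id='11'/>",
}

def negotiate(accept_header):
    # Naive matching: the first offered media type present in the
    # Accept header wins. A real server would honour q-values.
    for media_type, body in REPRESENTATIONS.items():
        if media_type in accept_header or "*/*" in accept_header:
            return 200, media_type, body
    return 406, None, None  # Not Acceptable

# Since the response varies on Accept, the server must also send
# "Vary: Accept" -- a cache then keeps one entry per variant, and a
# PUT to the single URI invalidates all of them, avoiding the
# document.xml / document.json problem described above.
```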
Some of these issues have been discussed on the restful-json google group [1], although you may already be aware of that. If not, it might be worth looking at the previous discussions there, as a lot of the different use cases and needs are mentioned. The JSON referencing mechanism that we use in Dojo does indeed lack extensibility, but I think we have made significant effort towards a more extensible mechanism for hyperlinking in JSON with a meta-specification that uses JSON Schema to define hyperlink structures [2] (and provides a superset of the capabilities of Subbu's proposal as well). Discussions on restful-json have cooled off lately, but I'd be glad to continue working on evolving a JSON hyperlinking mechanism, as I think we have a good foundation now. [1] http://groups.google.com/group/restful-json [2] http://groups.google.com/group/restful-json/browse_thread/thread/cf4b0bd444f5fd83 Thanks, Kris
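For readers unfamiliar with the id/$ref-style referencing Kris mentions, here is a rough, simplified illustration of the general mechanism (the data and helper names are invented; the real Dojo implementation handles more cases, such as lazy loading of external references):

```python
# Objects carry an "id"; a {"$ref": <id>} object points back at one of
# those ids. This resolver inlines internal references in a document.

def resolve_refs(doc):
    """Resolve internal {"$ref": id} objects against objects carrying "id"."""
    index = {}

    def collect(node):
        # First pass: index every object that declares an "id".
        if isinstance(node, dict):
            if "id" in node:
                index[node["id"]] = node
            for value in node.values():
                collect(value)
        elif isinstance(node, list):
            for value in node:
                collect(value)

    def substitute(node):
        # Second pass: replace {"$ref": ...} objects with their targets.
        if isinstance(node, dict):
            if "$ref" in node and node["$ref"] in index:
                return index[node["$ref"]]
            return {k: substitute(v) for k, v in node.items()}
        if isinstance(node, list):
            return [substitute(v) for v in node]
        return node

    collect(doc)
    return substitute(doc)

order = resolve_refs({
    "customer": {"id": "cust1", "name": "Jane"},
    "shipTo": {"$ref": "cust1"},
})
```

The point of the thread is that this only gives you *linking*; the JSON Schema meta-specification Kris links to is about layering richer hyperlink semantics (relations, templating, negotiation hints) on top of a structure like this.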
If you're not a Java programmer, ignore: For the Resteasy project, I created JAXB mapping for Atom. Since we can automatically marshall any JAXB hierarchy to JSON you can use Atom feeds or just embed Atom link objects into your JAXB classes. Michael Schuerig wrote: > > > > > With all the recent discussion of HATEOAS, are there any JSON-based > examples/exemplars worth looking at and learning from? > > Michael > > -- > Michael Schuerig > mailto:michael@... <mailto:michael%40schuerig.de> > http://www.schuerig.de/michael/ <http://www.schuerig.de/michael/> > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Hi Michael, On 12.04.2009, at 19:15, Michael Schuerig wrote: > Would it be fair to summarize this and the linked articles as "the > experts still need to make up their minds and there's no clear way > to go > for practitioners yet"? Yes, I think that's a fair summary. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Tuesday 14 April 2009, Bill Burke wrote: > If you're not a Java programmer, ignore: I'm multi-lingual, as far as programming is concerned, and quite willing to learn from examples in languages other than the ones I'm currently using myself. > For the Resteasy project, I created JAXB mapping for Atom. Since we > can automatically marshall any JAXB hierarchy to JSON you can use > Atom feeds or just embed Atom link objects into your JAXB classes. Plainly, I can't get excited, but maybe that's just me not understanding what I'm looking at. I've had another look at Subbu's article on generalized linking[*] and there my reaction is the same: So what? All I see are proposals for expressing links among resources. What I haven't seen anywhere are examples of clients using these representations. In particular, I'd like to see examples that demonstrate that and how HATEOAS results in better, more robust client code. Michael [*] http://www.subbu.org/blog/2008/10/generalized-linking -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Apr 14, 2009, at 4:36 AM, Mike Kelly wrote: > Subbu - one thing that struck me was that the 'title' and 'length' > properties seem like meta-data that could (should?) be acquired from > headers in a HEAD response from the URI being linked to, rather > than being burned into the referencing hyperlink itself. True, for the length property. I included it for completeness. However, I am not aware of any link in HTML or Atom advertising length for linked resources. Subbu --- http://subbu.org
On Apr 14, 2009, at 9:03 AM, Michael Schuerig wrote: > generalized linking[*] and there my reaction is the same: So what? > All I > see are proposals for expressing links among resources. What I haven't > seen anywhere are examples of clients using these representations. In > particular, I'd like to see examples that demonstrate that and how > HATEOAS results in better, more robust client code. Here is the problem. There are very few external-facing, successful services that illustrate things like this. For internal apps, IMO, there is less motivation to treat URIs as opaque as it is tempting to assume that clients know about URIs as well as all possible state transitions. Under such assumptions, HATEOAS may seem like an unnecessarily pedantic exercise. IMO, it is best to evaluate the pros and cons of HATEOAS, or for that matter, any other idea, in the context of your own apps. If you are convinced that HATEOAS is beneficial for your apps, then the proposals that have been discussed so far can help. Lack of an IETF RFC or a W3C recommendation should not, IMHO, prevent anyone from doing the right things for their apps. Subbu --- http://subbu.org
IMHO, HATEOAS techniques and tooling are a key aspect of success for REST in internal apps. I respectfully disagree on the following point: For internal apps, IMO, there is less motivation to treat URIs as opaque as > it is tempting to assume that clients know about URIs as well as all > possible state transitions I'm guessing that external APIs haven't delved into HATEOAS as deeply as they could have specifically because they have different problems than internal apps do. If anything, I would suggest that internal apps have a bigger need for opaque URIs and possible state transitions. Internal APIs tend to be more fine-grained and have a richer feature set than their external counterparts. Those fine-grained APIs are more prone to change over time. HATEOAS is meant to help manage those kinds of changes. -Solomon On Tue, Apr 14, 2009 at 12:38 PM, Subbu Allamaraju <subbu@...> wrote: > > > > On Apr 14, 2009, at 9:03 AM, Michael Schuerig wrote: > > > generalized linking[*] and there my reaction is the same: So what? > > All I > > see are proposals for expressing links among resources. What I haven't > > seen anywhere are examples of clients using these representations. In > > particular, I'd like to see examples that demonstrate that and how > > HATEOAS results in better, more robust client code. > > Here is the problem. There are very few external facing and successful > services that illustrate things like this. For internal apps, IMO, > there is less motivation to treat URIs as opaque as it is tempting to > assume that clients know about URIs as well as all possible state > transitions. Under such assumptions, HATEOAS may seem like an > unnecessary pedantic exercise. > > IMO, it is best to evaluate pros and costs of HATEOAS, or for that > matter, any other idea, in the context of your own apps. If you are > convinced that HATEOAS is beneficial for your apps, then the proposals > that have been discussed so far can help. 
> > Lack of an IETF RFC or a W3C recommendation should not, IMHO, prevent > anyone from doing the right things for their apps. > > Subbu > --- > http://subbu.org > > >
I don't think I said it right. Should have said "For internal apps, developers may find it less motivating to ..." Subbu On Apr 14, 2009, at 10:30 AM, Solomon Duskis wrote: > IMHO, HATEOAS techniques and tooling are a key aspect of success for > REST > in internal apps. > > I respectfully disagree on the following point: > > For internal apps, IMO, there is less motivation to treat URIs as > opaque as >> it is tempting to assume that clients know about URIs as well as all >> possible state transitions > > > I'm guessing that external APIs haven't delved into HATEOAS as > deeply as > they could have specifically because they have different problems that > internal apps do. > > If anything, I would suggest that internal apps have a bigger need for > opaque URIs and possible state transitions. Internal APIs tend to > be more > fine-grained and have a richer feature set than their external > counterparts. Those fine-grained APIs are more prone to change over > time. > HATEOAS is meant help manage those kinds of changes . > > -Solomon > > On Tue, Apr 14, 2009 at 12:38 PM, Subbu Allamaraju <subbu@...> > wrote: > >> >> >> >> On Apr 14, 2009, at 9:03 AM, Michael Schuerig wrote: >> >>> generalized linking[*] and there my reaction is the same: So what? >>> All I >>> see are proposals for expressing links among resources. What I >>> haven't >>> seen anywhere are examples of clients using these representations. >>> In >>> particular, I'd like to see examples that demonstrate that and how >>> HATEOAS results in better, more robust client code. >> >> Here is the problem. There are very few external facing and >> successful >> services that illustrate things like this. For internal apps, IMO, >> there is less motivation to treat URIs as opaque as it is tempting to >> assume that clients know about URIs as well as all possible state >> transitions. Under such assumptions, HATEOAS may seem like an >> unnecessary pedantic exercise. 
>> >> IMO, it is best to evaluate pros and costs of HATEOAS, or for that >> matter, any other idea, in the context of your own apps. If you are >> convinced that HATEOAS is beneficial for your apps, then the >> proposals >> that have been discussed so far can help. >> >> Lack of an IETF RFC or a W3C recommendation should not, IMHO, prevent >> anyone from doing the right things for their apps. >> >> Subbu >> --- >> http://subbu.org >> >> >> --- http://subbu.org
Solomon Duskis wrote: > > > > IMHO, HATEOAS techniques and tooling are a key aspect of success for > REST in internal apps. > > I respectfully disagree on the following point: > > For internal apps, IMO, there is less motivation to treat URIs as > opaque as it is tempting to assume that clients know about URIs as > well as all possible state transitions > > > I'm guessing that external APIs haven't delved into HATEOAS as deeply as > they could have specifically because they have different problems that > internal apps do. > > If anything, I would suggest that internal apps have a bigger need for > opaque URIs and possible state transitions. Internal APIs tend to be > more fine-grained and have a richer feature set than their external > counterparts. Those fine-grained APIs are more prone to change over > time. HATEOAS is meant help manage those kinds of changes . > I disagree. HATEOAS helps humans manage change pretty well on the web. I'm very skeptical it can do the same for machine clients, for reasons stated in an earlier thread. Still, that doesn't mean it's not useful. It's just not as useful for machine clients as it is for humans. I know I stated this in an earlier thread, but combining links (HATEOAS), conneg, custom versioned media types, and XML schema is the most interesting for me. Then, you can have validated, guaranteed, versioned interactions with your services. Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill Burke wrote: > Solomon Duskis wrote: > >> >> IMHO, HATEOAS techniques and tooling are a key aspect of success for >> REST in internal apps. >> >> I respectfully disagree on the following point: >> >> For internal apps, IMO, there is less motivation to treat URIs as >> opaque as it is tempting to assume that clients know about URIs as >> well as all possible state transitions >> >> >> I'm guessing that external APIs haven't delved into HATEOAS as deeply as >> they could have specifically because they have different problems that >> internal apps do. >> >> If anything, I would suggest that internal apps have a bigger need for >> opaque URIs and possible state transitions. Internal APIs tend to be >> more fine-grained and have a richer feature set than their external >> counterparts. Those fine-grained APIs are more prone to change over >> time. HATEOAS is meant help manage those kinds of changes . >> >> > > I disagree. HATEOAS helps humans manage change pretty well on the web. > I'm very skeptical it can do the same for machine clients, for reasons > stated in an earlier thread. Still, that doesn't mean its not useful. > Its just not as useful for machine clients as it is for humans. > > I know I stated this in an earlier thread, but combining links > (HATEOAS), conneg, custom versioned media types, and XML schema is the > most interesting for me. Then, you can have validated, guaranteed, > versioned interactions with your services. > Hi Bill, Are there benefits of using custom versioned media types instead of using a combination of standard media types and a custom version header? Regards, Mike
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > > Are there benefits of using custom versioned media types instead of > using a combination of standard media types and a custom version > header? Well, there is the fact that HTTP clients tend to have passable to good support for custom media types. Not so much for a version header. Further, a media type is a better way to distinguish between versions of an API because the representations are highly likely to have changed.
Why should a client or server care about explicit versioning? What does it buy you? You say that "representations are highly likely to have changed." Shouldn't a RESTful interaction inherently handle those changes gracefully? -Solomon On Tue, Apr 14, 2009 at 3:28 PM, Peter Williams <pezra@barelyenough.org>wrote: > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Mike > Kelly <mike@...> wrote: > > > > Are there benefits of using custom versioned media types instead of > > using a combination of standard media types and a custom version > > header? > > Well, there is the fact that HTTP clients tend to have passable to good > support for custom media types. Not so much for a version header. Further, a > media type is a better way to distinguish between versions of an API because > the representations are highly likely to have changed. > > >
For what it's worth, the document format versioning issue has recently been beaten to death as part of the HTML 5 effort. A quick google search generated this summary: http://edward.oconnor.cx/2008/01/html-versioning and this W3C article: http://www.w3.org/QA/2007/12/version_identifiers_reconsider.html (Not sure if there is more recent coverage...) In short: Do your damnedest to maintain backwards compatibility as you evolve your document format. If you have to break it, then it's either a new media type or you indicate it in the document somehow -- but there doesn't seem to be strong consensus on the right way to do that. Andrew Wahbe --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > > Why should a client or server care about explicit versioning? What does it > buy you? You say that "representations are highly likely to have changed." > Shouldn't a RESTful interaction inherently handle those changes gracefully? > > -Solomon > > On Tue, Apr 14, 2009 at 3:28 PM, Peter Williams <pezra@...>wrote: > > > > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Mike > > Kelly <mike@> wrote: > > > > > > Are there benefits of using custom versioned media types instead of > > > using a combination of standard media types and a custom version > > > header? > > > > Well, there is the fact that HTTP clients tend to have passable to good > > support for custom media types. Not so much for a version header. Further, a > > media type is a better way to distinguish between versions of an API because > > the representations are highly likely to have changed. > > > > > > >
--- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > > Why should a client or server care about explicit versioning? What does it > buy you? You say that "representations are highly likely to have changed." > Shouldn't a RESTful interaction inherently handle those changes gracefully? REST does provide a rather graceful way to handle versioning. By exposing the application semantics explicitly in the representations, the application semantics can be changed just by changing the representations. Of course, this has the potential to break clients if such changes are implemented unilaterally. HTTP's content negotiation provides a powerful implementation of this approach. The client and server get to negotiate which of the available flavors of representations, and thereby which application semantics, to use. The server can prevent breakage by supporting multiple flavors/versions of the representations simultaneously. > -Solomon > > On Tue, Apr 14, 2009 at 3:28 PM, Peter Williams <pezra@...> wrote: > > > > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Mike > > Kelly <mike@> wrote: > > > > > > Are there benefits of using custom versioned media types instead of > > > using a combination of standard media types and a custom version > > > header? > > > > Well, there is the fact that HTTP clients tend to have passable to good > > support for custom media types. Not so much for a version header. Further, a > > media type is a better way to distinguish between versions of an API because > > the representations are highly likely to have changed. > > > > >
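Peter's "multiple flavors/versions simultaneously" can be sketched as a server keeping one canonical model and several serializers, selected during negotiation. The vendor media type names and field layouts below are invented for illustration only:

```python
import json

# One resource, several simultaneously supported representation "flavors",
# chosen via content negotiation on hypothetical versioned media types.
RESOURCE = {"given": "Jane", "family": "Doe", "nickname": "JD"}

def render_v1(r):
    # v1 exposed a single "name" field.
    return json.dumps({"name": "%s %s" % (r["given"], r["family"])})

def render_v2(r):
    # v2 split the name apart and added a nickname.
    return json.dumps({"given": r["given"], "family": r["family"],
                       "nickname": r["nickname"]})

FLAVORS = {
    "application/vnd.example.person-v1+json": render_v1,
    "application/vnd.example.person-v2+json": render_v2,
}

def respond(accept):
    """Return (status, content_type, body) for an Accept header value."""
    for offered in accept.split(","):
        media_type = offered.split(";")[0].strip()
        if media_type in FLAVORS:
            return 200, media_type, FLAVORS[media_type](RESOURCE)
    return 406, None, None
```

Old clients keep asking for the v1 type and keep working; new clients opt into v2; the server can retire `render_v1` once nobody negotiates for it anymore.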
> HTTP's content negotiations provides a powerful implementation of this approach. > The client and server get to negotiate which of the available flavor of representations, > and there by application semantics, to use. The server can prevent breakage by > supporting multiple flavors/versions of the representations simultaneously. I've heard that a lot lately. The approach of using media types for versioning has two massive drawbacks: 1. Multiplication of media types 2. Side-by-side approach to building formats I think it sends the wrong message. You can and should do versioning inside an existing media type, provided you accounted for this by not unreasonably limiting your options (for example by restricting yourself to a closed xsd). Plus, it's not gonna be simple. It's like arguing that having all the flavours of RSS and Atom to support is better for extensibility. If you have to create a new version, you've already accepted there's a fatal flaw in your current format. Seb > -Solomon > > On Tue, Apr 14, 2009 at 3:28 PM, Peter Williams <pezra@...> wrote: > > > > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Mike > > Kelly <mike@> wrote: > > > > > > Are there benefits of using custom versioned media types instead of > > > using a combination of standard media types and a custom version > > > header? > > > > Well, there is the fact that HTTP clients tend to have passable to good > > support for custom media types. Not so much for a version header. Further, a > > media type is a better way to distinguish between versions of an API because > > the representations are highly likely to have changed. > > > > >
Peter Williams wrote: > --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > >> Why should a client or server care about explicit versioning? What does it >> buy you? You say that "representations are highly likely to have changed." >> Shouldn't a RESTful interaction inherently handle those changes gracefully? >> > > REST does provide a rather graceful way to handle versioning. By exposing the application semantics explicitly in the representations, the application semantics can be change just by changing the representations. Of course, this has the potential to break clients if such changes are implemented unilaterally. > > HTTP's content negotiations provides a powerful implementation of this approach. The client and server get to negotiate which of the available flavor of representations, and there by application semantics, to use. The server can prevent breakage by supporting multiple flavors/versions of the representations simultaneously. > Would you do this with custom versioned media types, or with standard media types and an extra version header?
Sebastien Lambla wrote: > > I've heard that a lot lately. The approach of using mediatypes for > versioning has two massive drawbacks: > 1. Multiplication of media types > 2. Side-by-side approach to building formats > > I think it sends the wrong message. You can and should do versioning inside > an existing media type, provided you accounted for this by not unreasonably > limiting your options (for example by restricting yourself to a closed xsd). > > Plus, it's not gonna be simple. It's like argueing that having all the > flavours of RSS and ATOM to support is better to support extensibility. If > you have to create a new version, you've already accepted there's a fatal > flaw in your current format. > > Seb > > If the rationale is that a given format's document structure could change over time, I can see how versioning could make some sense. This kind of versioning adds another dimension to conneg on top of media types, language, encoding, etc. which is why I'm interested in whether it's feasible or not to use an additional header for these purposes. Wouldn't versioning inside existing media types unnecessarily increase the data throughput for client requests that only require older (i.e. smaller) versions of a format? Regards, Mike
> Wouldn't versioning inside existing media types unnecessarily increase > the data throughput for client requests that only require older (i.e. > smaller) versions of a format? The thing is that I disagree with the "versioning" within the format as well. I've had success designing forward and backward compatible formats based on container media types. If your container is generic enough it won't need to change. You can support multiple versions by continuing to support backward compat and adding new elements. Your v1 client still behaves the same way, your v2 client doesn't know about v1. On the server side, you process the document by supporting backward compat (understand the previous elements) and forward compat (ignore the ones you don't understand, and advertise an error if something goes wrong). Some people would argue that it makes the code of your media type more complicated on the server; I'd argue that this level of complexity to support multiple versions will be the same even with different media types: you don't do the switch in the document parser, you somehow either reimplement functionality in each, or have a switch somewhere else. It follows the principle of the web: easy to produce, not easy to consume. But then again multiple version support is never easy, whichever path you take. I'd rather leave that complexity where I control it. Seb
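Seb's forward/backward compatibility rules amount to "must-ignore" processing: a consumer reads only the fields it knows about and silently skips the rest. A toy sketch, with invented field names, of how a v1 client survives a v2 server adding an element:

```python
# Fields defined by the (hypothetical) v1 format specification.
V1_KNOWN_FIELDS = {"id", "name"}

def parse_v1(document):
    """Extract the v1 view of a document, ignoring unknown fields."""
    unknown = set(document) - V1_KNOWN_FIELDS
    view = {k: document[k] for k in V1_KNOWN_FIELDS if k in document}
    return view, unknown  # "unknown" kept only for logging/diagnostics

# A v2 server added "email"; the v1 client still works unchanged.
view, skipped = parse_v1({"id": 42, "name": "Jane", "email": "jane@example.org"})
```

The rule only buys forward compatibility for *additive* changes; renaming or removing an element the old spec required is exactly the case where the thread's side-by-side versioning proposals come in.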
Sebastien Lambla wrote: >> Wouldn't versioning inside existing media types unnecessarily increase >> the data throughput for client requests that only require older (i.e. >> smaller) versions of a format? >> > > The thing is that I disagree with the "versioning" within the format as > well. I've had success designing forward and backward compatible formats > based on container media types. > > If your container is generic enough it won't need to change. You can support > multiple versions by continuing supporting backward compat and adding new > elements. Your v1 client still behaves the way, your v2 client doesn't know > about v1. On the server side, you process the code by supporting backward > compat (understand the previous elements) and forward compat (ignore the > ones you don't understand, and advertise an error if something goes wrong). > > Some people would argue that it makes the code of your media type more > complicated on the server, I'd argue that this level of complexity to > support multiple version will be the same even with different media types: > you don't do the switch in the document parser, you somehow either > reimplement functionality in each, or have a switch somewhere else. > > It follows the principle of the web: easy to produce, not easy to consume. > But then again multiple version support is never easy, whichever path you > go. I'd rather leave that complexity where I control it. > > Seb > > I'm not proposing to use different media types, I would use standard media types and simply add an extra versioning header to the conneg process i.e. Accept: application/json X-Version: 1.2 This removes the obligation of clients to handle evolving document structures (but still allows this behaviour by omitting the version header), and also removes the need for complicated, and potentially wasteful, multiple versioning within media types. 
I would argue that different versions of media types are separate representations of a resource, the same as content-language, and versioning should therefore be part of the conneg process in its own right. - Mike
> I would argue that different versions of media types are separate > representations of a resource, the same as content-language, and > versioning should therefore be part of the conneg process in its own right. Whether you version based on the media type or on an additional header, you still negotiate the communication format version. If you accept the precept that a media type defines the format that is to be used for the state exchange, then you'll agree with me that any attribute that changes the media type (i.e. the document I need to read to implement said media type) becomes inherently part of the media type identifier. So I really see no difference between Accept: application/vnd.acme.customer+json X-Version: 1.2 And Accept: application/vnd.acme.customer+json;version=1.2 As I've mentioned, I'm not a fan of either because the intent of explicit versioning is to run various versions side by side, rather than support back / forward compatibility. My experience in that area is that the former is a recipe for hell and disaster, as it encourages a lack of investment and extensibility in your original media type, somehow breaking the 'design for serendipity' mantra. That said, back and forward compatibility is not easy to achieve either, but I believe it triggers better results in the long term. Seb
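One practical difference between the two spellings Seb compares is where the version lands for parsing: a media type parameter travels inside the Accept/Content-Type value itself and can be split off with a few lines, whereas a separate header needs its own (non-standard) handling. A small sketch of splitting a versioned media type (the `vnd.acme.customer` name is taken from Seb's example and is, of course, hypothetical):

```python
def parse_media_type(value):
    """Split a media type value into its base type and parameter dict."""
    parts = [p.strip() for p in value.split(";")]
    base = parts[0]
    params = {}
    for part in parts[1:]:
        if "=" in part:
            key, _, val = part.partition("=")
            params[key.strip()] = val.strip().strip('"')
    return base, params

base, params = parse_media_type("application/vnd.acme.customer+json; version=1.2")
```

Note Peter's objection later in the thread: media type parameters are not supposed to change the basic meaning of the type, which is why he bakes the version into the type name itself (`...-v1+json`) rather than a parameter.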
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > > Peter Williams wrote: > > --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@> wrote: > > > >> Why should a client or server care about explicit versioning? What does it > >> buy you? You say that "representations are highly likely to have changed." > >> Shouldn't a RESTful interaction inherently handle those changes gracefully? > >> > > > > REST does provide a rather graceful way to handle versioning. By exposing the application semantics explicitly in the representations, the application semantics can be change just by changing the representations. Of course, this has the potential to break clients if such changes are implemented unilaterally. > > > > HTTP's content negotiations provides a powerful implementation of this approach. The client and server get to negotiate which of the available flavor of representations, and there by application semantics, to use. The server can prevent breakage by supporting multiple flavors/versions of the representations simultaneously. > > > > Would you do this with custom versioned media types, or with standard > media types and an extra version header? > Application-specific media types that change as the API is versioned are definitely the way to go. For example, "application/vnd.mycompany.fancyapp-v1+json". Sticking the version in a parameter is not acceptable because parameters are not allowed to change the basic meaning of a media type. I don't like the version header because it is non-standard, and therefore ignored by the content negotiation support that exists in both HTTP client and server infrastructure. Peter http://barelyenough.org
--- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote: > I've heard that a lot lately. The approach of using mediatypes for > versioning has two massive drawbacks: > 1. Multiplication of media types How is this a problem? > 2. Side-by-side approach to building formats This mechanism can also be used to explicitly deprecate obsolete API versions. I wouldn't suggest that people have multiple formats just for fun. But if existing application semantics change sufficiently, separate formats are much simpler and cleaner than the alternatives. > I think it sends the wrong message. You can and should do versioning inside > an existing media type, provided you accounted for this by not unreasonably > limiting your options (for example by restricting yourself to a closed xsd). By "an existing mime type" do you mean something like "application/json" or something like "application/vnd.myapp+json"? The former is a poor choice because it requires an out-of-band agreement between the client and server about exactly what flavor of JSON document the server will return. Not just any JSON document will do for the client, but that fact is not made explicit. The latter would work. However, it assumes that you are never going to change the application semantics in any incompatible way. I am not comfortable with assumptions that an application is going to be designed correctly the first time. > Plus, it's not gonna be simple. It's like argueing that having all the > flavours of RSS and ATOM to support is better to support extensibility. If > you have to create a new version, you've already accepted there's a fatal > flaw in your current format. Indeed. Supporting obsolescent formats/API versions is hard and annoying. However, it can be very useful even when the obsolescent formats are fatally flawed. The parallel support of both formats allows clients to be transitioned to the new API one at a time, rather than all at once. 
This extra runway can be crucial for small teams working on large systems. Or for systems where the clients are outside of the direct control of the API developers. -- Peter http://barelyenough.org
Mike Kelly wrote: > Bill Burke wrote: >> Solomon Duskis wrote: >> >>> >>> IMHO, HATEOAS techniques and tooling are a key aspect of success for >>> REST in internal apps. >>> I respectfully disagree on the following point: >>> >>> For internal apps, IMO, there is less motivation to treat URIs as >>> opaque as it is tempting to assume that clients know about URIs as >>> well as all possible state transitions >>> >>> >>> I'm guessing that external APIs haven't delved into HATEOAS as deeply >>> as they could have specifically because they have different problems >>> that internal apps do. >>> >>> If anything, I would suggest that internal apps have a bigger need >>> for opaque URIs and possible state transitions. Internal APIs tend >>> to be more fine-grained and have a richer feature set than their >>> external counterparts. Those fine-grained APIs are more prone to >>> change over time. HATEOAS is meant help manage those kinds of changes . >>> >>> >> >> I disagree. HATEOAS helps humans manage change pretty well on the >> web. I'm very skeptical it can do the same for machine clients, for >> reasons stated in an earlier thread. Still, that doesn't mean its not >> useful. Its just not as useful for machine clients as it is for humans. >> >> I know I stated this in an earlier thread, but combining links >> (HATEOAS), conneg, custom versioned media types, and XML schema is the >> most interesting for me. Then, you can have validated, guaranteed, >> versioned interactions with your services. >> > Hi Bill, > > Are there benefits of using custom versioned media types instead of > using a combination of standard media types and a custom version header? > My thoughts are all theory, no practice but, IMO custom versioned media types are better because they follow an existing standard. Others have warned against the explosion of custom media types. I'm not sure how you can avoid it if you want validation and to leverage conneg. 
-- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Wed, Apr 15, 2009 at 4:45 PM, Bill Burke <bburke@...> wrote: > > > > > Mike Kelly wrote: >> Bill Burke wrote: >>> Solomon Duskis wrote: >>> >>>> >>>> IMHO, HATEOAS techniques and tooling are a key aspect of success for >>>> REST in internal apps. >>>> I respectfully disagree on the following point: >>>> >>>> For internal apps, IMO, there is less motivation to treat URIs as >>>> opaque as it is tempting to assume that clients know about URIs as >>>> well as all possible state transitions >>>> >>>> >>>> I'm guessing that external APIs haven't delved into HATEOAS as deeply >>>> as they could have specifically because they have different problems >>>> that internal apps do. >>>> >>>> If anything, I would suggest that internal apps have a bigger need >>>> for opaque URIs and possible state transitions. Internal APIs tend >>>> to be more fine-grained and have a richer feature set than their >>>> external counterparts. Those fine-grained APIs are more prone to >>>> change over time. HATEOAS is meant help manage those kinds of changes . >>>> >>>> >>> >>> I disagree. HATEOAS helps humans manage change pretty well on the >>> web. I'm very skeptical it can do the same for machine clients, for >>> reasons stated in an earlier thread. Still, that doesn't mean its not >>> useful. Its just not as useful for machine clients as it is for humans. >>> >>> I know I stated this in an earlier thread, but combining links >>> (HATEOAS), conneg, custom versioned media types, and XML schema is the >>> most interesting for me. Then, you can have validated, guaranteed, >>> versioned interactions with your services. >>> >> Hi Bill, >> >> Are there benefits of using custom versioned media types instead of >> using a combination of standard media types and a custom version header? >> > > My thoughts are all theory, no practice but, IMO custom versioned media > types are better because they follow an existing standard. Others have > warned against the explosion of custom media types. 
I'm not sure how > you can avoid it if you want validation and to leverage conneg. I've had a little bit of experience in this area (we have not made dramatic changes, but we have modified some things), and so far the following set of rules has avoided the need to do any kind of versioning in my representations, while also avoiding breaking old clients: * Clients MUST ignore fields in the representation that they don't understand (i.e. that are not defined in whatever version of the spec the client is programmed to expect). * Clients SHOULD indicate the version of the API specification they are programmed to expect. Because this has to happen on GET requests too, there's no representation to include it in, so we use a custom HTTP header -- sort of analogous to a User-Agent header that webapps can use to customize their responses. An alternative might be to use a request parameter, or bake the version number into the service URI or something like that. * Servers MUST respect the client's indication of version preference if it matters. Given the rules above, it's OK to include additional fields added in some later version -- the client should just ignore them -- but it's not OK to remove a field that was required to be present in the version of the spec that the client specifies. * Servers MAY assume that a client not describing their version preference should get the latest and greatest version. * If the representation sent by a server includes links (per our HATEOAS threads), the server MAY send different URIs depending on the version, or MAY handle multiple versions at the same URI ... whatever it wants. > Bill Burke Craig McClanahan
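[Editor's sketch] Craig's rules above can be illustrated with a small server-side sketch. The `X-API-Version` header name and the per-field version map are assumptions for illustration, not part of Craig's actual service:

```python
# A minimal sketch of the versioning rules described above, assuming a
# hypothetical X-API-Version request header and a map recording the spec
# version in which each field was introduced.

FIELD_VERSIONS = {
    "name": 1,        # present since v1 -- must never be removed
    "email": 1,
    "avatar_url": 2,  # added in v2 -- v1 clients will simply ignore it
}

LATEST_VERSION = 2

def negotiate_version(headers):
    """Servers MAY assume the latest version when no preference is stated."""
    try:
        return int(headers.get("X-API-Version", LATEST_VERSION))
    except ValueError:
        return LATEST_VERSION

def build_representation(resource, headers):
    """Send every field the client's version defines. Sending newer
    fields too would also be fine, since clients MUST ignore unknown
    fields; this sketch simply omits them."""
    version = negotiate_version(headers)
    return {field: value for field, value in resource.items()
            if FIELD_VERSIONS.get(field, LATEST_VERSION) <= version}
```

A v1 client asking for a resource would get only `name` and `email`; a client sending no version header would get the full, latest representation.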
On Wed, Apr 15, 2009 at 6:38 PM, Craig McClanahan <craigmcc@...> wrote: > > I've had a little bit of experience in this area (we have not made > dramatic changes, but we have modified some things), and so far the > following set of rules has avoided the need to do any kind of > versioning in my representations so far, but also avoided breaking old > clients: > > * Clients MUST ignore fields in the representation that they don't > understand > (i.e. that are not defined in whatever version of the spec the client is > programmed to expect). > > * Clients SHOULD indicate the version of the API specification they are > programmed to expect. Because this has to happen on GET requests > too, there's no representation to include it in, so we use a custom HTTP > header -- sort of analogous to a User-Agent header that webapps > can use to customize their responses. An alternative might be to use > a request parameter, or bake the version number into the service URI > or something like that. > > * Servers MUST respect the client's indication of version preference > if it matters. Given the rules above, it's OK to include additional fields > added in some later version -- the client should just ignore them -- > but it's not OK to remove a field that was required to be present > in the version of the spec that the client specifies. > > * Servers MAY assume that a client not describing their version preference > should get the latest and greatest version. > > * If the representation sent by a server includes links (per our HATEOAS > threads), the server MAY send different URIs depending on the version, > or MAY handle multiple versions at the same URI ... whatever it wants. The above describes a great approach to versioning an API. Clients must ignore parts of representations they don't understand. New application semantics are added to existing representations in backwards compatible ways. 
Clients have a way to tell the server they require it to support a particular set of semantics. However, using a non-standard mechanism, like a custom HTTP header, to specify the version is deeply suboptimal. The same behavior can be more easily and transparently provided by using media types. The required set of semantics (backwards compatible version) can be specified as a media type parameter (e.g. `application/vnd.myapp+json;level=42`). And should you ever run into a situation where a non-compatible change is absolutely required you can switch to a different media type. -- Peter http://barelyenough.org
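[Editor's sketch] Peter's media type parameter approach relies on the server pulling the parameter out of the type string. A rough illustration, using a hand-rolled parser (the type name and `level` parameter are just the example from the message, not a registered media type):

```python
# Parse a media type of the form "type/subtype;key=value;..." into the
# bare type and a dict of parameters, so a server can read the
# compatible-version "level" parameter Peter describes.

def parse_media_type(value):
    """Return (media_type, params) for a Content-Type/Accept-style value."""
    parts = [part.strip() for part in value.split(";")]
    media_type, params = parts[0], {}
    for part in parts[1:]:
        if "=" in part:
            key, _, val = part.partition("=")
            params[key.strip()] = val.strip().strip('"')
    return media_type, params

media_type, params = parse_media_type("application/vnd.myapp+json;level=42")
# media_type is "application/vnd.myapp+json"; params is {"level": "42"}
```

A real server would add q-value handling and multiple Accept entries; this only shows where the version information lives.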
On Apr 14, 2009, at 4:36 AM, Mike Kelly wrote: > Kris Zyp's efforts on JSON referencing, whilst maybe not as > extensible as Subbu's proposal, have been implemented on the client > side in the Dojo toolkit (http://www.sitepen.com/blog/2008/06/17/json-referencing-in-dojo/ > ), and they seem to provide an adequate hypermedia mechanism for > 'basic' HATEOAS. > > The most significant difference I see between Subbu and Kris' > proposals are the optional content negotiation properties - namely > 'type' and 'hreflang' - I absolutely agree with Subbu that these > properties are important for extensibility, but clearly (since they > are optional) they are not 'crucial'. By the way, the key difference is not negotiation related properties, but link relations. Link relations help classify links, and without those, clients are left guessing the intent of any URIs they receive. Link relations are almost as important as media types for better discoverability. Subbu --- http://subbu.org
Subbu Allamaraju wrote:
>
> On Apr 14, 2009, at 4:36 AM, Mike Kelly wrote:
>
> > Kris Zyp's efforts on JSON referencing, whilst maybe not as
> > extensible as Subbu's proposal, have been implemented on the client
> > side in the Dojo toolkit
(http://www.sitepen.com/blog/2008/06/17/json-referencing-in-dojo/
> > ), and they seem to provide an adequate hypermedia mechanism for
> > 'basic' HATEOAS.
> >
> > The most significant difference I see between Subbu and Kris'
> > proposals are the optional content negotiation properties - namely
> > 'type' and 'hreflang' - I absolutely agree with Subbu that these
> > properties are important for extensibility, but clearly (since they
> > are optional) they are not 'crucial'.
>
> By the way, the key difference is not negotiation related properties,
> but link relations. Link relations help classify links, and without
> those, clients are left with guessing the intent of any URIs they
> receive. Link relations are almost as important as media types for
> better discoverability.
The elegance of JSON in a REST architecture is that JSON implicitly
provides link relations. I am skeptical that we need yet another link
relation mechanism in addition to the natural links already
defined by JSON itself. For example, there is no need to define that
this link's relationship to the current resource is a "father" link;
it's spelled out in the structure:
{
  "name": "Kris",
  "father": {"$ref": "http://www.somesite.com/bill"}
}
This of course is not so unambiguously stated in XML, hence the need
for a link relationship attribute, but JSON is a different world, and
the structure itself elegantly aligns with the relational nature of linking.
Kris
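[Editor's sketch] Kris's point, that in JSON the member name itself carries the link relation, can be made concrete with a small traversal. This is an illustrative sketch, not Dojo's actual JSON-referencing implementation:

```python
# Collect (relation, href) pairs from a JSON value, treating an object
# of the form {"$ref": url} as a link whose relation is the member name
# that contains it -- i.e. the structure supplies the rel for free.

def collect_links(obj, rel=None, links=None):
    """Return a list of (relation, href) pairs found in a JSON value."""
    if links is None:
        links = []
    if isinstance(obj, dict):
        if set(obj) == {"$ref"}:
            links.append((rel, obj["$ref"]))
        else:
            for key, value in obj.items():
                collect_links(value, key, links)
    elif isinstance(obj, list):
        for item in obj:
            collect_links(item, rel, links)
    return links

doc = {"name": "Kris", "father": {"$ref": "http://www.somesite.com/bill"}}
# collect_links(doc) -> [("father", "http://www.somesite.com/bill")]
```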
On Apr 15, 2009, at 8:53 PM, Kris Zyp wrote:
> The elegance of JSON in a REST architecture is that JSON implicitly
> provides link relations. I am skeptical that we need yet another link
> relation mechanism in addition to the natural links already
> defined by JSON itself. For example, there is no need to define that
> this link's relationship to the current resource is a "father" link;
> it's spelled out in the structure:
> { "name": "Kris",
> "father": {"$ref": "http://www.somesite.com/bill"}
> }
>
> This of course is not so unambiguously stated in XML, hence the need
> for a link relationship attribute, but JSON is a different world, and
> the structure itself elegantly aligns with relational nature of
> linking.
Good point. I missed that interpretation. But I don't think that style
of usage has anything to do with JSON. It can be done in any
extensible format including XML.
Subbu
Subbu Allamaraju wrote:
>
> On Apr 15, 2009, at 8:53 PM, Kris Zyp wrote:
>
>> The elegance of JSON in a REST architecture is that JSON
>> implicitly provides link relations. I am skeptical that we need
>> yet another link relation mechanism in addition to the natural
>> links already defined by JSON itself. For example, there is
>> no need to define that this link's relationship to the current
>> resource is a "father" link; it's spelled out in the structure: {
>> "name": "Kris", "father": {"$ref":
>> "http://www.somesite.com/bill"} }
>>
>> This of course is not so unambiguously stated in XML, hence the
>> need for a link relationship attribute, but JSON is a different
>> world, and the structure itself elegantly aligns with relational
>> nature of linking.
>
>
> Good point. I missed that interpretation. But I don't think that
> style of usage has anything to do with JSON. It can be done in any
> extensible format including XML.
>
I don't deny that it is possible in other formats, but doesn't it seem
less obvious how it should be done in XML? Should link relationships
be derived from element names in the hierarchy, from a specific
attribute, or something else? You can obviously choose one, but in
JSON, it seems more obvious, IMO.
Kris
Peter Williams wrote: > On Wed, Apr 15, 2009 at 6:38 PM, Craig McClanahan <craigmcc@...> wrote: > >> I've had a little bit of experience in this area (we have not made >> dramatic changes, but we have modified some things), and so far the >> following set of rules has avoided the need to do any kind of >> versioning in my representations so far, but also avoided breaking old >> clients: >> >> * Clients MUST ignore fields in the representation that they don't >> understand >> (i.e. that are not defined in whatever version of the spec the client is >> programmed to expect). >> >> * Clients SHOULD indicate the version of the API specification they are >> programmed to expect. Because this has to happen on GET requests >> too, there's no representation to include it in, so we use a custom HTTP >> header -- sort of analogous to a User-Agent header that webapps >> can use to customize their responses. An alternative might be to use >> a request parameter, or bake the version number into the service URI >> or something like that. >> >> * Servers MUST respect the client's indication of version preference >> if it matters. Given the rules above, it's OK to include additional fields >> added in some later version -- the client should just ignore them -- >> but it's not OK to remove a field that was required to be present >> in the version of the spec that the client specifies. >> >> * Servers MAY assume that a client not describing their version preference >> should get the latest and greatest version. >> >> * If the representation sent by a server includes links (per our HATEOAS >> threads), the server MAY send different URIs depending on the version, >> or MAY handle multiple versions at the same URI ... whatever it wants. >> > > The above describes a great approach to versioning an API. Clients > must ignore parts of representations they don't understand. New > application semantics are added to existing representations in > backwards compatible ways. 
Clients have a way to tell the server they > require it to support a particular set of semantics. > > However, using a non-standard mechanism, like a custom HTTP header, to > specify the version is deeply suboptimal. The same behavior can be > more easily and transparently provided by using media types. The > required set of semantics (backwards compatible version) can be specified > as a media type parameter (e.g. > `application/vnd.myapp+json;level=42`). And should you ever run into > a situation where a non-compatible change is absolutely required > you can switch to a different media type. > If it were actually preferable to put all content negotiation into the one Content-Type header, then why would HTTP bother to provide an Accept-Language header? It seems you get less self-descriptive messages by treating versioning as an accept-extension, and doing it that way also makes updating/upgrading clients and server-side message routing more complicated. Versioning within representations is sub-optimal and costly because your server will provide useless extra data to old clients - this cost could be avoided with adequate conneg. As I mentioned before, clients that wish to receive evolving representations can simply omit the version header altogether. Regards, Mike
Couple more things! Peter Williams wrote: > However, using a non-standard mechanism, like a custom HTTP header, to > specify the version is deeply suboptimal. I do appreciate the point you're making. Here's what the HTTP RFC has to say (http://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html#sec5.3): ".. new or experimental header fields MAY be given the semantics of request-header fields if all parties in the communication recognize them to be request-header fields." Many modern HTTP client libraries support sending custom request headers. Even if they don't - it's not the end of the world since, as we all agree, the client can be designed to handle the evolving representations that would result from not providing the version header. Also, from a pure REST perspective I don't believe this approach is non-standard. Regards, Mike
Craig McClanahan wrote: > * Clients SHOULD indicate the version of the API specification they are > programmed to expect. Because this has to happen on GET requests > too, there's no representation to include it in, so we use a custom HTTP > header -- sort of analogous to a User-Agent header that webapps > can use to customize their responses. An alternative might be to use > a request parameter, or bake the version number into the service URI > or something like that. > Seems like you've invented your own conneg when you could have used HTTP's conneg. I just don't see why creating new media types would be a worse solution, if anything it would be better as you'd be following the constraints of HTTP rather than tunneling your own protocol. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Thu, Apr 16, 2009 at 5:49 AM, Bill Burke <bburke@...> wrote: > > > Craig McClanahan wrote: >> >> * Clients SHOULD indicate the version of the API specification they are >> programmed to expect. Because this has to happen on GET requests >> too, there's no representation to include it in, so we use a custom HTTP >> header -- sort of analogous to a User-Agent header that webapps >> can use to customize their responses. An alternative might be to use >> a request parameter, or bake the version number into the service URI >> or something like that. >> > > Seems like you've invented your own conneg when you could have used HTTP's > conneg. I just don't see why creating new media types would be a worse > solution, if anything it would be better as you'd be following the > constraints of HTTP rather than tunneling your own protocol. Are you referring to adding a version attribute on the media type as the "standard" approach to this? I'm actually *not* trying to version my representations ... I'm trying to version the client software that talks to my service. That's not the same question. Craig > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
Hi guys, How does a consumer of a specific RESTful service work? From my perspective, a good RESTful client should be able to survive with just (1) the root URL of the service, through which it can follow the hypermedia, and (2) basic knowledge of how the representations of the resources are to be interpreted. My concerns are: 1 - Let us assume another server app consuming a RESTful service. At start-up, it gets the useful links from the root URL then traverses them as necessary. Assuming there are elements, such as forms, these are probably stored as well. However, when the RESTful service evolves, say, changes the URIs, etc., the consumer's data would be outdated. How is this best handled? I could opt to start each request from the root URL, then follow the necessary links every time. Of course, it'll be best to take advantage of caching and/or conditional GETs here. (Alternatively, I can just have a document that contains all the possible operations in one place - say WADL - instead of traversing several links.) 2 - What's a good guideline on what to watch out for in the representations? I wouldn't want my representations to always adhere to a specific schema so as not to hinder their evolution. But some things have to be kept constant so that older REST clients of the same service keep working, right? What's a good guideline for those? (i.e. a specific XPath will always point to a specific piece of information regardless of whatever revisions the service goes through.) Lastly, I was wondering why there aren't that many good articles on how to create a good RESTful client for a specific service. There are tons on building services, but not so much on building good clients.
On 16.04.2009, at 19:30, Craig McClanahan wrote: > On Thu, Apr 16, 2009 at 5:49 AM, Bill Burke <bburke@...> wrote: > > > > Craig McClanahan wrote: > >> > >> * Clients SHOULD indicate the version of the API specification > they are > >> programmed to expect. Because this has to happen on GET requests > >> too, there's no representation to include it in, so we use a > custom HTTP > >> header -- sort of analogous to a User-Agent header that webapps > >> can use to customize their responses. An alternative might be > to use > >> a request parameter, or bake the version number into the service > URI > >> or something like that. > >> > > > > Seems like you've invented your own conneg when you could have > used HTTP's > > conneg. I just don't see why creating new media types would be a > worse > > solution, if anything it would be better as you'd be following the > > constraints of HTTP rather than tunneling your own protocol. > > Are you referring to adding a version attribute on the media type as > the "standard" approach to this? I'm actually *not* trying to version > my representations ... I'm trying to version the client software that > talks to my service. That's not the same question. > It's not the same question only if there's a difference between "API specification" and "media type". Arguably there shouldn't be any – your client would simply indicate what media type(s) it accepts, which is something "that webapps can use to customize their responses". I don't see this as exclusive. I'm in favor of interpreting information liberally, i.e. a consumer of a message should ignore stuff it doesn't understand, and it should be possible to make backwards-compatible changes to a certain degree. But at some point there'll be a breaking change, and that should be reflectable in the media type. 
So being able to ask for/send each of application/vnd.myformat application/vnd.myformat;version=1 application/vnd.myformat;version=2 seems to be a good match for this requirement. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
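[Editor's sketch] Stefan's three Accept values can be matched with a small dispatcher. The parsing here is deliberately simplified (single Accept value, no q-values), and the representation bodies are placeholders:

```python
# Serve one of several versions of a custom media type based on an
# optional "version" parameter in the Accept header, per Stefan's list:
#   application/vnd.myformat
#   application/vnd.myformat;version=1
#   application/vnd.myformat;version=2

AVAILABLE = {"1": "<customer v1/>", "2": "<customer v2/>"}
DEFAULT_VERSION = "2"  # an unversioned request gets the latest

def respond(accept_header):
    """Return (status, body) for a simplified single-value Accept header."""
    base, _, param = accept_header.partition(";")
    if base.strip() != "application/vnd.myformat":
        return 406, None
    key, _, value = param.partition("=")
    version = value.strip() if key.strip() == "version" else DEFAULT_VERSION
    if version not in AVAILABLE:
        return 406, None
    return 200, AVAILABLE[version]
```

Note this encodes one answer to Peter's later objection: a bare `application/vnd.myformat` is served the latest version, which is right for a curl-wielding developer but wrong for an old client that omits the parameter.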
Stefan Tilkov wrote: > On 16.04.2009, at 19:30, Craig McClanahan wrote: > > >> On Thu, Apr 16, 2009 at 5:49 AM, Bill Burke <bburke@...> wrote: >> >>> Craig McClanahan wrote: >>> >>>> * Clients SHOULD indicate the version of the API specification >>>> >> they are >> >>>> programmed to expect. Because this has to happen on GET requests >>>> too, there's no representation to include it in, so we use a >>>> >> custom HTTP >> >>>> header -- sort of analogous to a User-Agent header that webapps >>>> can use to customize their responses. An alternative might be >>>> >> to use >> >>>> a request parameter, or bake the version number into the service >>>> >> URI >> >>>> or something like that. >>>> >>>> >>> Seems like you've invented your own conneg when you could have >>> >> used HTTP's >> >>> conneg. I just don't see why creating new media types would be a >>> >> worse >> >>> solution, if anything it would be better as you'd be following the >>> constraints of HTTP rather than tunneling your own protocol. >>> >> Are you referring to adding a version attribute on the media type as >> the "standard" approach to this? I'm actually *not* trying to version >> my representations ... I'm trying to version the client software that >> talks to my service. That's not the same question. >> >> > > It's not the same question only if there's a difference between "API > specification" and "media type". Arguably there shouldn't be any – > your client would simply indicate what media type(s) it accepts, which > is something "that webapps can use to customize their responses". > > I don't see this as exclusive. I'm in favor of interpreting > information liberally, i.e. a consumer of a message should ignore > stuff it doesn't understand, and it should be possible to make > backwards-compatible changes to a certain degree. But at some point > there'll be a breaking change, and that should be reflectable in the > media type. 
So being able to ask for/send each of > > application/vnd.myformat > application/vnd.myformat;version=1 > application/vnd.myformat;version=2 > > seems to be a good match for this requirement. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > Why then is Content-Language not treated as an accept-extension? With your breaking change example, the media type isn't changing - the version is. Is there any distinction there between document schema and media type? Regards, Mike
On Apr 17, 2009, at 6:53 AM, Mike Kelly wrote: > > Why then is Content-Language not treated as an accept-extension? > > With your breaking change example, the media type isn't changing - the > version is. Is there any distinction there between document schema and > media type? Since these are custom media types, the owner of these types can say that adding a different version param makes it a representation of a different version. Regarding the second question, similarities end with the fact that both are "types". --- http://subbu.org
Subbu Allamaraju wrote: > On Apr 17, 2009, at 6:53 AM, Mike Kelly wrote: > > >> Why then is Content-Language not treated as an accept-extension? >> >> With your breaking change example, the media type isn't changing - the >> version is. Is there any distinction there between document schema and >> media type? >> > > > > Since these are custom media types, the owner of these types can say > that adding a different version param makes it a representation of a > different version. > > Regarding the second question, similarities end with the fact that > both are "types". > > --- > http://subbu.org > So, they're different representations of the same media type? If that is the case, wouldn't a version header make messages more self-descriptive? Regards, Mike
On 17.04.2009, at 15:53, Mike Kelly wrote: > > Why then is Content-Language not treated as an accept-extension? I don't know. It could have been, but obviously people deemed it important enough to be handled separately. > > With your breaking change example, the media type isn't changing - > the version is. Is there any distinction there between document > schema and media type? It's a matter of interpretation; I like to be able to group compatible formats together under the same media type. I've also grown fond of labeling related document types (in the XML complex type sense) with the same media type, i.e. instead of vnd.customer+xml and vnd.customers+xml I'll just use vnd.crm+xml for both <customer> and <customer-list> (appending a version when it matters). But I can see how one might come to a different decision depending on the use case. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Stefan Tilkov wrote: > > > > On 17.04.2009, at 15:53, Mike Kelly wrote: >> Why then is Content-Language not treated as an accept-extension? > > I don't know. It could have, but obviously people deemed it important > enough to be handled separately. > So is it fair to say that using an additional header for versioned conneg is more in-keeping with HTTP? >> >> With your breaking change example, the media type isn't changing - >> the version is. Is there any distinction there between document >> schema and media type? > > It's a matter of interpretation; I like to be able to group compatible > formats together under the same media type. I've also grown fond of > labeling related document types (in the XML complex type sense) with > the same media type, i.e. instead of vnd.customer+xml and > vnd.customers+xml I'll just use vnd.crm+xml for both <customer> and > <customer-list> (appending a version when it matters). But I can see > how one might come to a different decision depending on the use case. > What's the benefit of using custom media types over generic ones? The only ones I can think of are already provided by schemas, and having custom media types for every 'type' in the system seems like it would complicate things on the client side. Regards, Mike
On Fri, Apr 17, 2009 at 7:23 AM, Stefan Tilkov <stefan.tilkov@...> wrote: > But at some point > there'll be a breaking change, and that should be reflectable in the > media type. So being able to ask for/send each of > > application/vnd.myformat > application/vnd.myformat;version=1 > application/vnd.myformat;version=2 I don't think the version parameter works very well for incompatible changes. Consider a client that claims to accept `application/vnd.myformat`. Do you send them version 1 or version 2? Well, it depends, of course. If it is a developer using curl to explore the service, they want version 2 because it is the "best" available version. If, on the other hand, it is an older client written before version 2 was released, it wants version 1. Either way you choose, it will be the wrong format some of the time. If you have made a change significant enough that your new representations are not compatible with previous versions, it is a derivative format, not the same format. We did not call XML "SGML version 2", and for good reason. -- Peter http://barelyenough.org
On 17.04.2009, at 17:18, Mike Kelly wrote: > Stefan Tilkov wrote: >> >> >> >> On 17.04.2009, at 15:53, Mike Kelly wrote: >>> Why then is Content-Language not treated as an accept-extension? >> >> I don't know. It could have been, but obviously people deemed it >> important enough to be handled separately. >> > > So is it fair to say that using an additional header for versioned > conneg is more in-keeping with HTTP? How does that follow? If I provide the logically "same" information to clients with different needs with regards to the type - e.g. in image/jpeg and image/gif - this sounds like content negotiation to me. How are v1 and v2 of a format different? > >>> >>> With your breaking change example, the media type isn't changing - >>> the version is. Is there any distinction there between document >>> schema and media type? >> >> It's a matter of interpretation; I like to be able to group >> compatible formats together under the same media type. I've also >> grown fond of labeling related document types (in the XML complex >> type sense) with the same media type, i.e. instead of vnd.customer+xml >> and vnd.customers+xml I'll just use vnd.crm+xml for both >> <customer> and <customer-list> (appending a version when it >> matters). But I can see how one might come to a different decision >> depending on the use case. >> > What's the benefit of using custom media types over generic ones? > The only ones I can think of are already provided by schemas, and > having custom media types for every 'type' in the system seems like > it would complicate things on the client side. There's been a long discussion about this on this list multiple times – you get clearer semantics in exchange for a less widely understood format. Personally, I can see good reasons for both, but I agree that whenever a meaningful (i.e., not application/xml) existing standard media type is available that matches the requirements, it should be used. Stefan > > Regards, > Mike >
Stefan Tilkov wrote: > On 17.04.2009, at 17:18, Mike Kelly wrote: > > >> Stefan Tilkov wrote: >> >>> >>> On 17.04.2009, at 15:53, Mike Kelly wrote: >>> >>>> Why then is Content-Language not treated as an accept-extension? >>>> >>> I don't know. It could have been, but obviously people deemed it >>> important enough to be handled separately. >>> >>> >> So is it fair to say that using an additional header for versioned >> conneg is more in-keeping with HTTP? >> > > How does that follow? If I provide the logically "same" information to > clients with different needs with regards to the type - e.g. in > image/jpeg and image/gif - this sounds like content negotiation to me. How > are v1 and v2 of a format different? > Different versions of a format are different representations, the same way that different languages of a format are different representations. If HTTP's current solution to language negotiation is a separate header, it seems more consistent to treat version negotiation in the same way (i.e. with a separate header). - Mike
On Fri, Apr 17, 2009 at 1:31 AM, jv.liwanag <jvliwanag@...> wrote: > My concerns are: > 1 - Let us assume another server app consuming a RESTful service. At > start-up, it gets the useful links from the root URL then traverses them as > necessary. Assuming there are elements, such as forms, these are probably > stored as well. However, when the RESTful service evolves, say, changes the > URIs, etc., the consumer's data would be outdated. How is this best handled? > > I could opt to start each request from the root URL, > then follow the necessary links every time. Of course, it'll be best to > take advantage of caching and/or conditional GETs here. Starting at the top and working through the hypermedia is my preferred approach. With basic caching and conditional requests, acceptable performance is quite easy to maintain. > 2 - What's a good guideline on what to watch out for in the > representations? I wouldn't want my representations to always adhere to a > specific schema so as not to hinder their evolution. But some things have to > be kept constant so that older REST clients of the same service keep working, right? > What's a good guideline for those? (i.e. a specific XPath will always point > to a specific piece of information regardless of whatever revisions the service goes > through.) I have not built any clients that use XML-based services, but for clients that use JSON representations I have used a very similar approach: basically, creating domain objects by making requests and extracting each individual piece of the data I wanted by name, or path, and storing them in instance variables in the object. In XML, using XPath would be equivalent, so I expect that would work pretty well. -- Peter Williams http://barelyenough.org
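[Editor's sketch] Peter's client-side approach, building domain objects by extracting named pieces of a JSON representation by path, might look like this. The `Person` class, field paths, and URLs are hypothetical:

```python
# Build a domain object by pulling only the fields the client
# understands out of a JSON representation; unknown or newly added
# fields are ignored automatically, which is what keeps old clients
# working as the representation evolves.

def extract(doc, path, default=None):
    """Walk a dotted path through nested dicts, tolerating absence."""
    for key in path.split("."):
        if not isinstance(doc, dict) or key not in doc:
            return default
        doc = doc[key]
    return doc

class Person:
    def __init__(self, representation):
        # Extract only what this client version knows about.
        self.name = extract(representation, "name")
        self.account_href = extract(representation, "account.href")

person = Person({
    "name": "Toninho",
    "account": {"href": "http://example.org/accounts/1"},
    "field_added_in_a_later_version": True,  # silently ignored
})
```

An XPath-based XML client would follow the same pattern, with expressions like `person/account/@href` in place of the dotted paths.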
Most of the time, I want to version the representation; not the app; not the resource. For example, many times the changes to a representation version have to do with changing the hypermedia links in the representation (to adjust workflow); not the elements/fields. Also, I deal with clients and media-types that do not always support strong schema. For that reason, I cannot rely on a schema document alone to signal details on versions. Even in cases where I have XML clients, changing workflow (hypermedia) is not something that is easily (or even desirably) validated using schema. Finally, I use the OPTIONS method to allow clients to request which media-types are acceptable and which are returned. This allows me to easily and accurately report minor changes in the application at the resource URI level and allow clients to negotiate for the version they wish. For these reasons, I like to keep the version information in the media-type and not as a separate Header (or in the URI or via a schema ref). mca http://amundsen.com/blog/ On Fri, Apr 17, 2009 at 11:48, Mike Kelly <mike@...> wrote: > Stefan Tilkov wrote: > > On 17.04.2009, at 17:18, Mike Kelly wrote: > > > > > >> Stefan Tilkov wrote: > >> > >>> > >>> On 17.04.2009, at 15:53, Mike Kelly wrote: > >>> > >>>> Why then is Content-Language not treated as an accept-extension? > >>>> > >>> I don't know. It could have, but obviously people deemed it > >>> important enough to be handled separately. > >>> > >>> > >> So is it fair to say that using an additional header for versioned > >> conneg is more in-keeping with HTTP? > >> > > > > How does that follow? If I provide the logically "same" information to > > clients with different needs with regards to the type - e.g. in image/ > > jpeg and image/gif - this sounds like content negotiation to me. How > > are v1 and v2 of a format different? 
> > > > Different versions of a format are different representations, the same > way that different languages of a format are different representations. > > If HTTP's current solution to language negotiation is a separate > header, it seems more consistent to treat version negotiation the > same way (i.e. with a separate header). > > - Mike
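mca's media-type-based version negotiation could be sketched on the server side like this. The vendor media types, the `version` parameter, and the renderers are all invented for illustration; this is only one way to wire it up:

```python
# Server-side sketch of negotiating a representation *version* through
# the media type itself. The media type names, the version parameter,
# and the field choices are all hypothetical.

RENDERERS = {
    "application/vnd.example.account+json;version=1":
        lambda acct: {"owner": acct["owner"]},
    "application/vnd.example.account+json;version=2":
        lambda acct: {"owner": acct["owner"], "links": acct["links"]},
}

def negotiate(accept_header, acct):
    """Return (media_type, body) for the first acceptable offered type,
    or (None, None), which the caller would map to 406 Not Acceptable."""
    for offered in (part.strip() for part in accept_header.split(",")):
        if offered in RENDERERS:
            return offered, RENDERERS[offered](acct)
    return None, None

acct = {"owner": "mca", "links": [{"rel": "self", "href": "/accounts/1"}]}
media_type, body = negotiate(
    "application/vnd.example.account+json;version=2", acct)
```

An older client asking for `version=1` would simply get the smaller body; an unrecognized Accept header maps to a 406, matching the negotiation mca describes.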
Michael Schuerig wrote:
> Would it be fair to summarize this and the linked articles as "the
> experts still need to make up their minds and there's no clear way to go
> for practitioners yet"? I'm only being slightly facetious. While I find
> REST and its surroundings worthwhile and somewhat interesting, the main
> topics of my work and interests are elsewhere. As such, I leave the
> driving to the experts and my question was a kind of "are we there
> yet?". I'm not complaining if the work is going to take more time. I'm
> just trying to find out whether there already is something for non-
> experts to use in their work.
Use this in your JSON formats:
{
"links":[
{"href":"", "type":"", "rel":"", "hreflang":"", "size":""}
]
}
href: the url; must have one
rel: the relationship to the enclosing document, must have one
type: the media type; optional
hreflang: the lang code for the link, optional
size: the size of the representation, optional
I'm only being slightly facetious.
Bill
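A client of the JSON link format above would presumably look links up by `rel`; a minimal sketch (the document contents are invented for illustration):

```python
# Looking a link up by its "rel" in the JSON link format sketched
# above. The document contents here are invented for illustration.

doc = {
    "links": [
        {"href": "/orders/1", "rel": "self", "type": "application/json"},
        {"href": "/orders/2", "rel": "next", "type": "application/json"},
    ]
}

def find_link(document, rel):
    """Return the href of the first link with the given rel, or None."""
    for link in document.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

print(find_link(doc, "next"))  # → /orders/2
```

Because the client keys off `rel` rather than URI structure, the server is free to change its hrefs without breaking anything.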
Kris Zyp wrote: > The elegance of JSON in a REST architecture is that JSON implicitly > provides link relations. I am skeptical that we need yet another link > relation mechanism in addition to the natural links are already > defined by JSON itself. This assumption was a huge mistake in the XML community; I hope JSON users aren't doing the same thing. Bill
On Fri, Apr 17, 2009 at 9:48 AM, Mike Kelly <mike@...> wrote: > > Different versions of a format are different representations the same > way that different languages of a format are different representations. I don't agree. Representations that vary only by language contain the same information, just encoded differently. This is not true of different versions of a format. Each version of a format will allow information to be expressed that was not encodeable in the previous version. Otherwise you would just keep using the previous version. -- Peter Williams http://barelyenough.org
On Fri, Apr 17, 2009 at 2:48 PM, Bill de hOra <bill@...> wrote: > > > Kris Zyp wrote: > >> The elegance of JSON in a REST architecture is that JSON implicitly >> provides link relations. I am skeptical that we need yet another link >> relation mechanism in addition to the natural links are already >> defined by JSON itself. > > This assumption was a huge mistake in the XML community; I hope JSON > users aren't doing the same thing. Can you elaborate a bit more on which assumption was a huge mistake and how the problems that assumption caused manifested themselves? -- Peter Williams http://barelyenough.org
On Sat, Apr 11, 2009 at 8:36 PM, Michael Schuerig <michael@...> wrote: > With all the recent discussion of HATEOAS, are there any JSON-based > examples/exemplars worth looking at and learning from? It's not HATEOAS, but I thought people might be interested in CloudKit: http://getcloudkit.com/index.html . Described as CloudKit provides schema-free, auto-versioned, RESTful JSON storage with optional OpenID and OAuth support, including OAuth Discovery. CloudKit is Rack middleware. It can be used on its own or alongside other Rack-based applications or middleware components such as Rails, Merb or Sinatra. The API looks interesting. For example: OPTIONS /%uri% Return an Allow header containing the available methods for a given URI. -- Nick
Peter Williams wrote: > > > > On Fri, Apr 17, 2009 at 2:48 PM, Bill de hOra <bill@... > <mailto:bill%40dehora.net>> wrote: > > > > > > Kris Zyp wrote: > > > >> The elegance of JSON in a REST architecture is that JSON implicitly > >> provides link relations. I am skeptical that we need yet another link > >> relation mechanism in addition to the natural links are already > >> defined by JSON itself. > > > > This assumption was a huge mistake in the XML community; I hope JSON > > users aren't doing the same thing. > > Can you elaborate a bit more on which assumption was a huge mistake > and how the problems that assumption caused manifested themselves? That being a child element entails cardinality or relational or other semantics wrt a parent element other than what XML specifies. <parent> <child> <document> <chapter> <folder> <file> <car> <wheel> <entry> <author> These have no semantics beyond XML parsing. Unless you document the semantics for your format. If you want to define semantics a machine can automatically leverage, a language like RDF or KIF would be better. I don't buy that JSON provides any such entailments either. Bill
On Fri, Apr 17, 2009 at 2:11 PM, Nick Gall <nick.gall@...> wrote: > > > On Sat, Apr 11, 2009 at 8:36 PM, Michael Schuerig <michael@...> > wrote: >> With all the recent discussion of HATEOAS, are there any JSON-based >> examples/exemplars worth looking at and learning from? > > It's not HATEOAS, but I thought people might be interested in CloudKit: > http://getcloudkit.com/index.html . > Described as > > CloudKit provides schema-free, auto-versioned, RESTful JSON storage with > optional OpenID and OAuth support, including OAuth Discovery. > CloudKit is Rack middleware. It can be used on its own or alongside other > Rack-based applications or middleware components such as Rails, Merb or > Sinatra. > > The API looks interesting. For example: > > OPTIONS /%uri% > > Return an Allow header containing the available methods for a given URI. CloudKit does address an interesting problem, but OPTIONS is an HTTP/1.1 thing, and should be supported by whatever library you're using for REST development. That's certainly the case with any of the (Java-based) JAX-RS implementations, which are required to synthesize an appropriate response to an OPTIONS request (based on what resource methods you've provided for the various HTTP verbs), unless you have provided a custom resource method to handle it. Craig
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Bill de hOra wrote: > > > Kris Zyp wrote: > > > The elegance of JSON in a REST architecture is that JSON implicitly > > provides link relations. I am skeptical that we need yet another link > > relation mechanism in addition to the natural links are already > > defined by JSON itself. > > This assumption was a huge mistake in the XML community; I hope JSON > users aren't doing the same thing. It is a mistake in XML. And it would be an even bigger mistake to think the same rules apply in JSON. Hyperbole aside, perhaps you have some technical arguments for this feeling that we could discuss? Thanks, Kris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAknpHgsACgkQ9VpNnHc4zAy04gCgqzi4THtPS7bXamnyVMEjE4h0 H9oAoJzL6aXSnhF+V/c7o80uVLgI1cTb =2TEp -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 My apologies on the last email, I should have read further ahead in my inbox and responded to this. Will now... Bill de hOra wrote: > > > Peter Williams wrote: >> >> >> >> On Fri, Apr 17, 2009 at 2:48 PM, Bill de hOra <bill@... >> <mailto:bill%40dehora.net>> wrote: >>> >>> >>> Kris Zyp wrote: >>> >>>> The elegance of JSON in a REST architecture is that JSON >>>> implicitly provides link relations. I am skeptical that we >>>> need yet another link >>>> relation mechanism in addition to the natural links are >>>> already defined by JSON itself. >>> >>> This assumption was a huge mistake in the XML community; I hope >>> JSON users aren't doing the same thing. >> >> Can you elaborate a bit more on which assumption was a huge >> mistake and how the problems that assumption caused manifested >> themselves? > > That being a child element entails cardinality or relational or > other semantics wrt a parent element other than what XML specifies. > > > <parent> <child> > > <document> <chapter> > > <folder> <file> > > <car> <wheel> > > <entry> <author> > > These have no semantics beyond XML parsing. Unless you document the > semantics for your format. If you want to define semantics a > machine can automatically leverage, a language like RDF or KIF > would be better. > > I don't buy that JSON provides any such entailments either. Certainly one of the key differences between JSON and XML is that in XML elements are simply named, but they don't define any relationship between child and parent. In JSON, it is exactly the opposite, property names are defining relationships between values rather than naming values. This is exactly why determining link relationships from structure is inappropriate for XML and appropriate for JSON. 
Kris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAknpH9AACgkQ9VpNnHc4zAy9ywCgocuqkSREGFvHekyR5NtylcSy Cq4AnRp5fSwd4IMoFjwa1FXyQzPzWY4F =k/6k -----END PGP SIGNATURE-----
The mistake was making assumptions without either the media type or something like RDF saying anything about it. This is not an XML or JSON format-level mistake, but assumptions people may be making in software. Subbu On Apr 17, 2009, at 5:25 PM, Kris Zyp wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > > Bill de hOra wrote: >> >> >> Kris Zyp wrote: >> >>> The elegance of JSON in a REST architecture is that JSON implicitly >>> provides link relations. I am skeptical that we need yet another >>> link >>> relation mechanism in addition to the natural links are already >>> defined by JSON itself. >> >> This assumption was a huge mistake in the XML community; I hope JSON >> users aren't doing the same thing. > > It is mistake in XML. And it would be an even bigger mistake to think > the same rules apply in JSON. > > Hyperbole aside, perhaps you have some technical arguments for this > feeling that we could discuss? > > Thanks, > Kris > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (MingW32) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org > > iEYEARECAAYFAknpHgsACgkQ9VpNnHc4zAy04gCgqzi4THtPS7bXamnyVMEjE4h0 > H9oAoJzL6aXSnhF+V/c7o80uVLgI1cTb > =2TEp > -----END PGP SIGNATURE----- >
Kris Zyp wrote: > Certainly one of the key differences between JSON and XML is that in > XML elements are simply named, but they don't define any relationship > between child and parent. In JSON, it is exactly the opposite, > property names are defining relationships between values rather than > naming values. How, exactly does JSON's grammar do this that holds true for all JSON formatted data? I don't get it. Bill
Kris Zyp wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > > > Bill de hOra wrote: >> >> Kris Zyp wrote: >> >>> The elegance of JSON in a REST architecture is that JSON implicitly >>> provides link relations. I am skeptical that we need yet another link >>> relation mechanism in addition to the natural links are already >>> defined by JSON itself. >> This assumption was a huge mistake in the XML community; I hope JSON >> users aren't doing the same thing. > > It is mistake in XML. And it would be an even bigger mistake to think > the same rules apply in JSON. > > Hyperbole aside, perhaps you have some technical arguments for this > feeling that we could discuss? Good. json.org's homepage defines a grammar, I saw nothing there about semantics. So where's the technical argument to back up your claim - "JSON implicitly provides link relation" - ? Bill
On 4/17/09 11:59 PM, Peter Williams wrote:
> On Fri, Apr 17, 2009 at 1:31 AM, jv.liwanag<jvliwanag@...> wrote:
>
>> My concerns are:
>> 1 - Let us assume another server app consuming a RESTful service. At
>> start-up, it gets the links useful from the root URL then traverses them as
>> necessary. Assuming there are elements, such as forms, these are probably
>> stored as well. However, when the RESTful service evolves, say, changes the
>> URIs, etc, the consumer's data would be outdated. How is this best handled?
>>
>> I could opt to always start each request with the root URL all the time,
>> then follow the necessary links all the time. Of course, it'll be best to
>> take advantage of caching and/or conditional GETs here.
>>
>
> Starting at the top and working through the hypermedia is my preferred
> approach. With basic caching and conditional requests acceptable
> performance is quite easy to maintain.
>
>
>> 2 - What's a good guideline on what stuff to watch out in the
>> representations? I wouldn't want my representations to always adhere to a
>> specific schema so as not to hinder its evolution. But some things have to
>> be kept constant for older REST clients on the same service working right?
>> What's a good guideline for those? (i.e. a specific XPath will always point
>> to a specific information regardless of whatever revisions the service goes
>> through.)
>>
>
> I have not built any clients that use XML-based services, but for
> clients that use JSON representations i have used a very similar
> approach. Basically, creating domain objects by making requests and
> extracting each individual piece of the data i wanted by name, or
> path, and storing them in instance variables in the object. In XML,
> using XPath would be equivalent so i expect that would work pretty
> well.
>
My concern about using XPath though (or traversing objects using '.' in
JSON) is that I can't freely change my representation. Say, if I wanted
to change from
{'first_name':'jv', 'last_name':'liwanag'}
to
{'name':{'first':'jv', 'last':'liwanag'}}
on a system that is already deployed.
I was wondering if there are good guidelines/tools my clients can use so
that it can handle that type of change. I was looking recently at WADL
and it does offer a good solution to changing URLs and request
parameters. I was wondering if there is a good tool to anticipate
changing representations as well.
In XML, a (possibly bad) idea I can think of is to give the users a
fixed schema then have stylesheets ready to transform the XML if a
change is present. Maybe create a workable standard which defines the
stylesheets for the resources that changed.
> --
> Peter Williams
> http://barelyenough.org
>
Jan Vincent Liwanag
On Fri, Apr 17, 2009 at 7:59 PM, Jan Vincent Liwanag
<jvliwanag@...> wrote:
>
> My concern about using XPath though (or traversing objects using '.' in
> JSON) is that I can't freely change my representation. Say, if I wanted to
> change from
>
> {'first_name':'jv', 'last_name':'liwanag'}
>
> to
>
> {'name':{'first':'jv', 'last':'liwanag'}}
>
> on a system that is already deployed.
I think handling changes of this class requires the application of human-like
intelligence. <https://www.mturk.com/mturk/welcome> is probably
the best bet for automatically handling such changes in the near
future.
I would suggest not making changes like that. If you need to make
that sort of change I think a new media type would be in order. That
way the server can continue to provide both varieties of
representations. Or it can explicitly inform the client that it no
longer supports the older variety via a 406 (Not Acceptable) response.
> I was wondering if there are good guidelines/tools my clients can use so
> that it can handle that type of change. I was looking recently at WADL and
> it does offer a good solution to changing URLs and request parameters. I was
> wondering if there is a good tool to anticipate changing representations as
> well.
Using XPath or JSONPath (or any approach that will similarly ignore
any non-required parts of the representations) to extract the needed
data will insulate the client from many of the common ways
representations evolve. But even that will require the server side
developers be disciplined enough not to introduce breaking changes in
existing media types.
Clients should be resilient to the addition of new information to
representations. But there is not much a client can do if the server
is going to suddenly remove, or change the way it encodes, existing
information.
--
Peter Williams
http://barelyenough.org
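The path-based extraction Peter describes can be sketched in Python; it shows both the resilience to added fields and the limit with renamed ones. The `extract` helper is hypothetical, and the field names follow the earlier `first_name`/`last_name` example:

```python
# Extracting only the required fields by a dotted path, so that
# *additions* to a representation don't break the client. The extract
# helper and field names are illustrative only.

def extract(doc, path):
    """Walk a dotted path into nested dicts; None if any step is absent."""
    for key in path.split("."):
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

old = {"first_name": "jv", "last_name": "liwanag", "nickname": "j"}
new = {"name": {"first": "jv", "last": "liwanag"}}

print(extract(old, "first_name"))  # additions like "nickname" are ignored
print(extract(new, "first_name"))  # but a renamed field yields None
```

The second lookup failing is exactly the breaking change discussed above: the client can tolerate new data, but not data that moves or disappears.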
On Apr 17, 2009, at 5:33 PM, Kris Zyp wrote: > Certainly one of the key differences between JSON and XML is that in > XML elements are simply named, but they don't define any relationship > between child and parent. In JSON, it is exactly the opposite, > property names are defining relationships between values rather than > naming values. This is exactly why determining link relationships from > structure is inappropriate for XML and appropriate for JSON. By the way, XLink (http://www.w3.org/TR/xlink/) tried exactly that for XML. However, AFAIK, it is not widely adopted. Subbu
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Bill de hOra wrote: > > > Kris Zyp wrote: > > > Certainly one of the key differences between JSON and XML is that in > > XML elements are simply named, but they don't define any relationship > > between child and parent. In JSON, it is exactly the opposite, > > property names are defining relationships between values rather than > > naming values. > > How, exactly does JSON's grammar do this that holds true for all JSON > formatted data? I don't get it. > You are absolutely right: JSON simply defines a grammar, and the interpretation and semantics are up to the users. Consequently, we certainly cannot claim that all JSON will or should align with link relationships. However, from what I have seen, JSON is used very consistently amongst different languages in terms of its mappings and behavior. And the structure of a JSON object and its normal usage as an entity with string-keyed references to other entities does align very well with link relationships due to the similarity of their structure (as opposed to XML, which is usually treated as a completely different structural style). You are right that we should not assume that JSON structures always imply link relationships; I am sure there is certainly value in explicit mechanisms for defining link relationships. But the best complement to explicit mechanisms is reasonable defaults, and defining relationships through string-keyed structures facilitates a continuity between data structures and relationships that seems like a very reasonable default. Kris -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (MingW32) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iEYEARECAAYFAknseSgACgkQ9VpNnHc4zAxOpwCghSKCO/Ptm2jR6rLmr631TqwP Zg8AoIj4//R81RVtNousYIvG/oPDrL3i =PQI6 -----END PGP SIGNATURE-----
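Kris's idea of string-keyed references acting as implicit link relations might look like this in practice. The person document, the property names, and the "string starting with http:// is a link" convention are all invented for illustration, and that shared convention is exactly the point under debate:

```python
# In a JSON object the property name itself names the relationship, so
# a string-valued URI can be read directly as a typed link. The data
# and the http:// prefix convention here are hypothetical.

import json

person = json.loads("""
{
  "firstName": "jv",
  "account": "http://example.com/accounts/42",
  "employer": "http://example.com/companies/7"
}
""")

# Treat string values that look like URIs as links whose relation is
# the property name; this only works if client and server share the
# convention.
links = {key: value for key, value in person.items()
         if isinstance(value, str) and value.startswith("http://")}
```

Here `account` and `employer` play the role of `rel` values with no separate link container, which is the continuity between data structure and relationship Kris describes.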
Hello, Could you please explain to me the best way to secure REST-based services? Is SSL the only way? Expecting your expert advice on this. Thank you! With regards, Saravan.
Hi, I wondered if anyone had links to case studies/examples of enterprises that use REST to build services and what benefits they gained from it. Thanks Anand
I have just started with REST, so there are some difficulties for me in my research. I know that REST is an architectural style. What can be designed with REST? I have only found REST used to design web services. Is there anything else that can be designed with REST, such as a Windows application or something else? Would you please tell me something about this? Thank you very much!
Anand, I have a Case Study/example for you. I am just wrapping up a project that had disparate applications, one with MySQL and the other with SQL Server. We used Semantic ReSTful Web Services to generate Web Feeds (ATOM and RSS) to indicate the changes to resources made in one database. We used the feeds, which had RDFa markup, to tell our program what kind of data was in our feeds. The program was then able to generate SQL to insert into the other database. I can see this technique applied on other projects. The same ReSTful Web Services were also used to display HTML. Multiple representations of a resource are one benefit of the ReST style. The main benefit of the ReST style is that it allows for scalability, because it does not rely on maintaining state and makes caching easier. One more thing: ReSTful Web Services are much easier on the programmer to develop. David Yuctan Hodge, Partner Lucid Technics, LLC - Think Clear. Think Lucid. www.lucidtechnics.com Phone 703.798.9067 Fax 703.563.6279 On Tue, Apr 21, 2009 at 3:18 PM, rcanand <rcanand@...> wrote: > > > Hi, > > I wondered if anyone had links to case studies/examples of enterprises that > use REST to build services and what benefits they gained from it. > > Thanks > Anand
REST is a distributed programming architecture style. It's used to exchange data between two applications. If you're using HTTP based REST, then you can use REST anywhere that you can create an HTTP connection. You can create web apps, windows apps, mobiles apps, embedded apps and etc. -Solomon On Tue, Apr 21, 2009 at 8:17 PM, cule_barca <vantu.ituns@...> wrote: > > > I have just start with REST so there are some difficuties for me in my > researching. > > I know that REST is an architecture style. what can be design by REST? I > only find that REST used to design webservice. Are there anything that can > be design by REST, such as a win application,or somthing else?? > > Would you please tell me some thing about this. Thank you very very much? > > >
InfoQ has just put up the video and slides from a very interesting presentation by Mark Nottingham on HTTPbis: http://www.infoq.com/news/2009/04/mnot-http-status Stefan
On Wed, Apr 22, 2009 at 1:07 PM, Solomon Duskis <sduskis@...> wrote: > > > REST is a distributed programming architecture style. It's used to exchange > data between two applications. If you're using HTTP based REST, then you REST is always based on HTTP. You could say that REST is a set of best-practises for working with HTTP. On Wed, Apr 22, 2009 at 2:17 AM, cule_barca <vantu.ituns@...> wrote: > > > I have just start with REST so there are some difficuties for me in my > researching. > > I know that REST is an architecture style. what can be design by REST? I > only find that REST used to design webservice. Are there anything that can > be design by REST, such as a win application,or somthing else?? > > Would you please tell me some thing about this. Thank you very very much? http://www.infoq.com/articles/rest-introduction -- troels
+1 on the intro article. I also highly recommend reading the REST dissertation - http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm, specifically chapters 5 and 6. troels, "REST is always based on HTTP"? I think you just opened a can of worms... Roy Fielding specifically stays away from HTTP-specific implementation details :). There are plenty of implementations that use RESTful ideas without HTTP. -Solomon On Wed, Apr 22, 2009 at 7:12 AM, troels knak-nielsen <troelskn@...> wrote: > > > On Wed, Apr 22, 2009 at 1:07 PM, Solomon Duskis <sduskis@...> > wrote: > > > > > > REST is a distributed programming architecture style. It's used to > exchange > > data between two applications. If you're using HTTP based REST, then you > > REST is always based on HTTP. You could say that REST is a set of > best-practises for working with HTTP. > > On Wed, Apr 22, 2009 at 2:17 AM, cule_barca <vantu.ituns@...> > wrote: > > > > > > I have just start with REST so there are some difficuties for me in my > > researching. > > > > I know that REST is an architecture style. what can be design by REST? I > > only find that REST used to design webservice. Are there anything that > can > > be design by REST, such as a win application, or somthing else?? > > > > Would you please tell me some thing about this. Thank you very very much? > > http://www.infoq.com/articles/rest-introduction > > -- > troels
On Apr 22, 2009, at 1:12 PM, troels knak-nielsen wrote: > On Wed, Apr 22, 2009 at 1:07 PM, Solomon Duskis <sduskis@...> > wrote: >> >> >> REST is a distributed programming architecture style. It's used to >> exchange >> data between two applications. If you're using HTTP based REST, >> then you > > REST is always based on HTTP. You could say that REST is a set of > best-practises for working with HTTP. Umm, no. REST is an architectural style and you could implement this style as something completely different than HTTP. Jan > > > On Wed, Apr 22, 2009 at 2:17 AM, cule_barca <vantu.ituns@...> > wrote: >> >> >> I have just start with REST so there are some difficuties for me in >> my >> researching. >> >> I know that REST is an architecture style. what can be design by >> REST? I >> only find that REST used to design webservice. Are there anything >> that can >> be design by REST, such as a win application,or somthing else?? >> >> Would you please tell me some thing about this. Thank you very very >> much? > > http://www.infoq.com/articles/rest-introduction > > -- > troels > > > ------------------------------------ > > Yahoo! Groups Links > > >
REST is an architectural style, and one of its current, famous implementations uses HTTP and URIs. Some even say that the implementations haven't yet conveyed all the ideas of REST. It can be used to develop network-based applications***. For a quick review of the REST, RPC (SOAP is an example), and REST-RPC hybrid architectural styles:

1. RPC style (SOAP, for instance): the method information* (what the server should do with the data: delete/add/update etc.) and the scoping information* (what data the server should act on) are both in an "envelope", e.g. a SOAP message. So you can't have direct access to a resource using solely a "URL". Why that matters: you can't (or it is difficult to) make links among your resources, and you can't just pass your resources around to other users by their URLs. In that case you lose the hypermedia/hypertext property of the Web.

2. REST-RPC hybrid: you "may" have scoping information and method information both in a URL. But the problem is the uniform interface, e.g. the misuse of standard HTTP methods in the REST-RPC style of some web applications. For instance, you may have GET http://www.example.com?method=delete&Id=123 to delete the employee whose Id is 123, while GET should only be used to retrieve a representation of a resource. Why does the uniform interface matter here? One example is that it would be all right for the human web, but a problem for an automated tool (e.g. Google Web Accelerator** at the time of its first release) which "thinks" GET is safe* (not changing resource state), so it fetches the URL without "knowing" it is deleting the resource (not safe). Another problem with this kind of web application concerns, for instance, handling the browser's Back button: you may resend the information and redo the transaction more than once. This can easily happen given the "not always available" nature of connections like the Internet; when you are not sure your transaction went through, you may try to resend it.

3. RESTful, and one of its implementations, using HTTP and URIs: if used correctly, PUT/HEAD/GET are safe or idempotent (you can resend the transaction several times). And most of the time (if you designed your web app well) the scoping information and method information are in the URL, so you have direct access to the resources (their representations), you can make links among resources, and automated tools will not misunderstand the scoping information. All in all, you keep the hypermedia/hypertext property of the Web, and you increase interoperability and the usefulness of automated tools. Furthermore, an implementation using URIs enables you to use current XML technologies that build on URIs. The problem is that sometimes we can't just use the safe/idempotent standard HTTP methods but have to use overloaded POST*, so you may to some extent lose the safe/idempotent properties. This is the reason some say the REST implementation using HTTP and URIs does not yet convey all the ideas of REST. You may also want to give some further thought to the supporting tools (ease of development) and the needs of your applications (interoperability is one of the concerns) when developing your network-based applications (not just web services, unless you take the view that web services include both programmatic web services and the normal current web*; I think I had better speak of network-based applications in general, as in Roy Thomas Fielding's dissertation).

*: these terminologies are taken from Leonard R., Sam R., "RESTful Web Services", First Edition, 2007, O'Reilly Media, Inc., USA.
**: Google Web Accelerator is no longer available for download, and I am not so sure whether this problem was the cause.
***: Roy Thomas Fielding's dissertation on REST.

Pham Van Vung - Arthur. www.online-emark.com Grad Std. Politecnico di Milano.
________________________________ From: Jan Algermissen <algermissen1971@...> To: troels knak-nielsen <troelskn@...> Cc: rest-discuss@yahoogroups.com Sent: Wednesday, April 22, 2009 6:47:31 PM Subject: Re: [rest-discuss] REST is used for ??? On Apr 22, 2009, at 1:12 PM, troels knak-nielsen wrote: > On Wed, Apr 22, 2009 at 1:07 PM, Solomon Duskis <sduskis@...> > wrote: >> >> >> REST is a distributed programming architecture style. It's used to >> exchange >> data between two applications. If you're using HTTP based REST, >> then you > > REST is always based on HTTP. You could say that REST is a set of > best-practises for working with HTTP. Umm, no. REST is an architectural style and you could implement this style as something completely different than HTTP. Jan > > > On Wed, Apr 22, 2009 at 2:17 AM, cule_barca <vantu.ituns@...> > wrote: >> >> >> I have just start with REST so there are some difficuties for me in >> my >> researching. >> >> I know that REST is an architecture style. what can be design by >> REST? I >> only find that REST used to design webservice. Are there anything >> that can >> be design by REST, such as a win application, or somthing else?? >> >> Would you please tell me some thing about this. Thank you very very >> much? > > http://www.infoq.com/articles/rest-introduction > > -- > troels
On Wed, Apr 22, 2009 at 6:19 PM, Solomon Duskis <sduskis@...> wrote: > troels, "REST is always based on HTTP"? I think you just opened a can of > worms... Roy Fielding specifically stays away from HTTP specific > implementation details :). There are plenty of implementation that use > RESTful ideas without HTTP. Judging from the rest of the replies, so it seems. I hadn't really thought about it that way. Do you have any examples of REST outside of HTTP? Nice list you've got here, by the way. -- troels
At the moment the REST-based architecture we've implemented (or would-be REST, when I have time to implement complete HATEOAS) is being used with several connectors, namely: HTTP, IMAP, JMS, JCR, intra-VM, and others will follow as needed... On Apr 23, 2009 9:40am, troels knak-nielsen <troelskn@...> wrote: > On Wed, Apr 22, 2009 at 6:19 PM, Solomon Duskis <sduskis@...> wrote: > > troels, "REST is always based on HTTP"? I think you just opened a can of > > worms... Roy Fielding specifically stays away from HTTP specific > > implementation details :). There are plenty of implementation that use > > RESTful ideas without HTTP. > Judging from the rest of the replies, so it seems. I hadn't really > though about it that way. Do you have any examples of rest outside of > http? > Nice list you got here, by the way. > -- > troels >
Hello,
I am quite new to REST, but I am very intrigued by it and seriously
considering utilizing it as architectural guidelines for my future
enterprise solutions. Our current service architecture is all SOAP. My
peers posed a question about how a SOAP API would map to REST. I believe
I have a good solution for it, but I would like your guys' opinions.
SOAP API:
bool IsVirusFree(byte[] documentBytes)
Hopefully it is obvious that this is a method that checks the document
for viruses and returns true/false. I understand that the representation
you POST/PUT should be the same one you GET, but obviously I don't want the
document back, just whether or not the document is virus free.
The following is my idea on how it could be implemented using REST.
REST conversation:
Request: POST /document/
Response: Status 201
Location: /document/[random_file_name]
Request: POST /viruscheckrequest
<virusCheckRequest><documentUri>/document/[random_file_name]</documentUri><status/><isVirusFree/></virusCheckRequest>
Response: Status 201
Location: /viruscheckrequest/[id]
Request: GET /viruscheckrequest/[id]
Response: Status 200
<virusCheckRequest><documentUri>/document/[random_file_name]</documentUri><status>pending</status><isVirusFree>unknown</isVirusFree></virusCheckRequest>
Request: GET /viruscheckrequest/[id]
Response: Status 200
<virusCheckRequest><documentUri>/document/[random_file_name]</documentUri><status>complete</status><isVirusFree>true</isVirusFree></virusCheckRequest>
// Would it be OK if when the check completes that it automatically
// DELETEs the document resource? Should that be a boolean element in
// the virusCheckRequest?
I also thought about this alternative:
REST conversation:
Request: POST /document/
Response: Status 201
Location: /document/[random_file_name]
<link rel="virusCheck"
href="/document/[random_file_name]/viruscheck" />
// Is it a rule that POSTs must return the new resource if it returns
// anything in the body? It would be good to return the virusCheck
// link in the response for HATEOAS, but I don't want the whole
// document back.
Request: GET /document/[random_file_name]/virusCheck
Response: Status 200
<virusCheck><status>pending</status><isVirusFree>unknown</isVirusFree></virusCheck>
Request: GET /document/[random_file_name]/virusCheck
Response: Status 200
<virusCheck><status>complete</status><isVirusFree>true</isVirusFree></virusCheck>
What I like about the first alternative is that it allows for submitting
any URI to the virus checker, assuming it can access it. What I like
about the second alternative is that it seems more streamlined and
discoverable. You'll notice that I also have a couple of questions
inline. Any thoughts?
Thanks,
Mark
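The polling loop in Mark's first alternative can be sketched from the client side. This is a minimal illustration, not code from the thread: `fetch` is a hypothetical stand-in for an HTTP GET on `/viruscheckrequest/[id]` that returns the XML body as a string.

```python
import time
import xml.etree.ElementTree as ET

def poll_virus_check(fetch, interval=1.0, max_tries=10):
    """Poll a virusCheckRequest resource until <status> is 'complete'.

    `fetch` is a hypothetical helper standing in for the HTTP GET;
    a real client would use an HTTP library and honour any
    Retry-After hint from the server instead of a fixed interval.
    """
    for attempt in range(max_tries):
        doc = ET.fromstring(fetch())
        if doc.findtext("status") == "complete":
            # The representation reports "true"/"false" in <isVirusFree>.
            return doc.findtext("isVirusFree") == "true"
        if attempt + 1 < max_tries:
            time.sleep(interval)  # back off before polling again
    raise TimeoutError("virus check did not complete in time")
```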
What's wrong with GET /virus_check?uri=http://example.org/someuri (properly escaped, of course)? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On 25.04.2009, at 01:22, Mark Waddle wrote: > > > Hello, > > I am quite new to REST, but I am very intrigued by it and seriously > considering utilizing it as architectural guidelines for my future > enterprise solutions. Our current service architecture is all SOAP. > My peers posed a question about how a SOAP API would map to REST. I > believe I have a good solution for it, but I would like your guys' > opinions. > > SOAP API: > bool IsVirusFree(byte[] documentBytes) > > Hopefully it is obvious that this is a method that checks the > document for viruses and returns true/false. I understand that the > representation you POST/PUT should be the same you GET, but > obviously I don't want the document back, just whether or not the > document is virus free or not. The following is my idea on how it > could be implemented using REST. > > REST conversation: > Request: POST /document/ > Response: Status 201 > Location: /document/[random_file_name] > Request: POST /viruscheckrequest > <virusCheckRequest><documentUri>/document/ > [random_file_name]</documentUri><status/><isVirusFree/></ > virusCheckRequest> > Response: Status 201 > Location: /viruscheckrequest/[id] > Request: GET /viruscheckrequest/[id] > Response: Status 200 > <virusCheckRequest><documentUri>/document/ > [random_file_name]</documentUri><status>pending</ > status><isVirusFree>unknown</isVirusFree></virusCheckRequest> > Request: GET /viruscheckrequest/[id] > Response: Status 200 > <virusCheckRequest><documentUri>/document/ > [random_file_name]</documentUri><status>complete</ > status><isVirusFree>true</isVirusFree></virusCheckRequest> > > // Would it be OK if when the check completes that it automatically > // DELETEs the document resource? Should that be a boolean element in > // the virusCheckRequest? 
> > I also thought about this alternative: > > REST conversation: > Request: POST /document/ > Response: Status 201 > Location: /document/[random_file_name] > <link rel="virusCheck" href="/document/ > [random_file_name]/viruscheck" /> > // Is it a rule that POSTs must return the new resource if it returns > // anything in the body? It would be good to return the virusCheck > // link in the response for HATEOS, but I don't want the whole > // document back. > Request: GET /document/[random_file_name]/virusCheck > Response: Status 200 > <virusCheck><status>pending</ > status><isVirusFree>unknown</isVirusFree></virusCheck> > Request: GET /document/[random_file_name]/virusCheck > Response: Status 200 > <virusCheck><status>complete</ > status><isVirusFree>true</isVirusFree></virusCheck> > > What I like about the first alternative is that it allows for > submitting any URI to the virus checker, assuming it can access it. > What I like about the second alternative is that it seems more > streamlined and discoverable. You'll notice that I also have a > couple of questions inline. Any thoughts? > > Thanks, > Mark > >
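Stefan's "properly escaped" caveat is the main trap in his suggestion: the target URI contains reserved characters (`:` and `/`) that must be percent-encoded before it can ride inside a query string. A quick sketch with Python's standard library (the `/virus_check` path is just his hypothetical endpoint):

```python
from urllib.parse import urlencode

# Percent-encode the target URI so it can travel as a query parameter.
target = "http://example.org/someuri"
check_url = "/virus_check?" + urlencode({"uri": target})
print(check_url)  # /virus_check?uri=http%3A%2F%2Fexample.org%2Fsomeuri
```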
basic auth + SSL
digest + SSL
client certs + SSL
OAuth (but this is a specific aggregation use case)
In one of my presentations, somebody asked how you could encrypt message bodies to support untrusted intermediaries. I thought of the idea of using a special Content-Encoding for this scenario. I hope others can add to this list. jsarava wrote: > > > > Hello, > > Could you please explain me what is the best way to secure REST-based > services? Is SSL the only way? > > Expecting your expert advice on this. Thank you! > > With regards, > Saravan. > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
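To illustrate the first item on Bill's list: HTTP Basic auth merely base64-encodes the credentials, which is exactly why it needs SSL underneath. A minimal sketch (the function name and credentials are illustrative, not from the thread):

```python
import base64

def basic_auth_header(user, password):
    """Build an HTTP Basic Authorization header value.

    Base64 is trivially reversible, so this scheme is only safe
    when the connection itself is protected by SSL/TLS.
    """
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("alice", "secret"))  # Basic YWxpY2U6c2VjcmV0
```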
Bill Burke wrote: > basic auth + SSL > digest + SSL > client certs + SSL > > OAuth (but this is a specific aggregation usecase) > (What do you mean by a specific aggregation usecase? AFAIK there is no specific use case for OAuth; it was definitely intended to be usable for general RESTful resources.) > > In one my presentations, somebody asked about how could you encrypt > message bodies to support untrusted intermediaries. I thought of the > idea of using a special Content-Encoding for this scenario. > > I hope others can add to this list. > > jsarava wrote: > >> >> Hello, >> >> Could you please explain me what is the best way to secure REST based >> services? Is SSL only way? >> >> Expecting your expert adviceon this. Thank you! >> >> With regards, >> Saravan. >> >> >> > >
On Apr 25, 2009, at 4:37 PM, John Panzer wrote: > (What do you mean by a specific aggregation usecase? AFAIK there is > no specific use case for OAuth; it was definitely intended to be > usable for general RESTful resources.) Well - isn't the key use case for the OAuth protocol to let a user authorize one application to access data from another application? Subbu
When thinking about the Web (browsers) and Roy's thesis, do links and linkability really represent what HATEOAS is? I don't think so. Links aggregate information. They usually don't change resource state when they are followed. HTML forms, on the other hand, are the real "Engine of Application State" and are usually responsible for resource state changes. So, aren't HTML forms a better example of HATEOAS than links? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Sun, Apr 26, 2009 at 12:18 PM, Bill Burke <bburke@...> wrote: > When thinking about the Web (browsers) and Roy's thesis, do links and > linkability really represent what HATEOAS is? I don't think so. Links > aggregate information. They usually never change resource state when > they are followed. > The answer is right there in your own words. "Application state" != "resource state" So, the HTML page in your browser has a bunch of ordinary <a> links in it. That represents the state of a user's interaction with the application. The <a> links on the page tell you how to get from this state, to all the allowed next states. So even if you never encounter an HTML form, and never change any resource state, the clients running the application each has his/her/its own application state. > HTML forms on the other hand are the real "Engine of Application State" > and are usually responsible for resource state changes. > > So, aren't HTML forms a better example of HATEOAS than links? > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > ------------------------------------ > > Yahoo! Groups Links > > > > -- Hugh Winkler, CEO Wellstorm Development 31900 Ranch Road 12 Suite 206 Dripping Springs, TX 78620 USA http://www.wellstorm.com/ +1 512 264 3998 x801
I find it useful to view application state as the sum of client state
and those parts of resource state the client cares about. In other words:
the intersection of the application state of many parallel clients is
the server's resource state. I've successfully used this mental model
to explain things a few times, even though I'm pretty sure there's no
official blessing for it.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 26.04.2009, at 19:26, Hugh Winkler wrote:
>
>
> On Sun, Apr 26, 2009 at 12:18 PM, Bill Burke <bburke@...>
> wrote:
> > When thinking about the Web (browsers) and Roy's thesis, do links
> and
> > linkability really represent what HATEOAS is? I don't think so.
> Links
> > aggregate information. They usually never change resource state
> when
> > they are followed.
> >
>
> The answer is right there in your own words. "Application state" !=
> "resource state"
>
> So, the HTML page in your browser has a bunch of ordinary <a> links in
> it. That represents the state of a user's interaction with the
> application. The <a> links on the page tell you how to get from this
> state, to all the allowed next states.
>
> So even if you never encounter an HTML form, and never change any
> resource state, the clients running the application each has
> his/her/its own application state.
>
> > HTML forms on the other hand are the real "Engine of Application
> State"
> > and are usually responsible for resource state changes.
> >
>
> > So, aren't HTML forms a better example of HATEOAS than links?
> > --
> > Bill Burke
> > JBoss, a division of Red Hat
> > http://bill.burkecentral.com
> >
> >
> > ------------------------------------
> >
> > Yahoo! Groups Links
> >
> >
> >
> >
>
> --
> Hugh Winkler, CEO
> Wellstorm Development
> 31900 Ranch Road 12
> Suite 206
> Dripping Springs, TX 78620
> USA
> http://www.wellstorm.com/
> +1 512 264 3998 x801
>
Hugh Winkler wrote: > On Sun, Apr 26, 2009 at 12:18 PM, Bill Burke <bburke@...> wrote: >> When thinking about the Web (browsers) and Roy's thesis, do links and >> linkability really represent what HATEOAS is? I don't think so. Links >> aggregate information. They usually never change resource state when >> they are followed. >> > > The answer is right there in your own words. "Application state" != > "resource state" > > So, the HTML page in your browser has a bunch of ordinary <a> links in > it. That represents the state of a user's interaction with the > application. The <a> links on the page tell you how to get from this > state, to all the allowed next states. > > So even if you never encounter an HTML form, and never change any > resource state, the clients running the application each has > his/her/its own application state. > So the "application" in Engine of Application State is really your browser. Links provide a way to change the state of your browser. Forms provide a way to change the state of your resource. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
No the application is the site/webapp/whatever ... The idea being to tell the client what to do next in the application via Hypermedia instead of previously shared knowledge - to reduce coupling. This is HATEOAS Cheers Devdatta 2009/4/26 Bill Burke <bburke@...>: > > > > > Hugh Winkler wrote: >> On Sun, Apr 26, 2009 at 12:18 PM, Bill Burke <bburke@...> wrote: >>> When thinking about the Web (browsers) and Roy's thesis, do links and >>> linkability really represent what HATEOAS is? I don't think so. Links >>> aggregate information. They usually never change resource state when >>> they are followed. >>> >> >> The answer is right there in your own words. "Application state" != >> "resource state" >> >> So, the HTML page in your browser has a bunch of ordinary <a> links in >> it. That represents the state of a user's interaction with the >> application. The <a> links on the page tell you how to get from this >> state, to all the allowed next states. >> >> So even if you never encounter an HTML form, and never change any >> resource state, the clients running the application each has >> his/her/its own application state. >> > > So the "application" in Engine of Application State is really your > browser. Links provide a way to change the state of your browser. > Forms provide a way to change the state of your resource. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
The problem I see is that the virus check can take an indeterminate amount of time depending on the server load and the connectivity to example.org. To me it seems necessary to go asynchronous. Am I over architecting? Mark --- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote: > > What's wrong with > > GET /virus_check?uri=http://example.org/someuri > > (properly escaped, of course)? > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > On 25.04.2009, at 01:22, Mark Waddle wrote: > > > > > > > Hello, > > > > I am quite new to REST, but I am very intrigued by it and seriously > > considering utilizing it as architectural guidelines for my future > > enterprise solutions. Our current service architecture is all SOAP. > > My peers posed a question about how a SOAP API would map to REST. I > > believe I have a good solution for it, but I would like your guys' > > opinions. > > > > SOAP API: > > bool IsVirusFree(byte[] documentBytes) > > > > Hopefully it is obvious that this is a method that checks the > > document for viruses and returns true/false. I understand that the > > representation you POST/PUT should be the same you GET, but > > obviously I don't want the document back, just whether or not the > > document is virus free or not. The following is my idea on how it > > could be implemented using REST. 
> > > > REST conversation: > > Request: POST /document/ > > Response: Status 201 > > Location: /document/[random_file_name] > > Request: POST /viruscheckrequest > > <virusCheckRequest><documentUri>/document/ > > [random_file_name]</documentUri><status/><isVirusFree/></ > > virusCheckRequest> > > Response: Status 201 > > Location: /viruscheckrequest/[id] > > Request: GET /viruscheckrequest/[id] > > Response: Status 200 > > <virusCheckRequest><documentUri>/document/ > > [random_file_name]</documentUri><status>pending</ > > status><isVirusFree>unknown</isVirusFree></virusCheckRequest> > > Request: GET /viruscheckrequest/[id] > > Response: Status 200 > > <virusCheckRequest><documentUri>/document/ > > [random_file_name]</documentUri><status>complete</ > > status><isVirusFree>true</isVirusFree></virusCheckRequest> > > > > // Would it be OK if when the check completes that it automatically > > // DELETEs the document resource? Should that be a boolean element in > > // the virusCheckRequest? > > > > I also thought about this alternative: > > > > REST conversation: > > Request: POST /document/ > > Response: Status 201 > > Location: /document/[random_file_name] > > <link rel="virusCheck" href="/document/ > > [random_file_name]/viruscheck" /> > > // Is it a rule that POSTs must return the new resource if it returns > > // anything in the body? It would be good to return the virusCheck > > // link in the response for HATEOS, but I don't want the whole > > // document back. > > Request: GET /document/[random_file_name]/virusCheck > > Response: Status 200 > > <virusCheck><status>pending</ > > status><isVirusFree>unknown</isVirusFree></virusCheck> > > Request: GET /document/[random_file_name]/virusCheck > > Response: Status 200 > > <virusCheck><status>complete</ > > status><isVirusFree>true</isVirusFree></virusCheck> > > > > What I like about the first alternative is that it allows for > > submitting any URI to the virus checker, assuming it can access it. 
> > What I like about the second alternative is that it seems more > > streamlined and discoverable. You'll notice that I also have a > > couple of questions inline. Any thoughts? > > > > Thanks, > > Mark > > > > >
Still, I think an HTML form is an excellent illustration of HATEOAS on the Web. It is a self-describing *interaction* between the client and server where a link is just a transition (on the WEB) to different information. Devdatta wrote: > No the application is the site/webapp/whatever ... > > The idea being to tell the client what to do next in the application > via Hypermedia instead of previously shared knowledge - to reduce > coupling. This is HATEOAS > > Cheers > Devdatta > > > 2009/4/26 Bill Burke <bburke@...>: >> >> >> >> Hugh Winkler wrote: >>> On Sun, Apr 26, 2009 at 12:18 PM, Bill Burke <bburke@...> wrote: >>>> When thinking about the Web (browsers) and Roy's thesis, do links and >>>> linkability really represent what HATEOAS is? I don't think so. Links >>>> aggregate information. They usually never change resource state when >>>> they are followed. >>>> >>> The answer is right there in your own words. "Application state" != >>> "resource state" >>> >>> So, the HTML page in your browser has a bunch of ordinary <a> links in >>> it. That represents the state of a user's interaction with the >>> application. The <a> links on the page tell you how to get from this >>> state, to all the allowed next states. >>> >>> So even if you never encounter an HTML form, and never change any >>> resource state, the clients running the application each has >>> his/her/its own application state. >>> >> So the "application" in Engine of Application State is really your >> browser. Links provide a way to change the state of your browser. >> Forms provide a way to change the state of your resource. >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com >> -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Right... The hypertext constraint specifies that potential "workflows" are captured through linking or other hypermedia descriptors. Roy Fielding called the current set of workflows/links a client "workspace." I would say that forms are part of the hypertext constraint (and workspaces) as well. For example, take search functionality. Search requires a form because it requires a "search term" parameter as part of the request URL. The server tells the client that search is one potential workflow by embedding a form in the media. A search form that updates "application state" (and not resource state) seems to me to be a consistent implementation of the hypertext constraint. A form that updates resource state isn't that much different. Those forms generally also update application state. IMHO, forms are an important REST component, which is unfortunately underused in "REST Services." -Solomon On Sun, Apr 26, 2009 at 1:57 PM, Devdatta <dev.akhawe@...> wrote: > > > No the application is the site/webapp/whatever ... > > The idea being to tell the client what to do next in the application > via Hypermedia instead of previously shared knowledge - to reduce > coupling. This is HATEOAS > > Cheers > Devdatta > > 2009/4/26 Bill Burke <bburke@... <bburke%40redhat.com>>: > > > > > > > > > > > Hugh Winkler wrote: > >> On Sun, Apr 26, 2009 at 12:18 PM, Bill Burke <bburke@...<bburke%40redhat.com>> > wrote: > >>> When thinking about the Web (browsers) and Roy's thesis, do links and > >>> linkability really represent what HATEOAS is? I don't think so. Links > >>> aggregate information. They usually never change resource state when > >>> they are followed. > >>> > >> > >> The answer is right there in your own words. "Application state" != > >> "resource state" > >> > >> So, the HTML page in your browser has a bunch of ordinary <a> links in > >> it. That represents the state of a user's interaction with the > >> application. 
The <a> links on the page tell you how to get from this > >> state, to all the allowed next states. > >> > >> So even if you never encounter an HTML form, and never change any > >> resource state, the clients running the application each has > >> his/her/its own application state. > >> > > > > So the "application" in Engine of Application State is really your > > browser. Links provide a way to change the state of your browser. > > Forms provide a way to change the state of your resource. > > > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com > > > >
There's no reason why a RESTful application can't be asynchronous if the requirements demand it. It can be done either with polling (and a Location header) or with a URL callback (as another query/form parameter)... and a 202 Accepted status. -Solomon On Sun, Apr 26, 2009 at 2:17 PM, Mark Waddle <mark@...> wrote: > > > The problem I see is that the virus check can take an indeterminate amount > of time depending on the server load and the connectivity to example.org. > To me it seems necessary to go asynchronous. Am I over architecting? > > Mark > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > Stefan Tilkov <stefan.tilkov@...> wrote: > > > > What's wrong with > > > > GET /virus_check?uri=http://example.org/someuri > > > > (properly escaped, of course)? > > > > Stefan > > -- > > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > On 25.04.2009, at 01:22, Mark Waddle wrote: > > > > > > > > > > > Hello, > > > > > > I am quite new to REST, but I am very intrigued by it and seriously > > > considering utilizing it as architectural guidelines for my future > > > enterprise solutions. Our current service architecture is all SOAP. > > > My peers posed a question about how a SOAP API would map to REST. I > > > believe I have a good solution for it, but I would like your guys' > > > opinions. > > > > > > SOAP API: > > > bool IsVirusFree(byte[] documentBytes) > > > > > > Hopefully it is obvious that this is a method that checks the > > > document for viruses and returns true/false. I understand that the > > > representation you POST/PUT should be the same you GET, but > > > obviously I don't want the document back, just whether or not the > > > document is virus free or not. The following is my idea on how it > > > could be implemented using REST. 
> > > > > > REST conversation: > > > Request: POST /document/ > > > Response: Status 201 > > > Location: /document/[random_file_name] > > > Request: POST /viruscheckrequest > > > <virusCheckRequest><documentUri>/document/ > > > [random_file_name]</documentUri><status/><isVirusFree/></ > > > virusCheckRequest> > > > Response: Status 201 > > > Location: /viruscheckrequest/[id] > > > Request: GET /viruscheckrequest/[id] > > > Response: Status 200 > > > <virusCheckRequest><documentUri>/document/ > > > [random_file_name]</documentUri><status>pending</ > > > status><isVirusFree>unknown</isVirusFree></virusCheckRequest> > > > Request: GET /viruscheckrequest/[id] > > > Response: Status 200 > > > <virusCheckRequest><documentUri>/document/ > > > [random_file_name]</documentUri><status>complete</ > > > status><isVirusFree>true</isVirusFree></virusCheckRequest> > > > > > > // Would it be OK if when the check completes that it automatically > > > // DELETEs the document resource? Should that be a boolean element in > > > // the virusCheckRequest? > > > > > > I also thought about this alternative: > > > > > > REST conversation: > > > Request: POST /document/ > > > Response: Status 201 > > > Location: /document/[random_file_name] > > > <link rel="virusCheck" href="/document/ > > > [random_file_name]/viruscheck" /> > > > // Is it a rule that POSTs must return the new resource if it returns > > > // anything in the body? It would be good to return the virusCheck > > > // link in the response for HATEOS, but I don't want the whole > > > // document back. 
> > > Request: GET /document/[random_file_name]/virusCheck > > > Response: Status 200 > > > <virusCheck><status>pending</ > > > status><isVirusFree>unknown</isVirusFree></virusCheck> > > > Request: GET /document/[random_file_name]/virusCheck > > > Response: Status 200 > > > <virusCheck><status>complete</ > > > status><isVirusFree>true</isVirusFree></virusCheck> > > > > > > What I like about the first alternative is that it allows for > > > submitting any URI to the virus checker, assuming it can access it. > > > What I like about the second alternative is that it seems more > > > streamlined and discoverable. You'll notice that I also have a > > > couple of questions inline. Any thoughts? > > > > > > Thanks, > > > Mark > > > > > > > > > > >
On Apr 26, 2009, at 10:38 AM, Bill Burke wrote: > So the "application" in Engine of Application State is really your > browser. Links provide a way to change the state of your browser. > Forms provide a way to change the state of your resource. > Most of the time, with some caveats: The "application" is what the user is trying to accomplish, such as "buy a book" or "transfer money from one account to another" or "watch some Monty Python episode". The browser is just the software that presents and operates upon the application state. Forms usually change the state of the browser as well. Links and forms are specific UI mechanisms in HTML that teach the browser how to construct the request upon activation. A more elaborate media type could have more elaborate mechanisms, and non-browser clients are even less restricted in how they interact with media. Although GET requests are not requesting a state change, it is still possible for some resource states to change in response to a GET. For example, there may be some other resource that counts the number of GETs, or records the most recent user agent. ....Roy
On Fri, Apr 24, 2009 at 6:22 PM, Mark Waddle <mark@...> wrote: > > > Hello, > > I am quite new to REST, but I am very intrigued by it and seriously > considering utilizing it as architectural guidelines for my future > enterprise solutions. Our current service architecture is all SOAP. My peers > posed a question about how a SOAP API would map to REST. I believe I have a > good solution for it, but I would like your guys' opinions. > > SOAP API: > bool IsVirusFree(byte[] documentBytes) > > Hopefully it is obvious that this is a method that checks the document for > viruses and returns true/false. I understand that the representation you > POST/PUT should be the same you GET, but obviously I don't want the document > back, just whether or not the document is virus free or not. The following > is my idea on how it could be implemented using REST. > > REST conversation: > Request: POST /document/ > Response: Status 201 > Location: /document/[random_file_name] > Request: POST /viruscheckrequest > > <virusCheckRequest><documentUri>/document/[random_file_name]</documentUri><status/><isVirusFree/></virusCheckRequest> > Response: Status 201 > Location: /viruscheckrequest/[id] > Request: GET /viruscheckrequest/[id] > Response: Status 200 > > <virusCheckRequest><documentUri>/document/[random_file_name]</documentUri><status>pending</status><isVirusFree>unknown</isVirusFree></virusCheckRequest> > Request: GET /viruscheckrequest/[id] > Response: Status 200 > > <virusCheckRequest><documentUri>/document/[random_file_name]</documentUri><status>complete</status><isVirusFree>true</isVirusFree></virusCheckRequest> > > // Would it be OK if when the check completes that it automatically > // DELETEs the document resource? Should that be a boolean element in > // the virusCheckRequest? 
> > I also thought about this alternative: > > REST conversation: > Request: POST /document/ > Response: Status 201 > Location: /document/[random_file_name] > <link rel="virusCheck" > href="/document/[random_file_name]/viruscheck" /> > // Is it a rule that POSTs must return the new resource if it returns > // anything in the body? It would be good to return the virusCheck > // link in the response for HATEOAS, but I don't want the whole > // document back. > Request: GET /document/[random_file_name]/virusCheck > Response: Status 200 > > <virusCheck><status>pending</status><isVirusFree>unknown</isVirusFree></virusCheck> > Request: GET /document/[random_file_name]/virusCheck > Response: Status 200 > > <virusCheck><status>complete</status><isVirusFree>true</isVirusFree></virusCheck> > > What I like about the first alternative is that it allows for submitting any > URI to the virus checker, assuming it can access it. What I like about the > second alternative is that it seems more streamlined and discoverable. > You'll notice that I also have a couple of questions inline. Any thoughts? > > Thanks, > Mark > > > If you're OK with alternative 2, consider doing it like this: http://hughw.blogspot.com/2008/06/asynchronous-http-post.html in which the client POSTs the document, and the server redirects to the URI of the virus check report, which itself returns 202 in response to GET until it is finished, when it finally returns 200. This technique is a little stretch on most people's interpretation of 202, because you usually see that in response to POST, not GET. But it works great, because browsers just treat 202 like 200, and "202-aware" clients know to "try again later". Hugh
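The virusCheckRequest representations in Mark's conversation are easy to handle client-side. Below is a minimal sketch, assuming the element names from his examples; `parse_virus_check` is a hypothetical helper, not part of any proposed API:

```python
import xml.etree.ElementTree as ET

def parse_virus_check(xml_text):
    """Read a virusCheckRequest representation and report whether the
    client should keep polling (status still pending) or is done."""
    root = ET.fromstring(xml_text)
    status = root.findtext("status") or "pending"
    done = (status == "complete")
    # isVirusFree is only meaningful once the check has completed
    verdict = (root.findtext("isVirusFree") == "true") if done else None
    return {"done": done, "virus_free": verdict}
```

A polling client would simply GET /viruscheckrequest/[id] until `done` comes back true.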
> > > So the "application" in Engine of Application State is really your > browser. Links provide a way to change the state of your browser. > Forms provide a way to change the state of your resource. > The browser, or any other user agent, holds a "representation" of a resource, and as such a specific state of an application, taken as the functionality provided by a set of resources. Links provide a way to change the state of the application, for example by making the browser receive a representation of another resource. Changes to the resource state will usually be made by methods other than GET (POST, PUT, DELETE, ...), which also change the application state. At least, this is how I would explain it...
I grok what you're saying, but I've always thought of Application as the overall set of states provided by the server (or set of interlinked servers), rather than what the client does. I've understood "Application State" as a specific "node" of functionality/information that the server provides, and HATEOAS as a constraint that all client selections of the next application state must come through an interaction with server-provided unique references/keys representing that state (URLs, URNs, etc.). Taking a step back, if "Application" defines what the user is trying to accomplish (something I've thought of as "workflow" -- yet another overloaded term), what would you call the overall set of states that the server is providing? -Solomon On Sun, Apr 26, 2009 at 2:57 PM, Roy T. Fielding <fielding@...> wrote: > The "application" is what the user is trying to accomplish, > such as "buy a book" or "transfer money from one account > to another" or "watch some monty python episode".
> > Taking a step back, if "Application" defines what the user is trying to > accomplish (something I've thought of as "workflow" -- yet another > overloaded term), what would you call the overall set of states that > the server is providing? > Ideally, why would a server provide* a state that is not part of what the user is trying to achieve? imho, I think this definitions game would just end us up in circles ... *provide := keep visible to the user Cheers Devdatta
There are different types of clients for a given server. Each client may use a slice of the overall set of states. For example, admin vs. end user, or a user with task A (perform a money transfer) vs. a user with task B (year-end tax information retrieval). The server therefore has to provide different "states" that individual users will not see. Definition games may end up driving us in circles, but IMHO, a big part of defining REST relies on defining the use of overloaded, complex and misunderstood terminology such as "application state" and "stateless communication." REST has been defined by Roy in a nine-year-old PhD dissertation and a few recent follow-up blogs (including a couple written in frustration). The core ideas have been interpreted and re-interpreted, muddled and muddied. The experts don't agree on a complete formal meaning of REST. Experts from whom I've learned REST have admitted that they don't have all of the answers related to REST, nor do they claim to have a complete set of best practices. Even supporting technologies, including HTTP, are under review right now for more clarity based on new uses. The more the developer community pushes what can be done with REST, the more questions about REST arise. As those questions arise, more RESTful terminology surfaces, with a greater need for discourse and definitions. -Solomon On Sun, Apr 26, 2009 at 3:38 PM, Devdatta <dev.akhawe@...> wrote: > Ideally, why would a server provide* a state that is not part of what > the user is trying to achieve? > > imho, I think this definitions game would just end us up in circles ...
A common pattern is to post the file for virus check (either by passing the
URI of the file or by passing the file as a payload) and return 200 w/ a
Location that points to the resource that represents the state of the virus
check for this particular file.
*** request
POST /vcheck/
Content-Type: multipart/form-data
Length: XXXX
... binary payload here
*** response
HTTP/1.1 200 OK
Location: /vcheck/file01
The actual representation of the resource at the /vcheck/file01 URI can
change over time. For example, when first visited the resource might return
a simple message: "Job in queue awaiting processing." After a while,
(re)visiting this URI could return a progress message ("50% completed") and
finally a detailed report on the results of the virus scan. Another option
would be to offer a response animated by client-side scripts that show a
progress bar, dancing bears, etc.
The advantage of this approach is that the client can submit several files
for processing without waiting for the results of any single file before
submitting the next one. Also, the results of the scan will be available
later for clients that must disconnect before the scan is complete, or
clients that want to revisit the history of the file scanning, including
search bots that collect the results to help unrelated clients view the
virus scans of commonly used files.
mca
http://amundsen.com/blog/
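The pattern above can be sketched as a small client loop. This is only an in-process simulation of the changing /vcheck/file01 representation (the class and message strings merely echo the examples in this message; no real HTTP is involved):

```python
class VirusCheckResource:
    """Stand-in for the resource at /vcheck/file01: its representation
    changes over time (queued -> progress -> final report)."""
    def __init__(self):
        self._states = iter([
            "Job in queue awaiting processing.",
            "50% completed",
            "Scan report: no viruses found.",
        ])
        self._last = None

    def get(self):
        # Each GET may observe a later state; the final state repeats.
        self._last = next(self._states, self._last)
        return self._last

def poll_until_report(resource, max_tries=10):
    """Revisit the status resource until the final report appears."""
    for _ in range(max_tries):
        body = resource.get()
        if body.startswith("Scan report"):
            return body
    raise TimeoutError("scan still running")
```

Because the status URI is a plain resource, a client can submit many files and poll each status resource independently, exactly as described above.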
On Sun, Apr 26, 2009 at 5:53 PM, mike amundsen <mamund@...> wrote:
>
>
> A common pattern is to post the file for virus check (either by passing the
> URI of the file or by passing the file as a payload) and return 200 w/ a
> Location that points to the resource that represents the state of the virus
> check for this particular file.
>
> *** request
> POST /vcheck/
> Content-Type: multipart/form-data
> Length: XXXX
> ... binary payload here
>
> '** response
> HTTP/1.1 200 OK
> Location: /vcheck/file01
>
But a browser won't follow that Location header. At a minimum you'd
want to return hypertext in the response with a link to that status
URI. And maybe instead of 200, 202 would be better, to indicate
asynchronicity. And in that case you come back to the logic I outlined
:) where I suggest 303 rather than 202.
Also the status/result page is cacheable, whether you come to it by
clicking a hyperlink or by a 303 redirect.
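The 303-then-202 conversation Hugh describes can be captured as a small client-side decision table. This is a sketch of the pattern only, under the assumption that the server behaves as his blog post outlines; `next_step` is a hypothetical helper:

```python
def next_step(method, status, location=None):
    """Decide the client's next move in the asynchronous POST pattern:
    POST is redirected (303) to a report URI whose GET answers 202
    until the work finishes, at which point GET answers 200."""
    if method == "POST" and status == 303:
        return ("GET", location)       # follow the redirect to the report
    if method == "GET" and status == 202:
        return ("RETRY_LATER", None)   # accepted but not finished yet
    if method == "GET" and status == 200:
        return ("DONE", None)          # final report is available
    raise ValueError(f"unexpected {status} for {method}")
```

A browser falls through this naturally (it follows the 303 and renders whatever the report URI returns), while a "202-aware" client knows to come back later.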
On Mon, Apr 27, 2009 at 04:52, Solomon Duskis <sduskis@...> wrote: > I would say that forms are part of the hypertext constraint (and > workspaces) as well. To make things a bit clearer, forms are simply links with a UI attached to help people write the links, nothing more. They are just links. I can search Google by typing in ?q=some%20stuff%20here at the end of the link to their main website. It's just a link. Forms are part of HATEOAS since links are part of HATEOAS. Just had to mention that. :) Regards, Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
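Alex's point, that a GET form is just a machine-assisted way of writing a link, is easy to make concrete. A minimal sketch (`submit_get_form` is a hypothetical helper; note that browsers typically encode a space as "+" rather than "%20", which is equivalent in a query string):

```python
from urllib.parse import urlencode

def submit_get_form(action, fields):
    """What a browser does when a GET form is submitted: urlencode the
    field values and append them to the action URI as a query string."""
    return f"{action}?{urlencode(fields)}"
```

So `submit_get_form("http://www.google.com/search", {"q": "some stuff here"})` produces the same kind of link one could type by hand.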
I do agree that <forms> fundamentally serve the role of fulfilling HATEOAS. However, I disagree that they are "simply links with UI attached." I'd put it a bit differently: forms define the exact nature of client interaction. Note that "client" doesn't necessarily mean UI; it can also mean a programmatic client. Forms can be used to create hackable GET URLs (?<queryParam>=<value>), but POST is more common in practice. Forms, IMHO, are severely underused in RESTful service environments. The concept of a server-provided request template is a potentially powerful mechanism for providing runtime metadata about conversation semantics. In other words, forms can provide the same function as WSDL or WADL. The difference between forms and the W*DLs is that forms are generated to define the semantics of complex interaction specifically related to the current application state's workspace needs (meaning the accessible application states that are most likely to be traversed by the client), rather than a definition of the complete system. WSDL/WADL define the details of all application states, while forms define applicable details of applicable application states. -Solomon On Sun, Apr 26, 2009 at 8:02 PM, Alexander Johannesen <alexander.johannesen@...> wrote: > To make things a bit clearer, forms are simply links with a UI > attached to help people write the links, nothing more.
2009/4/27 Solomon Duskis <sduskis@...>: > There are different types of clients for a given server. Each client may > use a slice of the overall set of states. For example, admin vs. end user, > or a user with task A (perform a money transfer) vs. a user with task B > (year-end tax information retrieval). The server therefore has to provide > different "states" that individual users will not see. I think these would be two different applications. As Roy said, the "application" is what the user is trying to accomplish. Now, I know we can argue that this is actually a single application (say, a Financial Application with sub-modules for doing tasks A and B), but this is the kind of granularity in defining an application that helps me understand HATEOAS. I completely agree that REST terms do need a clearer definition; I am just not sure that the term "application" can be both succinctly and completely defined at the same time. Cheers Devdatta
Solomon Duskis wrote: > Forms, IMHO, are severely underused in RESTful service environments. > The concept of a server provided request template is a potentially > powerful mechanism of providing runtime meta-data about conversation > semantics. Forms are just another media type. An HTML form is *exactly* the same as XML: in XML your media type is "application/xml" and your template is XSD; for HTML forms, your media type is application/x-www-form-urlencoded and your template is form markup. That being said, my whole point in this thread was to say that forms should be mentioned when explaining HATEOAS, and that they may be a better analogy than links. This is because when non-REST people think of links, they think of surfing information; when they think of forms, they think of actually interacting with a server. I guess what I'm saying is that I was trying to find better ways to sell HATEOAS as a viable way to model web services. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
List, I am creating a RESTful API for my service. I have a launching customer testing the API from their perspective. Just now one of their developers ran into my office: he wanted to catch (4xx and 5xx) errors. My application can return a 403 response code for several reasons. In the body of the response I include a human-readable error message explaining why things went awry. Here's the problem: he wants to act differently on the different kinds of 403 responses, if only to notify the user in different ways. My error message cannot (always) be copied 1-to-1 to their frontend. What is he to do? Match on the possibly changing human-readable error message? Obviously this is very brittle. Pass my error message to their frontend? More often than not this is not desirable. Or should I include an error identifier in the response body, so that he may match on that? This introduces a fair bit of coupling which I don't like at all and feels very unRESTful. The only semi-helpful post I found was this one: http://www.onlamp.com/pub/wlg/4009 I would very much appreciate your help. With kind regards, Harm
RFC 2616 says: HTTP status codes are extensible. HTTP applications are not required to understand the meaning of all registered status codes, though such understanding is obviously desirable. However, applications MUST understand the class of any status code, as indicated by the first digit, and treat any unrecognized response as being equivalent to the x00 status code of that class, with the exception that an unrecognized response MUST NOT be cached. For example, if an unrecognized status code of 431 is received by the client, it can safely assume that there was something wrong with its request and treat the response as if it had received a 400 status code. In such cases, user agents SHOULD present to the user the entity returned with the response, since that entity is likely to include human- readable information which will explain the unusual status. _______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota _______________________________________________ 2009/4/27 harmaarts <harmaarts@...>: > > > List, > > I am creating a RESTful API for my service. I have a launching customer > testing the API from their perspective. Just now a developer of theirs ran > into my office. He wanted to catch (4xx and 5xx) errors. > > Now my application can return a 403 response code for several reasons. In > the body of the response I have a human readable error message explaining > why things went awry. > > Here's the problem, he wants to act different on different types of 403 > response codes. If only to notify the user in different ways. My error > message can not (always) be copied 1-to-1 to their frontend. > > What is he to do? Match on the possible changing human readable error > message? Obviously this is very brittle. > Pass my error message to their frontend? More often than not this is not > desirable. > Or should I include an error identifier in the response body? So that he may > match on that. 
This introduces a fair bit of coupling which I don't like at > all and feels very unRESTful. > > The only semi-helpful post I found was this one: > http://www.onlamp.com/pub/wlg/4009 > > I would very much appreciate your help. > > With kind regards, > Harm > >
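[The fallback rule RFC 2616 describes is mechanical enough to sketch; the set of recognized codes below is illustrative, not exhaustive:]

```python
def effective_status(code, recognized=frozenset({200, 301, 304, 400, 403, 404, 500, 503})):
    """Treat an unrecognized status code as the x00 code of its class (RFC 2616)."""
    return code if code in recognized else (code // 100) * 100

print(effective_status(431))  # 400: unrecognized 4xx falls back to 400
print(effective_status(403))  # 403: recognized codes pass through
```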
Why not send back a particular error representation format with your 403 response? harmaarts wrote: > > > > List, > > I am creating a RESTful API for my service. I have a launching customer > testing the API from their perspective. Just now a developer of theirs > ran into my office. He wanted to catch (4xx and 5xx) errors. > > Now my application can return a 403 response code for several reasons. > In the body of the response I have a human readable error message > explaining why things went awry. > > Here's the problem, he wants to act different on different types of 403 > response codes. If only to notify the user in different ways. My error > message can not (always) be copied 1-to-1 to their frontend. > > What is he to do? Match on the possible changing human readable error > message? Obviously this is very brittle. > Pass my error message to their frontend? More often than not this is not > desirable. > Or should I include an error identifier in the response body? So that he > may match on that. This introduces a fair bit of coupling which I don't > like at all and feels very unRESTful. > > The only semi-helpful post I found was this one: > http://www.onlamp.com/pub/wlg/4009 <http://www.onlamp.com/pub/wlg/4009> > > I would very much appreciate your help. > > With kind regards, > Harm > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
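[Bill's suggestion can be sketched with a hypothetical error document: give the representation a stable machine-readable <code> element that clients dispatch on, while the human-readable <message> stays free to change. The element names are invented for illustration:]

```python
import xml.etree.ElementTree as ET

# Hypothetical error representation returned with the 403 response.
error_xml = """<error>
  <code>account-suspended</code>
  <message>Your account has been suspended pending review.</message>
</error>"""

doc = ET.fromstring(error_xml)
code = doc.findtext("code")         # stable identifier for the client to match on
message = doc.findtext("message")   # display text, free to change
print(code)  # account-suspended
```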
I've just started with REST, so I have some difficulties identifying whether a system/website is RESTful or not. Are there any methods (ways) to do it clearly/basically? Example: is Gmail a RESTful system? How can I know? Please help me!
Jan:
I approach writing REST-ful clients as state machines. Some are very
limited, of course. Basically, I build clients that can handle a set of
media-types (I'm one of those who favors application/vnd.**** media-types).
That means that once a server publishes a media-type, changes like the one you
illustrate are not allowed, since they would break the client. However, the
server might issue a new media-type to handle breaking changes like the one
you show.
As for URIs, the server maintains at least one 'entry point' URI that the
client must know in advance. After that each representation sent from the
server can have one or more link elements (links, forms, etc.) that contain
viable URIs for the next step(s) the client can take to advance the app
state for that client session. To make this easier to handle for
machine-to-machine communication, I also rely heavily on the "rel" value as
a decoration on the link elements. This also means the client must have a
vocabulary of understood rel values as a way to select navigation options
(or inform humans of the same).
This approach frees the client app builder from committing tight-binding
errors by assuming the workflow themselves (hence the state machine and the
decorated hypermedia links in the representations). This also allows the
server to safely modify the workflow of the app w/o risking problems w/
tightly-bound clients.
When the number of media types is small and the workflow options (rel)
limited, apps like this are relatively easy to build. As the number of
media-types and rel options increases, so does the client coding challenge.
For these reasons, the Web browser has had great success by limiting its
media-type support to a handful of powerful options (HTML, CSS, JS, binaries),
recognizing only a few rel-types (rel="stylesheet", etc.), and relying
on humans to do the heavy lifting.
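[A minimal sketch of this rel-driven link selection; the representation shape and rel vocabulary here are invented for illustration:]

```python
# rel values this client understands; anything else is ignored rather than guessed at
KNOWN_RELS = {"payment", "cancel", "next"}

def choose_link(links, rel):
    """Return the URI for a given rel, or None if the client can't proceed that way."""
    if rel not in KNOWN_RELS:
        return None
    for link in links:
        if link["rel"] == rel:
            return link["href"]
    return None

# Hypothetical links from a server representation:
links = [
    {"rel": "payment", "href": "http://example.org/orders/1/payment"},
    {"rel": "cancel", "href": "http://example.org/orders/1"},
]
print(choose_link(links, "payment"))  # http://example.org/orders/1/payment
```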
mca
http://amundsen.com/blog/
On Fri, Apr 17, 2009 at 21:59, Jan Vincent Liwanag <jvliwanag@...>wrote:
>
>
> On 4/17/09 11:59 PM, Peter Williams wrote:
>
> On Fri, Apr 17, 2009 at 1:31 AM, jv.liwanag <jvliwanag@...> <jvliwanag@...> wrote:
>
>
> My concerns are:
> 1 - Let as assume another server app consuming a RESTful service. At
> start-up, it gets the links useful from the root URL then traverses them as
> necessary. Assuming there are elements, such as forms, these are probably
> stored as well. However, when the RESTful service evolves, say, changes the
> URIs, etc, the consumer's data would be outdated. How is this best handled?
>
> I could opt to always start each request with the root URL all the time,
> then follow the necessary links all the time. Of course, it'll be best to
> take advantage of caching and/or conditional GETs here.
>
>
> Starting at the top and working through the hypermedia is my preferred
> approach. With basic caching and conditional requests acceptable
> performance is quite easy to maintain.
>
>
>
> 2 - What's a good guideline on what stuff to watch out in the
> representations? I wouldn't want my representations to always adhere to a
> specific schema so as not to hinder its evolution. But some things have to
> be kept constant for older REST clients on the same service working right?
> What's a good guideline for those? (i.e. a specific XPath will always point
> to a specific information regardless of whatever revisions the service goes
> through.)
>
>
> I have not built any clients that use XML base services, but for
> clients that use JSON representations i have used a very similar
> approach. Basically, creating domain objects by making requests and
> extracting each individual piece of the data i wanted by name, or
> path, and storing them in instance variables in the object. In XML,
> using XPath would be equivalent so i expect that would work pretty
> well.
>
>
> My concern about using XPath though (or traversing objects using '.' in
> JSON) is that I can't freely change my representation. Say, if I wanted to
> change from
>
> {'first_name':'jv', 'last_name':'liwanag'}
>
> to
>
> {'name':{'first':'jv', 'last':'liwanag'}}
>
> on a system that is already deployed.
>
> I was wondering if there are good guidelines/tools my clients can use so
> that it can handle that type of change. I was looking recently at WADL and
> it does offer a good solution to changing URLs and request parameters. I was
> wondering if there is a good tool to anticipate changing representations as
> well.
>
> In XML, a (possibly bad) idea I can think of is to give the users a fixed
> schema then have stylesheets ready to transform the XML if a change is
> present. Maybe create a workable standard which defines the stylesheets for
> the resources that changed.
>
> --
> Peter Williamshttp://barelyenough.org
>
> Jan Vincent Liwanag
>
>
>
>
[ oops, this and another one from Mike was caught as spam but I missed it - sorry, MB ] I favor signaling version details via the media-type. First, if multiple versions exist, then the client must have a way to indicate the desired version. This can be done in the metadata (HTTP headers) or the URI, but not the body. Versioning via URI runs the risk of breaking existing clients that need to stay with an older version. Versioning via URI also greatly complicates hypermedia sent with each resource representation. In effect, a change in version threatens to create an entire new URI namespace, even if only a few URIs need to change; even if only a few resource representations need to change. Versioning via metadata is much less disruptive since it allows clients that are not concerned w/ versioning changes (i.e. simple GET clients negotiating for HTML representations) to continue to use the URIs they may have stored in the past (think about a link list posted to a blog or forum). Versioning via media-type has the benefit of using existing well-known tech already in place for clients and servers (no custom header needed) and allows clients to negotiate for the exact version they need *per URI* if that is appropriate for that client. It also allows the server to clearly indicate supported versions for each URI (via the OPTIONS method). It also makes it easier to roll out minor modifications or customizations in the workflow as only a few resource representations may need new hypermedia links (I've had many cases where only a single resource needed new hypermedia links). Finally, versioning via media-type eases demands on caching intermediaries since it does not balloon the number of URIs that might be cacheable.
It also reduces frustration within organizations that make use of reverse DNS and other security proxies that use URI space as a way to limit access to external clients (Just this week I wrestled with a major int'l company that *refused* to allow a new URI for a version update since it would take additional time and money to properly provision the proxies). mca http://amundsen.com/blog/ On Wed, Apr 15, 2009 at 09:48, Peter Williams <pezra@...> wrote: > --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: > > > > Peter Williams wrote: > > > --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@> wrote: > > > > > >> Why should a client or server care about explicit versioning? What > does it > > >> buy you? You say that "representations are highly likely to have > changed." > > >> Shouldn't a RESTful interaction inherently handle those changes > gracefully? > > >> > > > > > > REST does provide a rather graceful way to handle versioning. By > exposing the application semantics explicitly in the representations, the > application semantics can be changed just by changing the representations. > Of course, this has the potential to break clients if such changes are > implemented unilaterally. > > > > > > HTTP's content negotiation provides a powerful implementation of this > approach. The client and server get to negotiate which of the available > flavors of representations, and thereby application semantics, to use. The > server can prevent breakage by supporting multiple flavors/versions of the > representations simultaneously. > > > > > > > Would you do this with custom versioned media types, or with standard > > media types and an extra version header? > > > > Application-specific media types that change as the API is versioned are > definitely the way to go. For example, > "application/vnd.mycompany.fancyapp-v1+json".
> > Sticking the version in a parameter is not acceptable because parameters > are not allowed to change the basic meaning of a media type. I don't like > the version header because it is non-standard, and therefore ignored by the > content negotiation support that exists in both HTTP client and server > infrastructure. > > Peter > http://barelyenough.org
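[Server-side, negotiating on a versioned media type can be as simple as scanning the Accept header. The media-type names follow Peter's example; the parsing is a deliberately naive sketch that ignores q-values:]

```python
SUPPORTED = [
    "application/vnd.mycompany.fancyapp-v2+json",  # newest first
    "application/vnd.mycompany.fancyapp-v1+json",
]

def negotiate(accept_header):
    """Return the best supported media type the client accepts, else None (406)."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in SUPPORTED:
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None  # caller should respond 406 Not Acceptable

print(negotiate("application/vnd.mycompany.fancyapp-v1+json"))
# application/vnd.mycompany.fancyapp-v1+json
```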
Bill Burke wrote: > > > > Why not send back a particular error representation format with your 403 > response? > ... Such as in <http://greenbytes.de/tech/webdav/rfc4918.html#rfc.section.16>... Note that the response could be sent with an XSLT PI, so browsers could still display something meaningful. BR, Julian
Interesting question. Here are some thoughts, definitely incomplete,
for checking whether something uses HTTP and other Web standards
RESTfully:
- Do URIs identify things (resources)? Never mind the characters
they're made of, but check whether they identify e.g. a customer, an
email, an account, a step in a process, a shopping cart … or is there
just one URI (or a small number of URIs) and the identification
contained in the message?
- Can one apply more than one method to these resources? There might
be a few you can only GET, but probably most of those supporting POST,
PUT or DELETE should also support GET
- Is GET ever used in an "unsafe" way, or can a client follow links via
GET without fear of regret?
- Do I need a description of the URI structure, or can I navigate from
resource to resource via hyperlinks contained in resource
representations?
- Is what I can do to a resource at any particular point in time
determined by what links are available in the representation?
- Are at least some resources cacheable and provide appropriate Cache-
control, Expires and/or ETag headers?
- Is a conditional GET supported, based on ETags or last modification
date?
- Similarly, are conditional PUT or DELETE requests supported?
- Are the different HTTP response codes used in line with their
meaning in the spec? Or is everything "tunneled" through 200 and 500?
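[The conditional-GET check from the list above, sketched server-side with a hypothetical handler; the essential move is the 304 short-circuit when the client's ETag still matches:]

```python
def handle_get(if_none_match, current_etag, body):
    """Return (status, body) for a GET carrying an If-None-Match header."""
    if if_none_match == current_etag:
        return 304, None   # Not Modified: representation not re-sent
    return 200, body       # full representation, sent along with its ETag

status, payload = handle_get('"abc123"', '"abc123"', b"<person/>")
print(status)  # 304
```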
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 29.04.2009, at 04:57, cule_barca wrote:
>
>
> I've just started with REST. So there are some difficuties for me to
> identify if a system/website is restul or not?
> Are there any methods(the way) to do it clearly/basicly ?
> Example: is gmail restful system? How to know it?
>
> Please help me !
>
>
It's certainly easier to identify that a system/website *is not* RESTful
than that it *is* RESTful...
On Apr 29, 2009 9:11am, Stefan Tilkov <stefan.tilkov@innoq.com> wrote:
> Interesting question. Here are some thoughts, definitely incomplete,
> for checking whether something uses HTTP and other Web standards
> RESTfully:
> - Do URIs identify things (resources)? Never mind the characters
> they're made of, but check whether they identify eg a customer, an
> email, an account, a step in a process, a shopping cart … or is there
> just one URIs (or a small number of URIs) and the identification
> contained in the message?
> - Can one apply more than one method to these resources? There might
> be a few you can only GET, but probably most of those supporting POST,
> PUT or DELETE should also support GET
> - Is a GET used in an "unsafe" way, ie can a client follow links via
> GET without fear of regret?
> - Do I need a description of the URI structure, or can I navigate from
> resource to resource via hyperlinks contained in resource
> representations?
> - Is what I can do to a resource at any particular point in time
> determined by what links are available in the representation?
> - Are at least some resources cacheable and provide appropriate Cache-
> control, Expires and/or ETag headers?
> - Is a conditional GET supported, based on ETags or last modification
> date?
> - Similarly, are conditional PUT or DELETE requests supported?
> - Are the different HTTP response codes used in line with their
> meaning in the spec? Or is everything "tunneled" though 200 and 500?
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
> On 29.04.2009, at 04:57, cule_barca wrote:
> >
> >
> > I've just started with REST. So there are some difficuties for me to
> > identify if a system/website is restul or not?
> > Are there any methods(the way) to do it clearly/basicly ?
> > Example: is gmail restful system? How to know it?
> >
> > Please help me !
> >
> >
> >
2009/4/29 cule_barca <vantu.ituns@...>: > Would you please tell me the way to identify that a system is not restful ? > I don't think I'm the correct person to answer that, but I can tell you what I *think* about it. REST is characterized by a number of architectural constraints: Client/server model Stateless protocols Caching Uniform Interface Layering Optional Code-on-demand and by a number of interface constraints: Identification of resources Manipulation of resources through representations Self-descriptive messages Hypermedia as the engine of application state. (I hope I got those right) So you can start at the first and check if the system complies with that, and so on... If it fails one of those, it is not RESTful. If it complies with all, *maybe* you can call it RESTful :) But remember that "being RESTful" is not like the Holy Grail... _______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota _______________________________________________
I was reading some articles about the Asynchronous Web, and a question popped into my mind: what are the implications of the Asynchronous Web for a REST-based architecture? And I mean that from the architectural point of view, not on implementations of asynchronous notifications from resources, or other similar implementation "tricks". What will be the implications for the architectural constraints and interface constraints of REST? Client/server model Stateless protocols Caching Uniform Interface Layering Optional Code-on-demand Identification of resources Manipulation of resources through representations Self-descriptive messages Hypermedia as the engine of application state. _______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota _______________________________________________
Check out Rohit Khare's[1] "Asynchronous, Routed REST with Estimates and decentralized Decision functions (ARRESTED)[2]" mca http://amundsen.com/blog/ [1] - http://www.ics.uci.edu/~rohit/ [2] - http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf 2009/4/30 António Mota <amsmota@...> > I was reading some articles about Asynchronous Web, and the question > popped on my mind, what are the implications of Asynchronous Web on a > REST-based architecture? > > And I mean that from the architectural point-of-view, not on > implementations of asynchronous notifications from resources, or other > similar implementation "tricks". > > What will be the implications on the architectural constraints and > interface constraints of REST? > > Client/server model > Stateless protocols > Caching > Uniform Interface > Layering > Optional Code-on-demand > > Identification of resources > Manipulation of resources through representations > Self-descriptive messages > Hypermedia as the engine of application state. > > _______________________________________________ > > Melhores cumprimentos / Beir beannacht / Best regards > > António Manuel dos Santos Mota > > _______________________________________________
I have yet to read that paper (thanks for the link), but for now everything (not much) I found seems to treat some kind of "Asynchronous REST" as an extension of the simple request/response scheme, by allowing some kind of "deferred" response, or "notifications". I was thinking more in terms of push technology, in which a request can be started by the server instead of the client. What does that do to our constraints? Client/server model -> breaks (in the sense that client/server is equivalent to request/response) Stateless protocols -> breaks? probably not Caching -> breaks? Uniform Interface -> doesn't necessarily break, but it could Layering -> doesn't necessarily break Optional Code-on-demand -> definitely doesn't break Identification of resources -> doesn't break Manipulation of resources through representations -> doesn't break, although the term "manipulation" here may not be the best Self-descriptive messages -> doesn't break? Hypermedia as the engine of application state -> it could break, and it probably will However, if it breaks some of the constraints you can't call it REST, so basically the question is: can REST be the architectural style of an Asynchronous Web application, or do we need a "revised" REST, or even a completely new style? On May 1, 2009 2:25am, mike amundsen <mamund@yahoo.com> wrote: > Check out Rohit Khare's[1] "Asynchronous, Routed REST with Estimates and > decentralized Decision functions (ARRESTED)[2]" > mca > http://amundsen.com/blog/ > [1] - http://www.ics.uci.edu/~rohit/ > [2] - http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf > 2009/4/30 António Mota amsmota@gmail.com> > I was reading some articles about Asynchronous Web, and the question > popped on my mind, what are the implications of Asynchronous Web on a > REST-based architecture? > And I mean that from the architectural point-of-view, not on > implementations of asynchronous notifications from resources, or other > similar implementation "tricks".
> What will be the implications on the architectural constraints and > interface constraints of REST? > Client/server model > Stateless protocols > Caching > Uniform Interface > Layering > Optional Code-on-demand > Identification of resources > Manipulation of resources through representations > Self-descriptive messages > Hypermedia as the engine of application state. > _______________________________________________ > Melhores cumprimentos / Beir beannacht / Best regards > António Manuel dos Santos Mota > _______________________________________________
If "Asynchronous Web" means Servlet 3.0 and async request processing then:
IMO, nothing should change for the client. It should be using the HTTP
APIs that come with the language/platform.
It should be in some kind of while loop:
while (true) {
    response = http.get()  // probably POST, since we're consuming messages
    doEvent(response)
}
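[The loop above, made concrete as a self-contained Python sketch; a queue stands in for the blocking HTTP request, since the shape of the consuming client is what matters here:]

```python
import queue

inbox = queue.Queue()  # stands in for the server-side message stream

def http_get(timeout=1.0):
    """Stand-in for a blocking HTTP request that yields the next message."""
    return inbox.get(timeout=timeout)

events = []
def do_event(response):
    events.append(response)

inbox.put("message-1")
inbox.put("message-2")
for _ in range(2):        # a real client would loop forever
    do_event(http_get())
print(events)  # ['message-1', 'message-2']
```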
The whole point of async HTTP is to save threads that would otherwise
block; this is its real performance benefit. Comet protocols tunnel
themselves through HTTP by taking advantage of keep-alive semantics.
With these APIs, a client library is required and you're not really
using HTTP. You're only using it as a connection setup mechanism.
There are some small performance benefits to this. Once a connection is
set up, the server doesn't have to re-route incoming requests and can
just stream messages to the client.
BTW, I have done some async HTTP abstractions:
http://www.jboss.org/file-access/default/members/resteasy/freezone/docs/1.1-RC2/userguide/html/Asynchronous_HTTP_Request_Processing.html
amsmota@... wrote:
> I have yet to read that paper (thanks for the link), but for now
> everything (not much) I found seems to treat some kind of "Asynchronous
> REST" as an extension of the simple request/response scheme, by allowing
> some kind of "deferred" response, or "notifications".
>
> I was thinking more in terms of push technology, in which a request can
> be started by the server instead of the client. What does that do to our
> constraints?
>
> Client/server model -> breaks (in the sense of client/server being
> equivalent to request/response)
> Stateless protocols -> breaks? probably not
> Caching -> breaks?
> Uniform Interface -> doesn't necessarily break, but it could
> Layering -> doesn't necessarily break
> Optional Code-on-demand -> definitely doesn't break
>
> Identification of resources -> doesn't break
> Manipulation of resources through representations -> doesn't break,
> although the term "manipulation" here may not be the best
> Self-descriptive messages -> doesn't break?
> Hypermedia as the engine of application state -> it could break, and it
> probably will break
>
> However, if it breaks some of the constraints you can't call it REST, so
> basically the question is: can REST be the architectural style of an
> Asynchronous Web application, or do we need a "revised" REST, or even a
> completely new style?
>
> On May 1, 2009 2:25am, mike amundsen <mamund@...> wrote:
> > Check out Rohit Khare's[1] "Asynchronous, Routed REST with Estimates
> > and decentralized Decision functions (ARRESTED)"[2]
> >
> > mca
> > http://amundsen.com/blog/
> >
> > [1] - http://www.ics.uci.edu/~rohit/
> > [2] - http://www.ics.uci.edu/~rohit/ARRESTED-ICSE.pdf
> >
> > 2009/4/30 António Mota <amsmota@...>:
> > I was reading some articles about the Asynchronous Web, and the question
> > popped into my mind: what are the implications of the Asynchronous Web
> > on a REST-based architecture?
> >
> > And I mean that from the architectural point of view, not
> > implementations of asynchronous notifications from resources, or other
> > similar implementation "tricks".
> >
> > What will be the implications on the architectural constraints and
> > interface constraints of REST?
> >
> > Client/server model
> > Stateless protocols
> > Caching
> > Uniform Interface
> > Layering
> > Optional Code-on-demand
> >
> > Identification of resources
> > Manipulation of resources through representations
> > Self-descriptive messages
> > Hypermedia as the engine of application state.
> >
> > _______________________________________________
> > Melhores cumprimentos / Beir beannacht / Best regards
> > António Manuel dos Santos Mota
> > _______________________________________________
> >
> > ------------------------------------
> > Yahoo! Groups Links
> >
> > To visit your group on the web, go to:
> > http://groups.yahoo.com/group/rest-discuss/
> >
> > Your email settings:
> > Individual Email | Traditional
> >
> > To change settings online go to:
> > http://groups.yahoo.com/group/rest-discuss/join
> > (Yahoo! ID required)
> >
> > To change settings via email:
> > mailto:rest-discuss-digest@yahoogroups.com
> > mailto:rest-discuss-fullfeatured@yahoogroups.com
> >
> > To unsubscribe from this group, send an email to:
> > rest-discuss-unsubscribe@yahoogroups.com
> >
> > Your use of Yahoo! Groups is subject to:
> > http://docs.yahoo.com/info/terms/
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Whoops. I thought this was a different (Java) list. Sorry for the Java
spin on this.

I guess I should comment more generically then. I don't think an
Asynchronous Web would:

* break the constraints (the constraints would just be different:
  send/receive instead of put/post/get);
* break HATEOAS (in fact, IMO HATEOAS would thrive just as well in an
  asynchronous environment).

Caching would seem to break, but you're not really doing read operations
that are cacheable in an asynchronous environment. For example, are
emails cacheable?
Bill Burke wrote:
> If "Asynchronous Web" means Servlet 3.0 and async request processing,
> then IMO, nothing should change for the client. It should use the HTTP
> APIs that come with its language/platform, in some kind of while loop:
>
> while (true) {
>     response = http.get(); // probably post, since we're consuming messages
>     doEvent(response);
> }
>
> The whole point of async HTTP is to save threads that would otherwise
> block; that is its real performance benefit. Comet protocols tunnel
> themselves through HTTP by taking advantage of keep-alive semantics.
> With those APIs, a client library is required and you're not really
> using HTTP; you're only using it as a connection-setup mechanism.
> There are some small performance benefits to this: once a connection is
> set up, the server doesn't have to re-route incoming requests and can
> just stream messages to the client.
>
> BTW, I have done some async HTTP abstractions:
> http://www.jboss.org/file-access/default/members/resteasy/freezone/docs/1.1-RC2/userguide/html/Asynchronous_HTTP_Request_Processing.html
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
I wonder why nobody has picked up using email, smtp/pop3, as an
asynchronous protocol for the Internet. It has scaled pretty well. It
has a constrained interface. It has a strong infrastructure base of
tools. It is representation oriented and media-type aware. Pretty
ubiquitous.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
You mean, current email infrastructure has not scaled, does not have
strong tools, and is not media type aware?

Subbu

On May 1, 2009, at 5:34 AM, Bill Burke wrote:

> I wonder why nobody has picked up using email, smtp/pop3, as an
> asynchronous protocol for the Internet. It has scaled pretty well. Has
> a constrained interface. Has a strong infrastructure base of tools. Is
> representation oriented and media-type aware. Pretty ubiquitous.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com

---
http://subbu.org
Harm (as “harmaarts”) wrote (in
<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12509>):

> [My service] can return a [response code of “403”] for several
> reasons.

I am very curious about those reasons. I have never encountered or
imagined a situation in which a response code of “403” would be a good
choice. (I don’t claim that there are no such situations.) The solution
to your problems might be to use other response codes.

(RFC 1945, “Hypertext Transfer Protocol -- HTTP/1.0”
[<http://www.rfc-editor.org/rfc/rfc1945.txt>], defines response code
“403” in section 9.4, “Client Error 4xx”
[<http://tools.ietf.org/html/rfc1945#section-9.4>]. RFC 2068,
“Hypertext Transfer Protocol -- HTTP/1.1”
[<http://www.rfc-editor.org/rfc/rfc2068.txt>], defines response code
“403” in section 10.4.4, “403 Forbidden”
[<http://tools.ietf.org/html/rfc2068#section-10.4.4>]. RFC 2616,
“Hypertext Transfer Protocol -- HTTP/1.1”
[<http://www.rfc-editor.org/rfc/rfc2616.txt>], defines response code
“403” in section 10.4.4, “403 Forbidden”
[<http://tools.ietf.org/html/rfc2616#section-10.4.4>,
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.4>].
The effort to produce a successor to RFC 2616 in the “httpbis” (sic)
working group [<http://tools.ietf.org/wg/httpbis/>] includes the
Internet-Draft draft-ietf-httpbis-p2-semantics, “HTTP/1.1, part 2:
Message Semantics”
[<http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics>], whose
current revision defines response code “403” in section 8.4.4, “403
Forbidden”
[<http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-06#section-8.4.4>].)

> In the body of the response I have a human readable error message
> explaining why things went awry.

You should do so and you have done so, which puts you ahead of the pack.
Bravo!
> [The intermediary user, a developer of a service that uses the service
> in question,] wants to act [differently] on different types of
> [“403”] response codes[,] [if] only to notify the [end] user in
> different ways. My error message can not (always) be copied directly
> to their [front end].

Examples of the error messages that your service offers would help.
Examples of the error messages that the intermediary service offers
would help.

> What is he to do? Match on the [possibly] changing [human-readable]
> error message? Obviously this is very brittle.

Yes, it is brittle and, thus, inadvisable.

> Pass my error message to their [front end]? More often than not this is
> not desirable.

In the absence of examples of the error messages that the respective
services offer, I’m left to guess why your service’s error messages are
so undesirable to the end user.

> Or should I include an error identifier in the response body?

If you decide to keep to a single response code for several categories
of error, then you should include clear identifiers of the particular
errors in the entity of the responses. Given that the entity metadata
can carry the error identifiers, including those error identifiers in
the entity body is not necessary. The optimal choice of one, the other,
or both depends heavily on your choice of content types.

> This introduces a fair bit of coupling[,] which I don't like at all and
> [which] feels very unRESTful.

I admit that you would be walking a fine line at the edge of REST’s
uniform interface, but declining to use specific response codes leaves
you at that edge.

--
Please do not include my address in public replies. I will read public
replies on the list.
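One way to carry such an identifier in the entity body is sketched below as a hypothetical XML representation. The element names, the error code value, and the idea of a dedicated media type are all illustrative assumptions, not something proposed in this thread:

```xml
<!-- Hypothetical error entity. A client can dispatch on the
     machine-readable <code> rather than on the changing,
     human-readable <message>. -->
<error>
  <code>account-suspended</code>
  <message xml:lang="en">This account is suspended; contact support.</message>
</error>
```

An intermediary can then branch on the code while passing the message through, or replacing it, for the end user.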
Thanks, David. This sounds very interesting - what
technologies/frameworks do you use for such scenarios?

Thanks

On Tue, Apr 21, 2009 at 5:57 PM, David Hodge <david.hodge@...> wrote:

> Anand,
> I have a Case Study/example for you. I am just wrapping up a project
> that had disparate applications, one with MySQL and the other with SQL
> Server. We used Semantic ReSTful Web Services to generate Web Feeds
> (Atom and RSS) to indicate the changes to resources made in one
> database. We used the feeds, which had RDFa markup, to tell our program
> what kind of data was in our feeds. The program was then able to
> generate SQL to insert into the other database.
> I can see this technique applied on other projects. The same ReSTful
> Web Services were also used to display HTML. Multiple representations
> of a resource are one benefit of the ReST style. The main benefit of
> ReSTful is that it allows for scalability, because it does not rely on
> maintaining state and makes caching easier.
>
> One more thing, ReSTful Web Services are much easier on the programmer
> to develop.
>
> David Yuctan Hodge, Partner
> Lucid Technics, LLC - Think Clear. Think Lucid.
> www.lucidtechnics.com
> Phone 703.798.9067
> Fax 703.563.6279
>
> On Tue, Apr 21, 2009 at 3:18 PM, rcanand <rcanand@...> wrote:
>> Hi,
>>
>> I wondered if anyone had links to case studies/examples of enterprises
>> that use REST to build services and what benefits they gained from it.
>>
>> Thanks
>> Anand
Hi,

Can anyone share their experiences with designing REST APIs using
MVC-style frameworks (such as Rails, Django, PHP MVC, etc.)?

There seem to be two ways to design such APIs when accompanied with a
UI:
1) APIs and UIs in separate spaces (having a separate URI path for the
API versus the UI for the same resource)
2) The same URI for each resource, but using some form of content
negotiation to return UI formats like HTML or API formats like XML.

Do you have any thoughts on the advantages and disadvantages of using
either approach?

Thanks
Anand
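Option 2 can be sketched as a pair of hypothetical HTTP exchanges (the host, path, and representations are illustrative): the same URI serves both representations, selected by the client's Accept header, with a Vary header telling caches that the response depends on it.

```
GET /widgets/23 HTTP/1.1
Host: myapp.example
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html
Vary: Accept


GET /widgets/23 HTTP/1.1
Host: myapp.example
Accept: application/xml

HTTP/1.1 200 OK
Content-Type: application/xml
Vary: Accept
```

Under option 1 the two requests would instead go to distinct URIs (say /widgets/23 and /api/widgets/23), and no negotiation or Vary header is involved.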
On Fri, May 1, 2009 at 2:16 PM, rcanand <rcanand@...> wrote:
>
>
> Hi,
>
> Can anyone share their experiences with designing REST APIs using MVC style
> frameworks (such as Rails, Django,PHP MVC, etc.)?
>
> There seem to be two ways to design such APIs when accompanied with UI -
> 1) APIs and UIs in separate spaces (having a separate URI path for API
> versus UI for the same resource)
> 2) Same URI for each resource, but using some form of content negotiation to
> return UI formats like html or API formats like XML.
>
> Do you have any thoughts on the advantages and disadvantages of using either
> approach?
>
Hi Anand-
I went back and forth on this very thing, and ultimately decided to mix
everything together. The URI path only supplies the "model" (actually
the controller, but those generally map to domain models), and the
controller itself dispatches on method (GET, PUT, POST, DELETE),
resource (we use URI regex-based routing within each controller), and
format. Formats are specified by extension (.json, .html, .atom).

So the "widgets" controller has a routing map:

$routes = array(
    '/' => 'widgets',            // http://myapp/widgets
    '/{widget_id}' => 'widget'   // http://myapp/widgets/23
);

And it defines functions by a convention, {method}{resource}{format},
for example:

function getWidgetJson($request) {
    // return a JSON representation of one widget
}

function putWidgetAtom($request) {
    $widget_id = $request->get('widget_id');
    // etc....
}

function getWidgetsHtml($request) {
    // create HTML displaying the list of widgets
}

function postToWidgets($request) {
    $posted = $request->getBody();
    $mime_type = $request->getMediaType();
    // dispatch here based on the mime type of the posted resource
    // (usually Atom)
    // etc....
}

I have found this organizational structure to work well. The main
problem (which I did not show here) is the need to define the
authentication type either per handler or (as is usually the case)
per method. We use three auth types: HTTP Basic, a cookie-based
auth, and no auth. Other than that, it works well.
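The {method}{resource}{format} naming convention Peter describes can be sketched outside PHP as well. Here is a minimal, hypothetical Java translation (class and handler names are illustrative, not from his framework): it builds a handler key from the HTTP method, resource, and format, and looks the handler up in a map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ConventionDispatcher {

    // Handlers take a request (here just the widget id) and return a body.
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    // Build the handler key the same way the convention builds function names:
    // {method}{Resource}{Format}, e.g. GET + widget + json -> "getWidgetJson".
    static String key(String method, String resource, String format) {
        return method.toLowerCase()
                + Character.toUpperCase(resource.charAt(0)) + resource.substring(1)
                + Character.toUpperCase(format.charAt(0)) + format.substring(1);
    }

    void register(String method, String resource, String format,
                  Function<String, String> handler) {
        handlers.put(key(method, resource, format), handler);
    }

    String dispatch(String method, String resource, String format, String request) {
        Function<String, String> h = handlers.get(key(method, resource, format));
        if (h == null) {
            return "no handler for " + key(method, resource, format);
        }
        return h.apply(request);
    }

    public static void main(String[] args) {
        ConventionDispatcher d = new ConventionDispatcher();
        d.register("GET", "widget", "json", id -> "{\"widget\": \"" + id + "\"}");
        d.register("GET", "widgets", "html", id -> "<ul><li>widget list</li></ul>");

        System.out.println(d.dispatch("GET", "widget", "json", "23"));
        System.out.println(d.dispatch("GET", "widgets", "html", ""));
    }
}
```

A real framework would derive the three key parts from the request line (method, matched route, URI extension) instead of passing them explicitly.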
--peter keane
> Thanks
> Anand
>
>
António Mota wrote (in
<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12519>):

> [REST] is characterized by a number of achitectural constraints:
>
> Client/server model[,]
> Stateless protocols[,]
> Caching[,]
> Uniform Interface[,]
> Layering[,]
> Optional Code-on-demand[;]
>
> and by a number of interface constraints[:]
>
> Identification of resources[,]
> Manipulation of resources through representations[,]
> Self-descriptive messages[, and]
> Hypermedia as the engine of application state.

What you list as “achitectural constraints” are architectural styles.
“An architectural style is a coordinated set of architectural
constraints that restricts the roles/features of architectural elements
and the allowed relationships among those elements within any
architecture that conforms to that style.” (as defined at the beginning
of section 1.5, “Styles”
[<http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_5>],
of ASATDONBSA). What you list as “interface constraints” are
architectural constraints.

> If [a system] complies with all [of the constraints of REST, then maybe]
> you can call it [RESTful.]

Why did you emphasize the word “maybe”? There are no circumstances in
which a system that complies with the constraints of REST is not
RESTful, yet you imply that there are such circumstances.

--
Please do not include my address in public replies. I will read public
replies on the list.
Anand Ramanathan (as “rcanand”) wrote (in
<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12529>):

> There seem to be two ways to design such APIs when accompanied with UI[:]
> 1) APIs and UIs in separate spaces (having a separate URI path for API
> versus UI for the same resource)
> 2) Same URI for each resource, but using some form of content
> negotiation to return UI formats like html or API formats like XML.

If the separation of a user interface from an application-programming
interface entails the disjunction of the respective sets of resource
identifiers, and if the user interface offers representations that
consistently differ from the representations that the
application-programming interface offers (the difference presumably
being the main thrust of the separation), then the set of resources
behind the user interface is disjoint from the set of resources behind
the application-programming interface. In other words, it ain’t “the
same resource”. The wisdom of the separation depends on context.

What makes HTML a format suitable for user interfaces, (some flavor of)
XML a format suitable for application-programming interfaces, and not
vice versa? Where does XHTML fall?

Practical obstacles to successful content negotiation aside, exposing
only one resource identifier as the sole point of access to
representations that are contemporaneous and that differ substantially
from each other seems to me like a violation of the uniform interface in
the Representational State Transfer.

I suggest a design that you did not mention: let the user interface be
the application-programming interface and let the
application-programming interface be the user interface.

--
Please do not include my address in public replies. I will read public
replies on the list.
Thanks, Peter.
That was very useful. Is your API built with this model public? It
would be useful to play with it to get a first hand experience.
Thanks much
Anand
On Fri, May 1, 2009 at 1:14 PM, Peter Keane <pkeane@mail.utexas.edu> wrote:
> I went back and forth on this very thing, and ultimately decided to mix
> everything together. The URI path only supplies the "model" (actually
> the controller, but those generally map to domain models), and the
> controller itself dispatches on method (GET, PUT, POST, DELETE),
> resource (we use URI regex-based routing within each controller), and
> format. Formats are specified by extension (.json, .html, .atom).
>
> I have found this organizational structure to work well. The main
> problem (which I did not show here) is the need to define the
> authentication type either per handler or (as is usually the case)
> per method. We use three auth types: HTTP Basic, a cookie-based
> auth, and no auth. Other than that, it works well.
>
> --peter keane
In other words, if you use resource.<format> in the URL it is a separate
resource; if you use the same URI with different content types, it is
the same resource with different representations. I think that is your
point, and I agree. I believe you are also suggesting that using the
latter is preferable, having a single way to access resources.

Part of my original question was why some people choose other approaches
(such as the former one above) and what advantages they see in doing so.

Thanks
Anand

On Fri, May 1, 2009 at 2:39 PM, Etan Wexler <yahoo.com@...> wrote:

> Anand Ramanathan (as “rcanand”) wrote (in
> <http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12529>):
>
> If the separation of a user interface from an application-programming
> interface entails the disjunction of the respective sets of resource
> identifiers, and if the user interface offers representations that
> consistently differ from the representations that the
> application-programming interface offers, then the set of resources
> behind the user interface is disjoint from the set of resources behind
> the application-programming interface. In other words, it ain’t “the
> same resource”.
>
> I suggest a design that you did not mention: let the user interface be
> the application-programming interface and let the
> application-programming interface be the user interface.
Anand Ramanathan wrote (in
<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12534>):

> In other words, if you use resource.<format> in the URL it is a separate
> resource, if you use the same URI with different content types, it is
> the same resource with different representations. I think that is your
> point, and I agree.

What you roughly describe here is one point of several that I made.

> I believe you are also suggesting that [exposing a single
> resource identifier for substantially different series of
> representations is preferable.]

I am suggesting quite the opposite. You described two approaches, of
which I most dislike the approach that amounts to hiding resources,
forcing them to share a given identifier. I described an approach that I
prefer over the two that you described. I’ll rephrase: On the World Wide
Web, don’t separate user interfaces from application-programming
interfaces. At least consider what such separation would achieve that
you can’t achieve with a combined interface.

--
Please do not include my address in public replies. I will read public
replies on the list.
On May 1, 2009, at 3:34 PM, Etan Wexler wrote:

> On the World Wide Web, don’t separate user interfaces from
> application-programming interfaces. At least consider what such
> separation would achieve that you can’t achieve with a combined
> interface.

Do you have any examples of apps built without such separation?

Thanks
Subbu
---
http://subbu.org
I was trying to answer a question that was directed to me, not writing a
treatise, and I did so in an expeditious manner, off the top of my head.
I didn't know that to post in here one has to be so "purist" with
terminology; maybe I have to read the entire REST dissertation before I
post something.

What you call a misunderstanding is simply an imprecise use of
terminology. I wrote "architectural constraints" instead of "sets of
architectural constraints" and I said "interface constraints" instead of
"architectural constraints of the uniform interface". Damn these
simplifications....

However, what strikes me is why you were so quick to point out such
terrible faults, and what did you say in response to the original
question? Nothing!!!

Gee, I thought this list was about trying to help other people with
their questions, even if, as I said in the original post, maybe I'm not
the best person to do it... But then again, it is much easier to point
to others' mistakes than to point to correct answers...

_______________________________________________
Melhores cumprimentos / Beir beannacht / Best regards
António Manuel dos Santos Mota
_______________________________________________

2009/5/1 Etan Wexler <yahoo.com@...>:

> António Mota wrote (in
> <http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12519>):
>
>> [REST] is characterized by a number of achitectural constraints: [...]
>> and by a number of interface constraints[.]
>
> What you list as “achitectural constraints” are architectural styles.
> “An architectural style is a coordinated set of architectural
> constraints that restricts the roles/features of architectural elements
> and the allowed relationships among those elements within any
> architecture that conforms to that style.” (as defined at the beginning
> of section 1.5, “Styles”
> [<http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_5>],
> of ASATDONBSA). What you list as “interface constraints” are
> architectural constraints.
>
>> If [a system] complies with all [of the constraints of REST, then maybe]
>> you can call it [RESTful.]
>
> Why did you emphasize the word “maybe”? There are no circumstances in
> which a system that complies with the constraints of REST is not
> RESTful, yet you imply that there are such circumstances.
>
> --
> Please do not include my address in public replies. I will read public
> replies on the list.
António: You are always welcome here.

mca
http://amundsen.com/blog/

2009/5/1 António Mota <amsmota@...>:

> I was trying to answer a question that was directed to me, not writing
> a treatise, and I did so in an expeditious manner, off the top of my
> head. I didn't know that to post in here one has to be so "purist"
> with terminology; maybe I have to read the entire REST dissertation
> before I post something.
>
> Gee, I thought this list was about trying to help other people with
> their questions... But then again, it is much easier to point to
> others' mistakes than to point to correct answers...
On Fri, May 1, 2009 at 6:11 PM, Subbu Allamaraju <subbu@...> wrote: > > > On May 1, 2009, at 3:34 PM, Etan Wexler wrote: > >> On the World Wide Web, don’t separate user interfaces from >> application-programming interfaces. At least consider what such >> separation would achieve that you can’t achieve with a combined >> interface. > > Do you have any examples of apps built without such separation? > I'm not sure if this is exactly what you mean, but in our app a handler (i.e., controller) services any request, whether for a web page, JSON data, POST/PUT/DELETE, etc. (i.e., the API & website are the same thing) http://code.google.com/p/dase/source/browse/trunk/lib/Dase/Handler/Item.php (looks like getItem() is the only function returning HTML in this example) --peter keane > Thanks > Subbu > --- > http://subbu.org > >
Thanks for the link. Just wondering how far one could take that since human-machine interactions and machine-machine interactions differ significantly in practice. Subbu On May 1, 2009, at 6:36 PM, Peter Keane wrote: >> Do you have any examples of apps built without such separation? >> > > I'm not sure it this is exactly what you mean, but in our app a > handler (i.e., controller) services any request, whether for a web > page, json data, POST/PUT/DELETE, etc. (i.e., API & WebSite are the > same thing) > > http://code.google.com/p/dase/source/browse/trunk/lib/Dase/Handler/Item.php > > (looks like getItem() is the only function returning HTML in this > example) --- http://subbu.org
On 02.05.2009, at 04:22, Subbu Allamaraju wrote: > Just wondering how far one could take that since > human-machine interactions and machine-machine interactions differ > significantly in practice. I don't think they differ as much as you seem to believe they do, especially if the machine-to-machine interface is designed following HATEOAS. Of course there are practical problems, such as the fact that HTML supports only GET and POST, browsers don't support explicit setting of Accept headers or lack a logout option for HTTP Auth, but these restrictions are not restrictions of REST. E.g. if I'm writing an application client, say built using Java/Swing, that is driven by hypermedia contained in representations returned from the server – would you expect that there'd have to be a second server API for other clients? I don't think so. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
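Stefan's point about a hypermedia-driven client can be sketched roughly as follows. This is an illustrative Python sketch, not code from the thread; the XML shape is borrowed from the person/account example earlier in this archive, and the function name is invented:

```python
# Illustrative sketch: a client that starts from a single entry URI and
# follows links found in representations, instead of constructing URIs
# itself. The XML format and link structure below are assumptions.
import xml.etree.ElementTree as ET

ENTRY_DOC = """<person>
  <firstName>TONINHO</firstName>
  <lastName>METRALHA</lastName>
  <account href="http://localhost:8080/rest/data/bank/accounts/010123101">010123101</account>
</person>"""

def links_in(representation: str) -> dict:
    """Collect href-bearing elements; the client treats hrefs as opaque URIs."""
    root = ET.fromstring(representation)
    return {el.tag: el.get("href") for el in root.iter() if el.get("href")}

links = links_in(ENTRY_DOC)
# The client never parses or builds the account URI; it just follows the link.
print(links["account"])
```

The point is that only the entry URI is known in advance; everything else, whether the consumer is Swing, a browser, or a script, is discovered from the representations.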
Thoughts: a) The fact that SOAP has an SMTP binding possibly taints the whole idea with the aftertaste of SOAP. REST people view "protocol independence" as over-engineering, and try to stick with HTTP. b) With email the only "verb" is to send a message. The "real" verb will end up inside the message body, thus making it an RPC. (Unless you intend to make a uniform interface out of the SMTP commands themselves, like HELO and DATA... but at that low a level SMTP is not actually asynchronous, so why use it?) On Fri, May 1, 2009 at 8:34 AM, Bill Burke <bburke@...> wrote: > > > I wonder why nobody has picked up using email, smtp/pop3, as an > asynchronous protocol for the Internet. It has scaled pretty well. Has > a constrained interface. Has a strong infrastructure base of tools. Is > representation oriented and media-type aware. Pretty ubiquitous. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
Random thing: http://www.rabbitmq.com/ - an Erlang-based implementation of the AMQP (message queueing, etc.) protocol states that it has an experimental POP3 binding... Ben On Fri, 2009-05-01 at 08:34 -0400, Bill Burke wrote: > > > I wonder why nobody has picked up using email, smtp/pop3, as an > asynchronous protocol for the Internet. It has scaled pretty well. > Has > a constrained interface. Has a strong infrastructure base of tools. Is > representation oriented and media-type aware. Pretty ubiquitous. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > >
Anand, I have an open source framework called Hannibal. It generates code based on Java domain classes. ReSTful Web Services are central to the framework. It uses convention over configuration and routes the HTTP method to a default handler, or to a custom handler if available. It can generate Semantic Web Feeds as well. Take a look: http://code.google.com/p/hannibalcodegenerator/ Would love any feedback. David Yuctan Hodge, Partner Lucid Technics, LLC - Think Clear. Think Lucid. www.lucidtechnics.com Phone 703.798.9067 Fax 703.563.6279 On Fri, May 1, 2009 at 3:07 PM, Anand Ramanathan <rcanand@...> wrote: > Thanks, David. > > This sounds very interesting - what technologies/frameworks do you use for > such scenarios? > > Thanks > > > > On Tue, Apr 21, 2009 at 5:57 PM, David Hodge < > david.hodge@lucidtechnics.com> wrote: > >> Anand, >> I have a Case Study/example for you. I am just wrapping up a project that >> had disparate applications, one with MySQL and the other with SQL Server. >> We used Semantic ReSTful Web Services to generate Web Feeds (ATOM and RSS) >> to indicate the changes to resources made in one database. We used the >> feeds, which had RDFa markup, to tell our program what kind of data was in >> our feeds. The program was then able to generate SQL to insert into the >> other database. >> I can see this technique applied on other projects. The same ReSTful Web >> Services were also used to display HTML. Multiple representations of a >> resource are one benefit of the ReST style. The main benefit of ReST is that >> it allows for scalability, because it does not rely on maintaining state and >> makes caching easier. >> >> One more thing, ReSTful Web Services are much easier for the programmer to >> develop. >> >> David Yuctan Hodge, Partner >> Lucid Technics, LLC - Think Clear. Think Lucid. 
>> www.lucidtechnics.com >> Phone 703.798.9067 >> Fax 703.563.6279 >> >> >> >> On Tue, Apr 21, 2009 at 3:18 PM, rcanand <rcanand@...> wrote: >> >>> >>> >>> Hi, >>> >>> I wondered if anyone had links to case studies/examples of enterprises >>> that use REST to build services and what benefits they gained from it. >>> >>> Thanks >>> Anand >>> >>> >>> >> >> >
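The feed-driven sync David describes, reading a feed of resource changes and turning it into SQL for the target database, could look roughly like this. This is a hedged sketch: the feed contents, table and column names are invented, and real code would use parameterized SQL rather than string interpolation:

```python
# Rough sketch of feed-driven sync: read an Atom feed of resource changes
# and turn each entry into a SQL statement for the target database.
# Feed contents and the "changes" table are illustrative assumptions.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
FEED = f"""<feed xmlns="{ATOM}">
  <entry><id>urn:item:7</id><title>widget renamed</title></entry>
  <entry><id>urn:item:9</id><title>widget added</title></entry>
</feed>"""

def to_sql(feed_xml: str):
    root = ET.fromstring(feed_xml)
    stmts = []
    for entry in root.findall(f"{{{ATOM}}}entry"):
        rid = entry.findtext(f"{{{ATOM}}}id")
        # Production code would use placeholders, not string interpolation.
        stmts.append(f"INSERT INTO changes (resource_id) VALUES ('{rid}');")
    return stmts

for stmt in to_sql(FEED):
    print(stmt)
```

Because the feed is itself a RESTful resource, the consuming program needs nothing from the source system except a URI and the media type.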
2009/5/1 Jeff Robertson <jeff.robertson@...>: > > > Thoughts: > > a) The fact that SOAP has an SMTP binding possibly taints the whole idea > with the aftertaste of SOAP. REST people view "protocol independence" as > over-engineering, and try to stick with HTTP. I don't know about that. In the REST infrastructure that I've been working on, we have an HTTP connector but also a JMS, an IMAP, an intra-VM and a JCR connector, and others will be implemented as needed. > b) With email the only "verb" is to send a message. The "real" verb > will end up inside the message body, thus making it an RPC. Since the majority of tools and libraries and even expertise on REST is within the HTTP environment, and also because that was the first connector we developed, it is true that the others are "modelled" around HTTP, and so we have to put the GET etc. in other places convenient for each connector. But I don't think that makes it RPC, as all of the connectors connect to the same "abstract resource" (the class where the "services" each individual resource provides are injected, Java based), so all the processing is the same for all the connectors. Actually, if it weren't like this we could not have a uniform interface across all the connectors, which is what's important for us (let the client choose the way it connects to the resources while maintaining the same interface).
2009/5/2 mike amundsen <mamund@...>: > António: > > You are always welcome here. > > mca > http://amundsen.com/blog/ Hmm, all right, thank you. I think..... :)))))) > > > 2009/5/1 António Mota <amsmota@gmail.com> >> >> I was trying to answer a question that was directed to me, not writing >> a treatise, and I did so in an expeditious manner, off the top of my >> head. I didn't know that to post in here one has to be so "purist" >> with terminology; maybe I have to read the entire REST dissertation >> before I post something. >> >> What you call a misunderstanding is simply an imprecise use of >> terminology. I wrote "architectural constraints" instead of "sets of >> architectural constraints" and I said “interface constraints” instead >> of "architectural constraints of the uniform interface". Damn these >> simplifications.... >> >> However, what strikes me is why you were so quick to point out such terrible >> faults, and what did you say in response to the original question? >> Nothing!!! >> >> Gee, I thought this list was about trying to help other people with >> their questions, even if, as I said in the original post, maybe I'm >> not the best person to do it... But then again, it's much easier to >> point to others' mistakes than to point to correct answers... 
>> >> _______________________________________________ >> >> Melhores cumprimentos / Beir beannacht / Best regards >> >> António Manuel dos Santos Mota >> >> _______________________________________________ >> >> >> >> >> 2009/5/1 Etan Wexler <yahoo.com@...>: >> > >> > >> > António Mota wrote (in >> > <http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12519>): >> > >> >> [REST] is characterized by a number of achitectural constraints: >> >> >> >> Client/server model[,] >> >> Stateless protocols[,] >> >> Caching[,] >> >> Uniform Interface[,] >> >> Layering[,] >> >> Optional Code-on-demand[;] >> >> >> >> and by a number of interface constraints[:] >> >> >> >> Identification of resources[,] >> >> Manipulation of resources through representations[,] >> >> Self-descriptive messages[, and] >> >> Hypermedia as the engine of application state. >> > >> > What you list as “achitectural constraints” are architectural >> > styles. “An architectural style is a coordinated set of architectural >> > constraints that restricts the roles/features of architectural elements >> > and the allowed relationships among those elements within any >> > architecture that conforms to that style.” (as defined at the >> > beginning of section 1.5, “Styles” >> > >> > [<http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_5>], >> > of ASATDONBSA). What you list as “interface constraints” are >> > architechtural constraints. >> > >> >> If [a system] complies with all [of the constraints of REST, then >> >> maybe] >> >> you can call it [RESTful.] >> > >> > Why did you emphasize the word “maybe”? There are no circumstances >> > in which a system that complies with the contsraints of REST is not >> > RESTful, yet you imply that there are such circumstances. >> > >> > -- >> > Please do not include my address in public replies. I will read public >> > replies on the list. >> > >> > >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > >
2009/5/1 Bill Burke <bburke@...>: > Whoops. I thought this was a different (Java) list. Sorry for the Java > spin on this. > > I guess I should comment more generically then. I don't think an > Asynchronous web would: > > * break constraints. The constraints would be different. send/receive > instead of put/post/get If the constraints are different then it's not REST. It can be similar to REST, even be "constructed" following the methodology Mr. Fielding used, of successively applying sets of constraints to it. But it will not be REST. Let's look at the Client/Server style, for instance: "A server component, offering a set of services, listens for requests upon those services. A client component, desiring that a service be performed, sends a request to the server via a connector. The server either rejects or performs the request and sends a response back to the client. (...) A client is a triggering process; a server is a reactive process. Clients make requests that trigger reactions from servers. Thus, a client initiates activity at times of its choosing; it often then delays until its request has been serviced. On the other hand, a server waits for requests to be made and then reacts to them." This is clearly *not* the case for an asynchronous Web application. But then again, "Separation of concerns is the principle behind the client-server constraints." And "separation of concerns" should be a valid concern in such an application... So, is my interpretation that the client/server style is not applicable to an asynchronous app somehow wrong, or do we need another style that also implements "separation of concerns" using different constraints? And can similar considerations probably be made for the other sets of constraints, especially the Uniform Interface constraints? > > * break HATEOAS. In fact, IMO HATEOAS would thrive just as well in an > asynchronous environment. >
Lol. Ok, you got me there. I guess IT sucks at *every* company. But how is it not media type aware? mail messages have content-type headers. You could send json, xml, or whatever. Subbu Allamaraju wrote: > You mean, current email infrastructure has not scaled, does not have > strong tools, and is not media type aware? > > Subbu > > On May 1, 2009, at 5:34 AM, Bill Burke wrote: > >> I wonder why nobody has picked up using email, smtp/pop3, as an >> asynchronous protocol for the Internet. It has scaled pretty well. Has >> a constrained interface. Has a strong infrastrutre base of tools. Is >> representation oriented and media-type aware. Pretty ubiquitous. >> >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com > > --- > http://subbu.org > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Jeff Robertson wrote: > > > > Thoughts: > > a) The fact that SOAP has an SMTP binding possibly taints the whole idea > with the aftertaste of SOAP. REST people view "protocol independence" as > over-engineering, and try to stick with HTTP. > It just seems to me that trying to use HTTP asynchronously is like putting a round peg in a square hole. On the Internet, email is used for asynchronous communication. Why not use it for web services? Another thought is SMS, but I'm not sure how viable it is on a non-cell network. > b) With email the only "verb" is to send a message. The "real" verb > will end up inside the message body, thus making it an RPC. (Unless you > intend to make a uniform interface out of the SMTP commands themselves, > like HELO and DATA... but at that low a level SMTP is not actually > asynchronous so why use it?) > I guess there would be a logical constrained interface of "send" and "receive". -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
> But how is it not media type aware? mail messages have content-type > headers. You could send json, xml, or whatever. Yes, it is media type aware (cough ... mime cough ...) > > Subbu Allamaraju wrote: >> You mean, current email infrastructure has not scaled, does not >> have strong tools, and is not media type aware? >> Subbu >> On May 1, 2009, at 5:34 AM, Bill Burke wrote: >>> I wonder why nobody has picked up using email, smtp/pop3, as an >>> asynchronous protocol for the Internet. It has scaled pretty >>> well. Has >>> a constrained interface. Has a strong infrastrutre base of >>> tools. Is >>> representation oriented and media-type aware. Pretty ubiquitous. >>> >>> >>> -- >>> Bill Burke >>> JBoss, a division of Red Hat >>> http://bill.burkecentral.com >> --- >> http://subbu.org > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
You might also want to look at Django-piston: http://bitbucket.org/jespern/django-piston/wiki/Home http://bitbucket.org/jespern/django-piston/wiki/Documentation
Bill,
I'm not sure what you're suggesting. Using email protocols such as
SMTP/IMAP/POP3 for machine-to-machine communication? Sure, why not, as
long as the unreliability (and lack of idempotent methods) is handled
in the application layer, this is certainly doable. (One problem is
that on the public Internet, a huge number of messages from the same
source or to the same destination are very easily mistaken for spam.)
But unless you mean that one could define an architectural style
common to all types of email architectures, and give it a nice name,
it's got nothing to do with REST. What am I missing?
Stefan
On 03.05.2009, at 00:49, Bill Burke wrote:
>
>
>
>
> Jeff Robertson wrote:
> >
> >
> >
> > Thoughts:
> >
> > a) The fact that SOAP has an SMTP binding possibly taints the
> whole idea
> > with the aftertaste of SOAP. REST people view "protocol
> independence" as
> > over-engineering, and try to stick with HTTP.
> >
>
> It just seems to me that trying to use HTTP asynchronously is like
> putting a round peg in a square hole. On the Internet, email is used
> for asynchronous communication. Why not use it for web services?
>
> Another thought is SMS, but not sure how viable it is on a non-cell
> network.
>
> > b) With email the only only "verb" is to send a message. The
> "real" verb
> > will end up inside the message body, thus making it an RPC.
> (Unless you
> > intend to make a uniform interface out of the SMTP commands
> themselves,
> > like HELO and DATA.. but at that low of a level SMTP is not actually
> > asynchronous so why use it?)
> >
>
> I guess there would be a logical constrained interface of "send" and
> "receive".
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
A work-in-progress in this area is a collection of new Zend_Rest_* classes in the Zend Framework for PHP.
http://framework.zend.com/wiki/display/ZFPROP/Zend_Controller_Router_Route_Rest+-+Luke+Crouch
We are using this kind of MVC design + RESTful architecture on our API and I'm enjoying it. Of course, I wrote it so I'm 100% biased. :) But hopefully it can give you some additional ideas.
-L
--- In rest-discuss@yahoogroups.com, Anand Ramanathan <rcanand@...> wrote:
>
> Thanks, Peter.
>
> That was very useful. Is your API built with this model public? It
> would be useful to play with it to get a first hand experience.
>
> Thanks much
> Anand
>
> On Fri, May 1, 2009 at 1:14 PM, Peter Keane <pkeane@...> wrote:
> > On Fri, May 1, 2009 at 2:16 PM, rcanand <rcanand@...> wrote:
> >>
> >>
> >> Hi,
> >>
> >> Can anyone share their experiences with designing REST APIs using MVC style
> >> frameworks (such as Rails, Django,PHP MVC, etc.)?
> >>
> >> There seem to be two ways to design such APIs when accompanied with UI -
> >> 1) APIs and UIs in separate spaces (having a separate URI path for API
> >> versus UI for the same resource)
> >> 2) Same URI for each resource, but using some form of content negotiation to
> >> return UI formats like html or API formats like XML.
> >>
> >> Do you have any thoughts on the advantages and disadvantages of using either
> >> approach?
> >>
> >
> > Hi Anand-
> >
> > I went back and forth on this very thing, and ultimately decided to mix
> > everything together. The URI path only supplies the "model" (actually
> > controller, but those generally map to domain models), and the
> > controller itself dispatches on method (GET,PUT,POST,DELETE), resource
> > (we use URI regex-based routing w/ in each controller) and format.
> > Formats are specified by extension (.json,.html,.atom).
> >
> > So the "widgets" controller has a routing map:
> >
> > $routes = array(
> >     '/' => 'widgets',              // http://myapp/widgets
> >     '/{widget_id}' => 'widget'    // http://myapp/widget/23
> > );
> >
> > And defines functions by a convention: {method}{resource}{format} for example:
> >
> > function getWidgetJson($request) {
> >
> > }
> >
> > function putWidgetAtom($request) {
> >     $widget_id = $request->get('widget_id');
> >     etc....
> > }
> >
> > function getWidgetsHtml($request) {
> >     create HTML displaying list of widgets
> > }
> >
> > function postToWidgets($request) {
> >     $posted = $request->getBody();
> >     $mime_type = $request->getMediaType();
> >     dispatch here based on mime_type of posted resource (usually atom)
> >     etc....
> > }
> >
> > I have found this organizational structure to work well. The main
> > problem (which I did not show here) is the need to define the
> > authentication type either per handler or (as is usually the case)
> > per method. We use three auth types: HTTP basic, a cookie-based
> > auth, and no auth. Other than that, it works well.
> >
> > --peter keane
> >
> >
> >
> >> Thanks
> >> Anand
> >>
> >>
> >
>
And in the .NET world you have OpenRasta
http://trac.caffeine-it.com/openrasta
-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On
Behalf Of groovepapa82
Sent: 03 May 2009 22:50
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: MVC style REST scenarios
A work-in-progress in this area is a collection of new Zend_Rest_* classes
in the Zend Framework for PHP.
http://framework.zend.com/wiki/display/ZFPROP/Zend_Controller_Router_Route_R
est+-+Luke+Crouch
We are using this kind of MVC design + RESTful architecture on our API and
I'm enjoying it. Of course, I wrote it so I'm 100% biased. :) But hopefully
it can give you some additional ideas.
-L
Hi! I'm writing a new application for GTD workflows, and wanted to see if I can apply the REST principles to the web API. I have had much good input from the discussions here so far, but there is one thing I need help with.

Basically, I want the application to use Command and Query Separation at its root. This means that clients call queries to get state/views out, then perform commands on that, which are sent back to the server. In other words, clients never ever send state back, only commands.

So far I have resources in my URI structure for the queries, which can be GET, and that works quite ok, but then I also have the commands in my URI structure, such as:

/user/123/inbox/createtask

which on GET returns an empty JSON structure or HTML form, which can then be filled in and POST'ed back. There is a domain model on the server which interprets and executes this and all the domain logic around it. But from my reading of "RESTful Web Services" this corresponds to the REST/RPC hybrid architecture. It is difficult, at best, to do caching of resources, since there is no POST/PUT/DELETE which could explicitly be used to invalidate resource caches, such as that of /user/123/inbox. Using last-modified/ETags for caching works, though.

Does anyone have experience building CQS systems that have a more RESTful approach? How are others dealing with this?

Thanks, Rickard
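One way to keep Rickard's command/query split while staying closer to REST is to POST each command to a command resource and answer 303 See Other, pointing at the query resource it affected. A speculative sketch only; the URIs and the in-memory store are invented:

```python
# Sketch: command = POST to a command resource; the server executes it and
# redirects (303 See Other) to the query side, which the client may GET or
# ignore. URIs and the in-memory "inbox" store are illustrative assumptions.
inbox = {"tasks": []}

def post_create_task(payload: dict):
    """Command: mutate server-side state; never return state directly."""
    inbox["tasks"].append(payload["title"])
    # Redirecting names the query resource whose cached copies are now stale.
    return 303, {"Location": "/user/123/inbox"}

def get_inbox():
    """Query: pure read, freely cacheable with ETags/Last-Modified."""
    return 200, {"tasks": list(inbox["tasks"])}

status, headers = post_create_task({"title": "write report"})
print(status, headers["Location"])  # 303 /user/123/inbox
print(get_inbox())
```

This keeps commands out of GET territory, and because the POST targets (or redirects to) the query URI, intermediaries have a hook for cache invalidation that the /createtask-as-GET design lacks.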
>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes:
Rickard> Does anyone have experience building CQS-systems that have
Rickard> a more RESTful approach? How are others dealing with this?
CommandQuery doesn't work for distributed systems. As an Eiffel programmer
I use it all the time, but it doesn't fit distributed systems. The
overhead is already twice that of the pure REST model.
And note that REST knows command/query separation by use of GET versus
the other verbs. So it already separates them. They're just different
paradigms, don't mix local programming models with distributed
ones. That's a very different case.
--
Cheers,
Berend de Boer
The paper "FOAF+SSL: RESTful Authentication for the Social Web" was accepted for the spot2009 track of the European Semantic Web Conference http://www.eswc2009.org/ It will be available soon at http://spot2009.semanticweb.org/papers. In the meantime it is available here: http://bblfish.net/tmp/2009/05/spot2009_submission_15.pdf This should be of interest to this mailing list. Henry Story Social Web Architect Sun Microsystems Blog: http://blogs.sun.com/bblfish
Berend de Boer wrote: >>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes: > > Rickard> Does anyone have experience building CQS-systems that have > Rickard> a more RESTful approach? How are others dealing with this? > > CommandQuery doesn't work for distributed systems. As Eiffel programmer > I use it all the time, but it doesn't fit distributed systems. The > overhead is already twice the pure REST model. What creates this overhead? For example, if I have one command create a thousand objects, wouldn't that have to be replaced with a thousand PUT's if I didn't use commands? Or what kind of overhead are we talking about? > And note that REST knows command/query separation by use of GET versus > the other verbs. So it already separates them. They're just different > paradigms, don't mix local programming models with distributed > ones. That's a very different case. I am using CQS specifically because of its supposed virtues in distributed programming. See: http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/ Also, if I didn't encapsulate the domain logic to be executed in the command on the server, wouldn't it have to be duplicated in the client? And if so, doesn't that pose a maintenance and security problem? /Rickard
Berend de Boer wrote: >>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes: > > >> CommandQuery doesn't work for distributed systems. As Eiffel > >> programmer I use it all the time, but it doesn't fit > >> distributed systems. The overhead is already twice the pure > >> REST model. > > Rickard> What creates this overhead? > > You need two calls: one for the action, one to retrieve the result. But with CQS I don't have cases like that. I only have commands that do stuff and queries that gets stuff. No combos. If I want a combo I'll just make a redirect from the command to the query, and the client is free to choose whether to follow or not. > For distributed systems you simply want one call with all results > returned. > > And if you need to read stuff in a transaction it becomes even more > complicated, especially since HTTP is meant to be stateless. > > > Simple example: getting a new invoice number. If you break this up in > a POST to make the new invoice number available and a GET to retrieve > it, you need twice the overhead. And some significant programming > effort to get this right. > > You're far better of to POST the details and get back the invoice number. But this seems to imply that the client is supposed to be doing stuff. Why would a dumb client want an invoice number? If you have a process to create a new invoice, which needs it, then that should be on the server, not the client. So, in the CQS setup the client only assembles the data needed to perform the command, and the work is then done on the server. > Rickard> For example, if I have one command create a thousand > Rickard> objects, wouldn't that have to be replaced with a > Rickard> thousand PUT's if I didn't use commands? Or what kind of > Rickard> overhead are we talking about? > > Why? You just POST your 1000 objects. Right. So how is 1000 calls more efficient than 1? > Rickard> I am using CQS specifically because of its supposed > Rickard> virtues in distributed programming. 
> Rickard> See:
> Rickard> http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/
>
> I see. I can only suggest that not everything on the Internet is
> helpful. And this example is clearly about RPC:
>
> NServiceBus is not designed to be used for any and all types of
> communication in a given architecture. In the examples above,
> nServiceBus handles the publish/subscribe but leaves the synchronous
> RPC to existing solutions like WCF. Not only that, but synchronous
> RPC does have its place in architecture, just not across service
> boundaries. In all cases, data is served to users from a store
> different from that which transaction processing logic uses.
>
> Which, whatever its merits, is frankly freaking complex.

The above doesn't really have anything to do with CQS and REST, so it's
not relevant.

> Rickard> Also, if I didn't encapsulate the domain logic to be
> Rickard> executed in the command on the server, wouldn't it have
> Rickard> to be duplicated in the client? And if so, doesn't that
> Rickard> pose a maintenance and security problem?
>
> No, that's fine. But you need to use HTTP as it is meant to be,
> i.e. use REST.

Right. So again, doesn't that then presume that all domain logic is in
the client? I.e. the clients only get/put/post state, not commands? And
if so, won't that have maintenance and security problems (e.g. malicious
clients posting states that are invalid from a domain point of view)?

/Rickard
>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes:
Rickard> Right. So again, doesn't that then presume that all domain
Rickard> logic is in the client? I.e. the clients only get/put/post
Rickard> state, not commands? And if so, won't that have
Rickard> maintenance and security problems (e.g. malicious clients
Rickard> posting states that are invalid from a domain point of
Rickard> view)?
Clients hold the state, that's correct. Obviously you can store state on
the server on behalf of the client, but the interaction with the server
is determined by the client which must know what it is doing.
It's the server's duty to reject invalid input, from invalid verbs down
to invalid queries, state changes, etc.
So I don't see that "maintenance and security" problem. You always have
to validate external input, the web is no different.
--
Cheers,
Berend de Boer
Berend de Boer wrote:
> Clients hold the state, that's correct. Obviously you can store state on
> the server on behalf of the client, but the interaction with the server
> is determined by the client which must know what it is doing.
>
> It's the server's duty to reject invalid input, from invalid verbs down
> to invalid queries, state changes, etc.

But if the logic is in the client, then how could the server know if a
state change is good or not? The server wouldn't know how the client
came to its conclusion about the suggested new state, and so cannot
enforce validation rules about the new state.

> So I don't see that "maintenance and security" problem. You always have
> to validate external input, the web is no different.

The maintenance issue is that if I put all logic in the client I *have*
to ensure that all clients are up-to-date, or else I will have clients
with different versions accessing the same server, creating all sorts of
inconsistent state. As for security, I will have trouble knowing whether
the client sends state changes that are OK from a security perspective,
as I don't know *how* it arrived at those state changes, and I will also
have to send more data to the client than I want, because the logic is
done on the client (if it was on the server then the client would only
need the minimal data needed to create the command, which is easier to
secure).

Naaah... doesn't sound very appealing. So again, has anyone used CQS
with REST? Or are they incompatible? Will I have to stick with being a
REST/RPC hybrid?

/Rickard
Hi,
Any ideas on how to get a WS client to point to a completely different app while
at the same time giving access to the XML section with minimal impact to the
client? I am trying to map SOAP messages to RESTful URIs on the client prior to any message being issued.
Thanks,
Sean.
P.S. I am trying to come up with a way of calling an application (on the
client) that can access the XML section of a SOAP message and then map
it to a RESTful URI, with minimal impact on the client. I was hoping
that changing the WSDL URI might work (i.e. no change to client code),
but I don't think it will, as I would then be tied to the
operations/parameters in the WSDL (which does not suit).
>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes:
Rickard> But if the logic is in the client, then how could the
Rickard> server know if a state change is good or not? The server
Rickard> wouldn't know how the client came to its conclusion about
Rickard> the suggested new state, and so cannot enforce validation
Rickard> rules about the new state.
I don't get you. You are saying you rely on the client not to do the
wrong thing?
>> So I don't see that "maintenance and security" problem. You
>> always have to validate external input, the web is no different.
Rickard> The maintenance issue is that if I put all logic in the
Rickard> client I *have* to ensure that all clients are up-to-date,
Rickard> or else I will have clients with different versions
Rickard> accessing the same server, creating all sorts of
Rickard> inconsistent state. With the security I will have trouble
Rickard> knowing whether the client sends state changes that are ok
Rickard> from a security perspective as I don't know *how* they
Rickard> arrived at their state changes, and I will also have to
Rickard> send more data to the client than I want, because the logic
Rickard> is done on the client (if it was on the server then the
Rickard> client would only need the minimal data needed to create
Rickard> the command, which is easier to secure).
Rickard> Naaah... doesn't sound very appealing. So again, has anyone
Rickard> used CQS with REST? Or are they incompatible? Will I have
Rickard> to stick with being a REST/RPC hybrid?
Rickard> /Rickard
--
Cheers,
Berend de Boer
Berend de Boer wrote:
>>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes:
>
> Rickard> But if the logic is in the client, then how could the
> Rickard> server know if a state change is good or not? The server
> Rickard> wouldn't know how the client came to its conclusion about
> Rickard> the suggested new state, and so cannot enforce validation
> Rickard> rules about the new state.
>
> I don't get you. You are saying you rely on the client not to do the
> wrong thing?

Sort of: I don't want to rely on the client doing the right thing.
There's going to be all sorts of domain logic that will be rapidly
updated, so not keeping that in the client will make it easier to ensure
that the right code is executed.

But this leads me to commands being invoked to update state, and that
leads me away from using a resource-oriented view for updates. Instead
the client will query for resources, construct commands, and send those
back, which will update a bunch of different resources as a result.

Initially I considered keeping the logic for how to implement the
commands in the client (which would then lead to only state being sent
to the server), but this became too messy, insecure and unmaintainable,
as outlined.

But now I'm trying to figure out whether what I'm doing is simply not
compatible with REST, or if there is any way I can construct my
application to still be RESTful while doing commands. My reading of
Webber's "How to GET a Cup of Coffee"
(http://www.infoq.com/articles/webber-rest-workflow) leads me to believe
it is possible, as his "next" workflow step is pretty much exactly what
I want to accomplish: a set of commands in a workflow explicitly exposed
as resources that I can GET (to get the form) and POST (to execute). If
that's RESTful, then I'm happy!

/Rickard
>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes:
Rickard> My reading of Webbers "How to GET a cup of
Rickard> coffee"(http://www.infoq.com/articles/webber-rest-workflow)
Rickard> leads me to believe it is possible, as his "next" workflow
Rickard> steps is pretty much exactly what I want to accomplish. A
Rickard> set of commands in a workflow explicitly exposed as
Rickard> resources that I can GET (to get the form) and POST (to
Rickard> execute). If that's RESTful, then I'm happy!
That is indeed a good article, but your description of it doesn't sound
at rest at all. Don't think commands, think operations on resources.
If you don't think resources and don't consider that you are transferring
representations of those resources, you probably will misuse the HTTP
protocol and keep fighting it instead of using it.
--
Cheers,
Berend de Boer
Berend de Boer wrote:
> That is indeed a good article, but your description of it doesn't sound
> at rest at all. Don't think commands, think operations on resources.

I am! It's just that the resources don't correspond directly to the
underlying domain model. Instead they correspond to views of the model.

Example:

GET /user/123/inbox
-> return list of Tasks as JSON

- Client decides to Complete a Task

GET /user/123/inbox/complete
-> return form for completing Task

- Client fills in form

POST /user/123/inbox/complete with form

GET /user/123/inbox
-> return list of Tasks as JSON where previous Task is missing

The inbox is a resource, and the "complete" is a resource, and it is
only available since the Task is referenced from the Inbox (i.e. you
can't "complete" something else).

> If you don't think resources and don't consider that you are transferring
> representations of those resources, you probably will misuse the HTTP
> protocol and keep fighting it instead of using it.

I am thinking resources and I am thinking transferring representations,
but in application form rather than "raw" domain model form.

/Rickard
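The inbox flow above can be sketched in miniature. This is an in-memory illustration only: the dict-based "server", the task ids, and the link-relation name "complete" are assumptions for the sketch, not part of Rickard's actual design.

```python
# In-memory sketch of the "commands as resources" flow: the inbox and
# its "complete" command are both resources; GET on the command resource
# returns a form, POST executes it, and the command redirects to a query.

tasks = {"t1": {"title": "Write report"}, "t2": {"title": "Review code"}}

def get(uri):
    """Return a representation of a resource (the query side)."""
    if uri == "/user/123/inbox":
        # The inbox representation links to the "complete" command resource.
        return {"tasks": sorted(tasks),
                "links": {"complete": "/user/123/inbox/complete"}}
    if uri == "/user/123/inbox/complete":
        # GET on the command resource returns an empty form to fill in.
        return {"form": {"task": None}}
    raise KeyError(uri)

def post(uri, form):
    """Execute a command against a resource (the command side)."""
    if uri == "/user/123/inbox/complete":
        # The server, not the client, holds the domain logic: a task can
        # only be completed if the inbox actually references it.
        if form["task"] not in tasks:
            raise ValueError("cannot complete a task not in the inbox")
        del tasks[form["task"]]
        return {"redirect": "/user/123/inbox"}  # command redirects to query
    raise KeyError(uri)

inbox = get("/user/123/inbox")
form = get(inbox["links"]["complete"])["form"]
form["task"] = "t1"
result = post(inbox["links"]["complete"], form)
print(get(result["redirect"])["tasks"])  # the completed task is gone
```

Note that the client never decides how completion affects the domain model; it only fills in and submits the form the server offered.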
>>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes:
Rickard> I am! It's just that the resources don't correspond
Rickard> directly to the underlying domain model. Instead they
Rickard> correspond to views of the model.
Rickard> Example: GET /user/123/inbox
-> return list of Tasks as JSON
Rickard> - Client decides to Complete a Task GET
Rickard> /user/123/inbox/complete
-> return form for completing Task
Rickard> - Client fills in form POST /user/123/inbox/complete with
Rickard> form GET /user/123/inbox
-> return list of Tasks as JSON where previous Task is missing
Rickard> The inbox is a resource, and the "complete" is a resource,
Rickard> and it is only available since the Task is referenced from
Rickard> the Inbox (i.e. you can't "complete" something else).
Right. This is getting closer, but to make it REST, make sure everyTHING
has a URL.
So a task would be:
/user/123/inbox/<taskid>
Completing a task is simply:
DELETE /user/123/inbox/<taskid>
--
Cheers,
Berend de Boer
> So a task would be:
>
> /user/123/inbox/<taskid>
>
> Completing a task is simply:
>
> DELETE /user/123/inbox/<taskid>

That's certainly one way of doing it (and probably the way I'd do it
too), but it's not RESTful since there's no hypermedia :-) (Flame away!)

Leonard Richardson would describe that as a level 2 service in his
(excellent) taxonomy. Such services embrace URIs and HTTP but lack
hypermedia. RESTful services are level 3, which is level 2 + hypermedia.

To do this RESTfully, you need the representation to tell you that you
should complete it with a DELETE on a particular URI, so:

...
<link rel="complete.me" href="/user/123/inbox" .../>
...

I'm not sure whether I'd put a helpful verb in that link or not (it
might not be honoured), or whether I'd use OPTIONS. But still, it's at
level 3 now because there's hypermedia present*.

Whether it _needs_ to be RESTful, however, is another point entirely.
There's value to the Web outside of being RESTful; it's simply that you
get more benefits (typically) from following REST.

Jim

* OK, so to be properly useful the representation would need to be
encoded as a hypermedia-aware format.
On May 5, 2009 2:11pm, Jim Webber <jim@...> wrote:
> Leonard Richardson would describe that as a level 2 service in his
> (excellent) taxonomy. Such services embrace URIs and HTTP but lack
> hypermedia. RESTful services are level three which is level 2 +
> hypermedia.

That classification seems interesting, can you provide some references?
On 5 May 2009, at 14:49, amsmota@... wrote:
> On May 5, 2009 2:11pm, Jim Webber <jim@...> wrote:
>
> >> Leonard Richardson would describe that as a level 2 service in his
> >> (excellent) taxonomy. Such services embrace URIs and HTTP but lack
> >> hypermedia. RESTful services are level three which is level 2 +
> >> hypermedia.
>
> That classification seems interesting, can you provide some
> references?

http://qconsf.com/sf2008/file?path=/qcon-sanfran-2008/slides//LeonardRichardson.pdf

We (Ian, Savas, and me) have embraced it as a key part of the narrative
of our book.

Jim
>>>>> "Jim" == Jim Webber <jim@...> writes:
Jim> http://qconsf.com/sf2008/file?path=/qcon-sanfran-2008/slides//LeonardRichardson.pdf
Thanks!
--
Cheers,
Berend de Boer
Rickard Öberg wrote:
> Berend de Boer wrote:
> > Clients hold the state, that's correct. Obviously you can store state on
> > the server on behalf of the client, but the interaction with the server
> > is determined by the client which must know what it is doing.
> >
> > It's the server's duty to reject invalid input, from invalid verbs
> > down to invalid queries, state changes, etc.
>
> But if the logic is in the client, then how could the server know if a
> state change is good or not? The server wouldn't know how the client
> came to its conclusion about the suggested new state, and so cannot
> enforce validation rules about the new state.
>
> > So I don't see that "maintenance and security" problem. You always have
> > to validate external input, the web is no different.
>
> The maintenance issue is that if I put all logic in the client I *have*
> to ensure that all clients are up-to-date, or else I will have clients
> with different versions accessing the same server, creating all sorts of
> inconsistent state. With the security I will have trouble knowing
> whether the client sends state changes that are ok from a security
> perspective as I don't know *how* they arrived at their state changes,
> and I will also have to send more data to the client than I want,
> because the logic is done on the client (if it was on the server then
> the client would only need the minimal data needed to create the
> command, which is easier to secure).

Doesn't HATEOAS solve this problem to some degree? From a versioning
perspective you can use conneg and versioned media types to know what
types of clients are sending you data and what version of the format
they are sending.

From a security perspective, since the server inserts the traversable
states, it is controlling what can be executed next. Craig mentioned
this as a key advantage of HATEOAS in the "Why HATEOAS" thread a couple
of weeks ago.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Another thing:
Why can't the "command" be its own data format? Consider a Bank
Account. When somebody goes to an ATM and withdraws money, they aren't
interacting with the account directly. Instead they are creating debit
or credit transactions and posting this data to the underlying system to
be processed.
So your URI might be:
/resource/{id}
and you post state changes with:
/resource/{id}/commands
where commands is some application/vnd.command+json or something like
that. That kind of structure gives you a lot of flexibility as you can
have URIs that point directly to the command executed. You can view
queued commands, view histories, stuff like that.
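The command-as-its-own-data-format idea can be sketched as follows. This is a hedged illustration: the field names ("command", "arguments") and the withdraw example are assumptions layered on top of the application/vnd.command+json media type mentioned above, not a defined format.

```python
import json

# Sketch of a command document that a client would POST to
# /resource/{id}/commands, like an ATM creating a debit transaction
# instead of editing the account representation directly.

def make_command(name, **arguments):
    """Client side: serialize a command as its own data format."""
    return json.dumps({"command": name, "arguments": arguments})

def parse_command(body):
    """Server side: decode the command before queueing or executing it."""
    doc = json.loads(body)
    return doc["command"], doc["arguments"]

body = make_command("withdraw", amount=100, currency="EUR")
name, args = parse_command(body)
print(name, args)
```

Because each POSTed command is its own document, the server can assign it a URI of its own, which is what makes the queued-commands and history views possible.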
Bill Burke wrote:
>
>
> Rickard Öberg wrote:
>>
>>
>>
>> Berend de Boer wrote:
>> > Clients hold the state, that's correct. Obviously you can store
>> state on
>> > the server on behalf of the client, but the interaction with the
>> server
>> > is determined by the client which must know what it is doing.
>> >
>> > It's the server's duty to reject invalid commands down from invalid
>> > verbs to invalid queries, state changes, etc.
>>
>> But if the logic is in the client, then how could the server know if a
>> state change is good or not? The server wouldn't know how the client
>> came to its conclusion about the suggested new state, and so cannot
>> enforce validation rules about the new state.
>>
>> > So I don't see that "maintenance and security" problem. You always
>> have
>> > to validate external input, the web is no different.
>>
>> The maintenance issue is that if I put all logic in the client I *have*
>> to ensure that all clients are up-to-date, or else I will have clients
>> with different versions accessing the same server, creating all sorts of
>> inconsistent state. With the security I will have trouble knowing
>> whether the client sends state changes that are ok from a security
>> perspective as I don't know *how* they arrived at their state changes,
>> and I will also have to send more data to the client than I want,
>> because the logic is done on the client (if it was on the server then
>> the client would only need the minimal data needed to create the
>> command, which is easier to secure).
>>
>
> Doesn't HATEOAS solve this problem to some degree? From a versioning
> perspective you can use conneg and versioned media types to know what
> types of clients are sending you data and what version of the format
> they are sending.
>
> From a security perspective, since the Server inserts the traversable
> states, it is controlling what can be executed next. Craig mentioned
> this as a key advantage of HATEOAS in the "Why HATEOAS" thread a couple
> of weeks ago.
>
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Email is in fact used in this way in some applications, for example
posting a new entry on a blog or a group, or getting messages from a
group. I think HTTP is more powerful than email protocols for general
applications. If "async" is emphasized, then MQs are the thing to be
compared with.

Dong

On Fri, May 1, 2009 at 6:34 AM, Bill Burke <bburke@...> wrote:
> I wonder why nobody has picked up using email, smtp/pop3, as an
> asynchronous protocol for the Internet. It has scaled pretty well. Has
> a constrained interface. Has a strong infrastructure base of tools. Is
> representation oriented and media-type aware. Pretty ubiquitous.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com

--
http://dongnotes.blogspot.com/
It seems to me that email is pretty darn RESTful in its SMTP and POP forms. SMTP: put a message. POP: get a message, get meta data about the collection, delete a message. HTTP isn't in it. It's a RESTful architectural style already. What am I missing here? -Randy Fischer
Hello,

Bill Burke wrote:
> I wonder why nobody has picked up using email, smtp/pop3, as an
> asynchronous protocol for the Internet. It has scaled pretty well. Has
> a constrained interface. Has a strong infrastructure base of tools. Is
> representation oriented and media-type aware. Pretty ubiquitous.

I think someone picked up on that a while ago [*] (although it probably
doesn't quite match all your requirements). Perhaps something could be
done on this basis. The Subject line could be the first line (say
"GET /..." or "HTTP/1.1 200 ..."). The other headers could be added to
the e-mail headers, perhaps prefixed (say "X-HttpGateway-Host: ", ...),
and matching which response corresponds to which request could use
"In-Reply-To: ".

That would only be for "http:" URIs. I suppose "mailto:" URIs could be
used directly. In this case, you could just put the verb in the subject
(since there wouldn't be a path).

Best wishes,

Bruno.

[*] http://www.faqs.org/faqs/internet-services/access-via-email/
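Bruno's mapping can be sketched with Python's standard email library. The "X-HttpGateway-" prefix is his suggestion from the message above; the gateway address and example request are made up for illustration.

```python
from email.message import EmailMessage

# Sketch of Bruno's HTTP-over-email mapping: the Subject carries the
# HTTP request line, and the other HTTP headers are copied into the
# mail with an "X-HttpGateway-" prefix.

def http_to_email(method, path, http_headers,
                  gateway="gateway@example.org"):  # made-up gateway address
    msg = EmailMessage()
    msg["To"] = gateway
    msg["Subject"] = "%s %s" % (method, path)  # e.g. "GET /orders/333"
    for name, value in http_headers.items():
        msg["X-HttpGateway-" + name] = value
    return msg

req = http_to_email("GET", "/orders/333",
                    {"Host": "example.com", "Accept": "application/xml"})
print(req["Subject"])
print(req["X-HttpGateway-Host"])
```

A response gateway would do the reverse, putting the status line in the Subject and setting In-Reply-To to correlate it with the request, as Bruno describes.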
groovepapa82 wrote:
>
> http://framework.zend.com/wiki/display/ZFPROP/Zend_Controller_Router_Route_Rest+-+Luke+Crouch
>
> "ProductId: 'Aß/230/def'"

If you don't want those slashes interpreted as part of a URI hierarchy:

ProductId: 'Aß%2F230%2Fdef'

Escape them with percent-encoding in the URIs the apps generate. The
link text, or @title, can provide human-readable slashes. This seems to
be the sort of thing a framework should provide, IMHO.

Nice work there, Luke.

-Eric
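Eric's suggestion can be sketched with Python's standard library (an illustrative choice of language; the Zend proposal itself is PHP). `quote()` with `safe=''` percent-encodes the slashes too, so the whole product id becomes a single path segment rather than three.

```python
from urllib.parse import quote, unquote

# Percent-encode a product id so its slashes are not interpreted
# as URI hierarchy separators.
product_id = "Aß/230/def"
segment = quote(product_id, safe="")  # also UTF-8-encodes the 'ß'
uri = "/products/" + segment          # "/products/" is a made-up prefix

print(uri)               # /products/A%C3%9F%2F230%2Fdef
print(unquote(segment))  # round-trips back to Aß/230/def
```

Note the default `safe='/'` would leave the slashes alone, which is exactly the behavior being avoided here.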
Cool. Thanks for the tip.

-L

On Fri, May 8, 2009 at 4:07 AM, Eric J. Bowman <eric@...> wrote:
> groovepapa82 wrote:
> >
> > http://framework.zend.com/wiki/display/ZFPROP/Zend_Controller_Router_Route_Rest+-+Luke+Crouch
> >
> > "ProductId: 'Aß/230/def'"
>
> If you don't want those slashes interpreted as part of a URI hierarchy:
>
> ProductId: 'Aß%2F230%2Fdef'
>
> Escape them with percent-encoding in the URIs the apps generate. The
> link text, or @title, can provide human-readable slashes. This seems
> to be the sort of thing a framework should provide, IMHO.
>
> Nice work there, Luke.
>
> -Eric
Why aren't more people utilizing MIT Kerberos? From what little I know,
it is very secure and most major browsers support the GSS-API
authentication mechanism out of the box.

Mark

--- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote:
>
> In digest, the client always sends the realm. If the client wants to
> log out by issuing a POST or DELETE on its authentication resource, the
> server will know that the realm is no longer valid. At that stage it
> will consider the authentication for that realm to be outdated and the
> user to be anonymous. Any access to a protected resource will simply
> re-trigger a 401.
>
> I've not experimented with that technique yet. The server would have to
> keep track of which realms were used and discarded, or simply of the
> ones that have been issued, and delete them when the user logs out or
> times out.
>
> I'm a bit worried that this approach leads to UAs still believing they
> are authorized, and suddenly stopping being authorized without any form
> of obvious communication. Maybe the fact that a 401 is re-issued is
> enough to make that solution RESTful?
>
> Seb
> --------------------------------------------------
> From: "Berend de Boer" <berend@...>
> Sent: Thursday, September 25, 2008 10:36 PM
> To: "Ryan Tomayko" <rtomayko@...>
> Cc: <rest-discuss@yahoogroups.com>
> Subject: Re: [rest-discuss] Re: Authentication and authorization
>
> >>>>>> "Ryan" == Ryan Tomayko <rtomayko@...> writes:
> >
> > Ryan> While this may be _a_ solution, it sure doesn't feel like a
> > Ryan> very good one.
> >
> > Ryan> Shouldn't forced logout be a function provided by UAs (and
> > Ryan> other types of clients)?
> >
> > Recently someone emailed me a different solution: always use a logged
> > in user, which would be "anonymous" if you don't know anything else.
> > So you don't have to force logout, you are always logged in as someone.
> >
> > --
> > Cheers,
> >
> > Berend de Boer
> >
> > ------------------------------------
> >
> > Yahoo! Groups Links
Mark Waddle wrote:
> Why aren't more people utilizing MIT Kerberos? From what little I know,
> it is very secure and most major browsers support the GSS-API
> authentication mechanism out of the box.

I'd say it's because Kerberos (via SPNEGO/GSS-API) is very centralised.
It requires the client to configure the system to talk to the Kerberos
server (whether the same as the particular web server's, or federated)
and to have an account on it.

Best wishes,

Bruno.
Since the IETF is taking up (at least considering for standards track?)
OAuth... Anyone here familiar enough with two-legged OAuth to say
whether it is suitable for RESTful designs as another option along
w/ Basic/Digest?

--peter

On Fri, May 8, 2009 at 11:15 AM, Bruno Harbulot
<Bruno.Harbulot@manchester.ac.uk> wrote:
> Mark Waddle wrote:
>
>> Why aren't more people utilizing MIT Kerberos? From what little I know,
>> it is very secure and most major browsers support the GSS-API
>> authentication mechanism out of the box.
>
> I'd say it's because Kerberos (via SPNEGO/GSS-API) is very centralised.
> It requires the client to configure the system to talk to the Kerberos
> server (same as the particular web server or federated) and to have an
> account on it.
>
> Best wishes,
>
> Bruno.
Two-legged OAuth requires a pre-existing trust relationship, set up out
of band, between the client and server. It extends HTTP Auth
(Authorization: header, 4xx responses w/ WWW-Authenticate: OAuth
challenge) so it should be very compatible.

On Fri, May 8, 2009 at 10:07 AM, Peter Keane <pkeane@...> wrote:
> Since the IETF is taking up (at least considering for standards
> track?) OAuth.... Anyone here familiar enough with two-legged OAuth
> to say whether it is suitable for RESTful designs as another option
> along w/ Basic/Digest?
>
> --peter
>
> On Fri, May 8, 2009 at 11:15 AM, Bruno Harbulot
> <Bruno.Harbulot@...> wrote:
> > Mark Waddle wrote:
> >
> >> Why aren't more people utilizing MIT Kerberos? From what little I know,
> >> it is very secure and most major browsers support the GSS-API
> >> authentication mechanism out of the box.
> >
> > I'd say it's because Kerberos (via SPNEGO/GSS-API) is very centralised.
> > It requires the client to configure the system to talk to the Kerberos
> > server (same as the particular web server or federated) and to have an
> > account on it.
> >
> > Best wishes,
> >
> > Bruno.
Let's say I have an Order resource in an ecommerce Order Entry system.
How would I implement my service so that I can cancel an order rather
than delete it? One way is to have the cancelled state as part of the
order. Then I can just PUT a new representation with the cancelled state
set to true:
PUT /orders/333
content-type: application/xml
<order id="333">
<cancelled>false</cancelled>
...
</order>
Seems kinda heavy to me.
Would it still be RESTful to define a "cancelled" URI that you could PUT
or POST to in order to change the state?
/orders/333/cancelled
or
/orders/333?cancel=true
You don't even need to send data to change the state in this scenario.
But the problem with this from a pure RESTful standpoint is, isn't this
a mini-RPC? My thought at first is YES IT IS....
.... But, consider if you have cancelling as part of a HATEOAS
<order id="333">
<atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/>
...
</order>
Now, I have a CANCEL link that if I follow changes the state of my
resource. Doesn't seem so RPCish now that I've embedded it as a link.
Maybe the answer is /orders/333/cancelled isn't very RESTful by itself,
but when combined with HATEOAS it is?
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Interesting question...
too bad there isn't a
CANCEL /orders/333
Perhaps have a unique URL for resource state changes:
PUT /orders/333/status?state=CANCEL
I really don't like atom links for resource state changes... In HTML, there
would be a <form>. It would be nice to have a form-type idiom for
application/xml... perhaps something like an element with an action and a
method (plus some other things that need to be fleshed out):
<order id="333">
<resource-transitions>
<cancel action="/orders/333/status?state=CANCEL" method="PUT" />
</resource-transitions>
</order>
I'm not completely excited about that specific XML, but I think that the
server should be explicit with non-GET links. IMHO, atom links seem to
indicate a GET and therefore aren't the right idiom for POST/PUT/DELETE.
IMHO, HATEOAS ideally should be implemented with the server explicitly
telling the client how to invoke its next state...
-Solomon
On Fri, May 8, 2009 at 6:09 PM, Bill Burke <bburke@...> wrote:
>
>
> Let's say I have an Order resource in a ecommerce Order Entry system.
> How would I implement my service so that I can cancel an order rather
> than delete it? One is to have the cancel state as part of the order.
> THen I can just put a new representation with the cancelled state set to
> true:
>
> PUT /orders/333
> content-type: application/xml
>
> <order id="333">
> <cancelled>false</cancelled>
> ...
> </order>
>
> Seems kinda heavy to me.
>
> Would it still be restful to define a "cancelled" URI that you could put
> or post to to change the state?
>
> /orders/333/cancelled
>
> or
>
> /orders/333?cancel=true
>
> You don't even need to send data to change the state in this scenario.
> But the problem with this from a pure RESTful standpoint is, isn't this
> a mini-RPC? My thought at first is YES IT IS....
>
> .... But, consider if you have cancelling as part of a HATEOAS
>
> <order id="333">
> <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/>
> ...
> </order>
>
> Now, I have a CANCEL link that if I follow changes the state of my
> resource. Doesn't seem so RPCish now that I've embedded it as a link.
> Maybe the answer is /orders/333/cancelled isn't very RESTful by itself,
> but when combined with HATEOAS it is?
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
>
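Solomon's `<resource-transitions>` idiom can be consumed on the client side along these lines. The element names come from his example above; the parsing approach itself is an illustrative assumption, not part of his proposal.

```python
import xml.etree.ElementTree as ET

# A client parses the representation and learns both the URI and the
# method for a transition from the server, instead of hard-coding them.
representation = """\
<order id="333">
  <resource-transitions>
    <cancel action="/orders/333/status?state=CANCEL" method="PUT" />
  </resource-transitions>
</order>"""

def find_transition(xml_text, name):
    """Return (method, action) for a named transition, or None if the
    server did not offer it in this representation."""
    root = ET.fromstring(xml_text)
    elem = root.find("resource-transitions/" + name)
    if elem is None:
        return None
    return elem.get("method"), elem.get("action")

method, action = find_transition(representation, "cancel")
print(method, action)
```

The absence of an element doubles as hypermedia state: if the order can no longer be cancelled, the server simply omits the `<cancel>` transition and `find_transition` returns None.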
Bill Burke wrote:
>
> Seems kinda heavy to me.

But that's the way it's done. ;-)

> Now, I have a CANCEL link that if I follow changes the state of my
> resource. Doesn't seem so RPCish now that I've embedded it as a
> link. Maybe the answer is /orders/333/cancelled isn't very RESTful by
> itself, but when combined with HATEOAS it is?

Linking to a procedure call doesn't make that procedure call a REST
resource. What happens if you GET /cancelled? What is it a
representation of? The resource? Or some action, i.e. a remote
procedure? If you aren't transferring representations of resources in
order to change their state, then you aren't using REST.

-Eric
On Fri, May 8, 2009 at 3:09 PM, Bill Burke <bburke@...> wrote:
>
>
> Let's say I have an Order resource in a ecommerce Order Entry system.
> How would I implement my service so that I can cancel an order rather
> than delete it? One is to have the cancel state as part of the order.
> THen I can just put a new representation with the cancelled state set to
> true:
>
> PUT /orders/333
> content-type: application/xml
>
> <order id="333">
> <cancelled>false</cancelled>
> ...
> </order>
>
> Seems kinda heavy to me.
>
> Would it still be restful to define a "cancelled" URI that you could put
> or post to to change the state?
>
> /orders/333/cancelled
>
> or
>
> /orders/333?cancel=true
>
> You don't even need to send data to change the state in this scenario.
> But the problem with this from a pure RESTful standpoint is, isn't this
> a mini-RPC? My thought at first is YES IT IS....
>
> .... But, consider if you have cancelling as part of a HATEOAS
>
> <order id="333">
> <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/>
> ...
> </order>
>
> Now, I have a CANCEL link that if I follow changes the state of my
> resource. Doesn't seem so RPCish now that I've embedded it as a link.
> Maybe the answer is /orders/333/cancelled isn't very RESTful by itself,
> but when combined with HATEOAS it is?
The precise value of a URI is not, in and of itself, "RESTful or not
RESTful" ... it is about how your overall architecture matches up to
the REST architectural patterns. In the case at hand, how does your
client know what URI to use for canceling an order? If it's discovered
(as you describe here with a <link> -- but a JSON field that said {
... "cancel" : "/orders/333/cancelled" ... } is semantically
equivalent, so the syntax isn't the important bit) and you do a POST
to it for a state change, you can certainly claim this is a RESTful
approach. If you just do the POST part but make the client do a
string concatenation ("/orders" + orderId + "/cancelled"), well not
quite so much ... but that's still a *lot* better than cancelling an
order with a GET :-).
Craig
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
>
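Craig's distinction can be sketched side by side. The JSON field name "cancel" follows his example; the representation itself is made up for illustration.

```python
import json

# A hypermedia-driven client reads the cancel URI out of the
# representation; a level-2-style client assembles it by string
# concatenation and thereby hard-codes the server's URI structure.
representation = json.loads(
    '{"id": 333, "status": "open", "cancel": "/orders/333/cancelled"}')

def cancel_uri_hypermedia(rep):
    """Follow the link the server put in the representation."""
    return rep["cancel"]

def cancel_uri_concatenated(order_id):
    """Hard-code the server's URI layout on the client."""
    return "/orders/" + str(order_id) + "/cancelled"

# Both yield the same URI today, but only the first keeps working if the
# server later changes its URI layout, or withholds the link when the
# order can no longer be cancelled.
print(cancel_uri_hypermedia(representation))
print(cancel_uri_concatenated(333))
```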
Hello REST gurus,
One of my colleagues asked me "What would be the method mapping for Credit Card validation or fraud detection?"
I responded that we should go with GET to validate credit card details (because of idempotency). Since we are dealing with credit cards, my assumption is that the resource (Credit Card) is secured by SSL.
Please give your inputs. Thanks.
With regards,
Saravan.
>>>>> "Saravanakumaar" == Saravanakumaar Jeyabalan <jsarava@...> writes:
Saravanakumaar> I was responding to him that we should go with GET
Saravanakumaar> to validate credit card details (Because of
Saravanakumaar> idempotency). Since we are dealing with credit
Saravanakumaar> card, my assumption is the resource (Credit Card)
Saravanakumaar> is secured by SSL.
Yeah, it is sort of idempotent. I would use GET as well. And indeed,
never do things like this without SSL.
--
Cheers,
Berend de Boer
Might there be a caching concern here? I'm specifically remembering something about "aggressive" caching in IE6.

On Saturday, May 9, 2009, Berend de Boer <berend@...> wrote:
>>>>>> "Saravanakumaar" == Saravanakumaar Jeyabalan <jsarava@...> writes:
>
> Saravanakumaar> I was responding to him that we should go with GET
> Saravanakumaar> to validate credit card details (Because of
> Saravanakumaar> idempotency). Since we are dealing with credit
> Saravanakumaar> card, my assumption is the resource (Credit Card)
> Saravanakumaar> is secured by SSL.
>
> Yeah, it is sort of idempotent. I would use GET as well. And indeed,
> never do things like this without SSL.
>
> --
> Cheers,
>
> Berend de Boer
--- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@...> wrote:
>
> On Fri, May 8, 2009 at 3:09 PM, Bill Burke <bburke@...> wrote:
> >
> >
> > Let's say I have an Order resource in an ecommerce Order Entry system.
> > How would I implement my service so that I can cancel an order rather
> > than delete it? One is to have the cancel state as part of the order.
> > Then I can just put a new representation with the cancelled state set to
> > true:
> >
> > PUT /orders/333
> > content-type: application/xml
> >
> > <order id="333">
> > <cancelled>true</cancelled>
> > ...
> > </order>
> >
> > Seems kinda heavy to me.
> >
> > Would it still be restful to define a "cancelled" URI that you could put
> > or post to to change the state?
> >
> > /orders/333/cancelled
> >
> > or
> >
> > /orders/333?cancel=true
> >
> > You don't even need to send data to change the state in this scenario.
> > But the problem with this from a pure RESTful standpoint is, isn't this
> > a mini-RPC? My thought at first is YES IT IS....
> >
> > .... But, consider if you have cancelling as part of a HATEOAS
> >
> > <order id="333">
> > <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/>
> > ...
> > </order>
> >
> > Now, I have a CANCEL link that if I follow changes the state of my
> > resource. Doesn't seem so RPCish now that I've embedded it as a link.
> > Maybe the answer is /orders/333/cancelled isn't very RESTful by itself,
> > but when combined with HATEOAS it is?
>
> The precise value of a URI is not, in and of itself, "RESTful or not
> RESTful" ... it is about how your overall architecture matches up to
> the REST architectural patterns. In the case at hand, how does your
> client know what URI to use for canceling an order? If it's discovered
> (as you describe here with a <link> -- but a JSON field that said {
> ... "cancel" : "/orders/333/cancelled" ... } is semantically
> equivalent, so the syntax isn't the important bit) -- and you do a POST
> to it for a state change, you can certainly claim this is a RESTful
> approach. If you just do the POST part but make the client do a
> string concatenation ("/orders" + orderId + "/cancelled"), well not
> quite so much ... but that's still a *lot* better than cancelling an
> order with a GET :-).
>
> Craig
>
>
> >
> > --
> > Bill Burke
> > JBoss, a division of Red Hat
> > http://bill.burkecentral.com
> >
> >
>
I would POST to a resource collection that represents canceled orders in the application, such that when a GET is done against this collection, I get all canceled orders.
As long as all the other constraints are adhered to, this would seem RESTful in my opinion. For example, the original GET on the order would have a link to the "canceled" order resource.
Eb
I tried to post this yesterday, but I think I might have accidentally replied directly to Bill. Anyway, what about having a status resource that is part of the order?
GET /orders/333
<order id="333">
<status>
<atom:link rel="self" href="status" />
<value>open</value>
</status>
...
</order>
GET /orders/333/status
<status>
<value>open</value>
</status>
Then you can update status like this:
PUT /orders/333/status
<status>
<value>cancelled</value>
</status>
GET /orders/333
<order id="333">
<status>
<atom:link rel="self" href="status" />
<value>cancelled</value>
</status>
...
</order>
This seems to stay restful while not forcing you to pass a potentially large order in its entirety.
I'm assuming here that the media type has a rule defining a default value when there is no "self" link. Sounds reasonable to me, but maybe it's bad for reasons I haven't thought of. In that case you could pass the "self" link as part of every representation.
Aaron
--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote:
>
> Let's say I have an Order resource in an ecommerce Order Entry system.
> How would I implement my service so that I can cancel an order rather
> than delete it? One is to have the cancel state as part of the order.
> Then I can just put a new representation with the cancelled state set to
> true:
>
> PUT /orders/333
> content-type: application/xml
>
> <order id="333">
> <cancelled>true</cancelled>
> ...
> </order>
>
> Seems kinda heavy to me.
>
> Would it still be restful to define a "cancelled" URI that you could put
> or post to to change the state?
>
> /orders/333/cancelled
>
> or
>
> /orders/333?cancel=true
>
> You don't even need to send data to change the state in this scenario.
> But the problem with this from a pure RESTful standpoint is, isn't this
> a mini-RPC? My thought at first is YES IT IS....
>
> .... But, consider if you have cancelling as part of a HATEOAS
>
> <order id="333">
> <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/>
> ...
> </order>
>
>
> Now, I have a CANCEL link that if I follow changes the state of my
> resource. Doesn't seem so RPCish now that I've embedded it as a link.
> Maybe the answer is /orders/333/cancelled isn't very RESTful by itself,
> but when combined with HATEOAS it is?
>
>
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
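A side note on Aaron's relative href="status" link: per RFC 3986, a relative reference resolves against the base URI minus its last segment, so a client has to treat the order URI as a directory-style base or it will resolve to the wrong resource. A minimal sketch, where the URIs come from the example and the helper function is hypothetical:

```python
from urllib.parse import urljoin

def resolve_subresource(order_uri, relative_href):
    # Treat the order URI as a "directory" base; otherwise
    # urljoin("http://example.com/orders/333", "status") would
    # yield "http://example.com/orders/status".
    base = order_uri if order_uri.endswith("/") else order_uri + "/"
    return urljoin(base, relative_href)

print(resolve_subresource("http://example.com/orders/333", "status"))
```

This is one reason a media type might prefer absolute hrefs, or spell out its own base-URI rule, as Aaron suggests.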
> I don't think they differ as much as you seem to believe they do,
> especially if the machine-to-machine interface is designed following
> HATEOAS.

I think it depends on the application and the interface requirements. In most machine-to-machine interactions, there is a clear intent. But when I design a user interface, I would also like to present information and other possible actions beyond the primary intent (an example is suggesting similar books when I buy one). When I do this using the same API, I cannot avoid a lot of chatter between the server and the client. To avoid this, I have seen a need for another set of interfaces which internally use the API designed for machine-to-machine interactions.

Suresh

On Sat, May 2, 2009 at 12:32 PM, Stefan Tilkov <stefan.tilkov@...> wrote:
> On 02.05.2009, at 04:22, Subbu Allamaraju wrote:
>
>> Just wondering how far one could take that since
>> human-machine interactions and machine-machine interactions differ
>> significantly in practice.
>
> I don't think they differ as much as you seem to believe they do,
> especially if the machine-to-machine interface is designed following
> HATEOAS. Of course there are practical problems, such as the fact that
> HTML supports only GET and POST, browsers don't support explicit
> setting of Accept headers or lack a logout option for HTTP Auth, but
> these restrictions are not restrictions of REST.
>
> E.g. if I'm writing an application client, say built using Java/Swing,
> that is driven by hypermedia contained in representations returned
> from the server -- would you expect that there'd have to be a second
> server API for other clients? I don't think so.
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
Eric J. Bowman wrote:
> Bill Burke wrote:
>
>> Seems kinda heavy to me.
>>
>
> But that's the way it's done. ;-)
>
>> Now, I have a CANCEL link that if I follow changes the state of my
>> resource. Doesn't seem so RPCish now that I've embedded it as a
>> link. Maybe the answer is /orders/333/cancelled isn't very RESTful by
>> itself, but when combined with HATEOAS it is?
>>
>
> Linking to a procedure call, doesn't make that procedure call a REST
> resource. What happens if you GET /cancelled?
/orders/{id}/cancelled is a thing. It is a state. It either exists or
doesn't exist. So, if you do a GET and the state exists:
HTTP/1.1 204 No Content
or even
HTTP/1.1 405 Method Not Allowed
Allow: PUT, DELETE
If it doesn't exist:
HTTP/1.1 404 Not Found
or even
HTTP/1.1 410 Gone
> What is it a
> representation of? The resource? Or some action, i.e. remote
> procedure? If you aren't transferring representations of resources in
> order to change their state, then you aren't using REST.
>
So you're saying a thing can't merely exist? It needs to have a
representation? I don't think so.
I think I've just convinced myself that even without the <link> this is
pretty restful.
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
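Bill's reading of /orders/{id}/cancelled as pure existence can be sketched as a tiny dispatch function. The status codes for GET are the ones he lists; the method names and the PUT/DELETE behavior are my assumptions about how such a resource might round out:

```python
def cancelled_resource(method, order_is_cancelled):
    """Map (method, state) to an HTTP status for /orders/{id}/cancelled.

    The state is a thing: it either exists or it doesn't, so GET is
    just an existence check and carries no representation.
    """
    if method == "GET":
        # 204 if the cancelled state exists, 404 if it doesn't.
        return (204, "No Content") if order_is_cancelled else (404, "Not Found")
    if method == "PUT":
        return (204, "No Content")   # creating the state cancels the order
    if method == "DELETE":
        return (204, "No Content")   # removing the state un-cancels it
    return (405, "Method Not Allowed")
```

A real server would of course persist the state change; this only shows the status-code contract being discussed.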
>>>>> "Brandon" == Brandon Carlson <bcarlso@...> writes:
Brandon> Might there be a caching concern here? I'm specifically
Brandon> remembering something about "aggressive" caching in IE6.
If you specify the following directive:
Cache-Control: no-cache
Even IE6 will not return a stale response. See:
http://support.microsoft.com/kb/234067
--
Cheers,
Berend de Boer
Hello,
Much thanks for giving your inputs. It shows my understanding of REST is getting better.
Please let us know in case other REST gurus think otherwise.
With regards,
Saravan.
--- On Mon, 11/5/09, Berend de Boer <berend@...> wrote:
From: Berend de Boer <berend@...>
Subject: Re: [rest-discuss] Credit Card Validation
To: "Brandon Carlson" <bcarlso@...>
Cc: jsarava@..., rest-discuss@yahoogroups.com
Date: Monday, 11 May, 2009, 1:05 AM
>>>>> "Brandon" == Brandon Carlson <bcarlso@gmail.com> writes:
Brandon> Might there be a caching concern here? I'm specifically
Brandon> remembering something about "aggressive" caching in IE6.
If you specify the following directive:
Cache-Control: no-cache
Even IE6 will not return a stale response. See:
http://support.microsoft.com/kb/234067
--
Cheers,
Berend de Boer
On Mon, May 11, 2009 at 7:03 AM, Saravanakumaar Jeyabalan <jsarava@...> wrote:
> Please let us know in case other REST gurus think otherwise.

I am not a REST guru, but I think it would depend on what you mean by "credit card validation".

Do you mean: just determine if this is a valid credit card, but don't do anything else? Then I would agree with GET.

Or, do you mean authorize or make a payment using this credit card? Then I think you need POST.
On Monday 11 May 2009, Bob Haugen wrote:
> On Mon, May 11, 2009 at 7:03 AM, Saravanakumaar Jeyabalan
> <jsarava@...> wrote:
> > Please let us know in case other REST gurus think otherwise.
>
> I am not a REST guru, but I think it would depend on what you mean by
> "credit card validation".
>
> Do you mean: just determine if this is a valid credit card, but don't
> do anything else? Then I would agree with GET.

In this case, what would be the resource targeted by the request and what would the URL look like? What would be a RESTful response? "yes"/"no", "true"/"false" in the body? Or something more involved? Maybe just a header with an empty body?

Michael

--
Michael Schuerig
mailto:michael@...
http://www.schuerig.de/michael/
>>>>> "Michael" == Michael Schuerig <michael@...> writes:
>> Do you mean: just determine if this is a valid credit card, but
>> don't do anything else? Then I would agree with GET.
Michael> In this case, what would be the resource targeted by the
Michael> request and what would the URL look like? What would be a
Michael> RESTful response? "yes"/"no", "true"/"false" in the
Michael> body? Or something more involved? Maybe just a header
Michael> with an empty body?
This url:
/valid-credit-cards/<4111111111111111>
can be responded to with an HTTP response of 200 OK. So basically this
url is for the list of all valid credit cards. And a 404 would be a
perfect response for an invalid credit card (isn't in that list).
You can have a body with yes/no or so, but that would be in addition
to the HTTP response.
--
Cheers,
Berend de Boer
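The thread leaves open what "valid" means for the /valid-credit-cards/ resource. One hedged, purely syntactic reading is the Luhn checksum, which the example number 4111111111111111 happens to pass; a real gateway would check much more. A sketch:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any double above 9, and require the total
    to be divisible by 10."""
    total = 0
    for i, char in enumerate(reversed(number)):
        d = int(char)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Mapping Berend's convention: membership in the "list" -> 200 OK,
# otherwise -> 404 Not Found.
print(luhn_valid("4111111111111111"))  # passes the checksum
print(luhn_valid("4111111111111112"))  # fails the checksum
```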
On Mon, May 11, 2009 at 8:34 AM, Michael Schuerig <michael@...> wrote: > On Monday 11 May 2009, Bob Haugen wrote: >> Do you mean: just determine if this is a valid credit card, but don't >> do anything else? Then I would agree with GET. > > In this case, what would be the resource targeted by the request and > what would the URL look like? What would be a RESTful response? > "yes"/"no", "true"/"false" in the body? Or something more involved? > Maybe just a header with an empty body? Check out the various payment gateway services, e.g. http://authorize.net/ or http://skipjack.com/
On Monday 11 May 2009, Berend de Boer wrote:
> >>>>> "Michael" == Michael Schuerig <michael@...> writes:
> >> Do you mean: just determine if this is a valid credit card,
> >> but don't do anything else? Then I would agree with GET.
>
> Michael> In this case, what would be the resource targeted by the
> Michael> request and what would the URL look like? What would be a
> Michael> RESTful response? "yes"/"no", "true"/"false" in the
> Michael> body? Or something more involved? Maybe just a header
> Michael> with an empty body?
>
> This url:
>
> /valid-credit-cards/<4111111111111111>
>
> can be responded to with an HTTP response of 200 OK. So basically
> this url is for the list of all valid credit cards. And a 404 would
> be a perfect response for an invalid credit card (isn't in that
> list).

I'm not really interested in credit card validation, but let's see if this approach can be extended to my running example of a movie database with movies, people, awards, and awardings (for lack of a better word). I need a way to create an awarding given an award, a year, a movie, and a person. However, this operation might fail with a conflict if the award has already been given to someone else for that year. This makes for a bad user experience at the client end. An earlier indication that an awarding may not be possible would surely be appreciated.

Before, I was considering some kind of RPC-ish query asking "is this combination of year, award, person, movie possible?". Now, I think it would be better to ask "give me the awarding for this award and year". A HEAD request would be sufficient; response statuses 200 and 404 could be interpreted as "conflict" and "go ahead" respectively. Of course, given concurrency, such a response is only advisory.

Michael

--
Michael Schuerig
mailto:michael@...
http://www.schuerig.de/michael/
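Michael's HEAD-probe convention (200 means the awarding exists, i.e. conflict; 404 means go ahead) can be sketched as a small client helper. The function name and the error handling for other statuses are mine, and as he notes, any answer is only advisory under concurrency:

```python
def interpret_awarding_probe(status_code: int) -> str:
    """Interpret the status of HEAD on an awarding resource
    (e.g. a hypothetical /awardings/{award}/{year} URI):
    200 -> the awarding already exists, so creating one would conflict;
    404 -> no awarding yet, go ahead."""
    if status_code == 200:
        return "conflict"
    if status_code == 404:
        return "go ahead"
    raise ValueError("unexpected status: %d" % status_code)
```

The server still needs its final check at creation time, since another client may claim the award between the probe and the POST.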
I guess I'm still a bit concerned, but maybe I shouldn't be. Consider that you are performing CVV2 validation with the credit card number... Using GET, your URL would look something like '/card/1111111111111114?cvv2=732'. Now, if this URL is cached anywhere, you basically have a valid card + CVV in a single URL, which goes a long way towards purchasing something online. Next, if you add in address verification information, you're just a Google search away from purchasing something using another person's identity.

I'd prefer to use a method that is not cacheable in this scenario.

Thoughts?

Brandon

On Mon, May 11, 2009 at 12:05 AM, Berend de Boer <berend@...> wrote:
> >>>>> "Brandon" == Brandon Carlson <bcarlso@...> writes:
>
> Brandon> Might there be a caching concern here? I'm specifically
> Brandon> remembering something about "aggressive" caching in IE6.
>
> If you specify the following directive:
>
> Cache-Control: no-cache
>
> Even IE6 will not return a stale response. See:
>
> http://support.microsoft.com/kb/234067
>
> --
> Cheers,
>
> Berend de Boer
>>>>> "Brandon" == Brandon Carlson <bcarlso@...> writes:
Brandon> Now, if this URL is cached anywhere you basically have a
Brandon> valid card + CVV in a single URL, which goes a long way
Brandon> towards purchasing something online. Next, if you add in
Brandon> address verification information you're just a Google
Brandon> search away from purchasing something using another
Brandon> person's identity.
Brandon> I'd prefer to use a method that is not cacheable in this
Brandon> scenario.
Brandon> Thoughts?
I thought we had established SSL as the baseline?
Without SSL all bets are off.
If you're concerned about local history: if someone has access to your
local history, you have other worries. For such a person it would be
trivial to install a key logger for example.
--
Cheers,
Berend de Boer
>>>>> "Michael" == Michael Schuerig <michael@...> writes:
Michael> Before, I was considering some kind of RPC-ish query
Michael> asking "is this combination of year, award, person, movie
Michael> possible?". Now, I think it would be better to ask "give
Michael> me the awarding for this award and year". A HEAD request
Michael> would be sufficient; response status 200 and 404 could be
Michael> interpreted as "conflict" and "go ahead" respectively. Of
Michael> course, given concurrency, such a response is only
Michael> advisory.
Yep, I think that's a good approach. For usability you have some JS to
do those checks, and there's a final check on the back-end where you
capture the situation that an insert fails, because someone has
already claimed the award.
--
Cheers,
Berend de Boer
Hi there,

After some intensive studies these days, I've come up with this little piece of presentation explaining the concept of REST from a user-experience point of view, which was well received by my colleagues.

So I deem it a good idea to share it with the community; it should more or less help deepen your understanding of REST as well as UX :)

http://www.slideshare.net/trilancer/restful-user-experience-1421793

Best regards

Wayne
What often differs is the shape of the representations you return for a resource.

Web pages have a tendency to aggregate much more information than what a machine is expecting. In a page, the shape of the data is often specialized to help the code in the view stay simple.

Making page and service share the same data model is difficult, and not always worth the additional effort.

Seb

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Suresh Harikrishnan
Sent: 10 May 2009 08:14
To: Stefan Tilkov; discussions of the Representational State Transfer
Subject: Re: [rest-discuss] Re: Separating user interfaces from application programming interfaces on the World Wide Web [rest-discuss]

> I don't think they differ as much as you seem to believe they do,
> especially if the machine-to-machine interface is designed following
> HATEOAS.

I think it depends on the application and the interface requirements. In most machine-to-machine interactions, there is a clear intent. But when I design a user interface, I would also like to present information and other possible actions beyond the primary intent (an example is suggesting similar books when I buy one). When I do this using the same API, I cannot avoid a lot of chatter between the server and the client. To avoid this, I have seen a need for another set of interfaces which internally use the API designed for machine-to-machine interactions.

Suresh

On Sat, May 2, 2009 at 12:32 PM, Stefan Tilkov <stefan.tilkov@...> wrote:
> On 02.05.2009, at 04:22, Subbu Allamaraju wrote:
>
>> Just wondering how far one could take that since
>> human-machine interactions and machine-machine interactions differ
>> significantly in practice.
>
> I don't think they differ as much as you seem to believe they do,
> especially if the machine-to-machine interface is designed following
> HATEOAS. Of course there are practical problems, such as the fact that
> HTML supports only GET and POST, browsers don't support explicit
> setting of Accept headers or lack a logout option for HTTP Auth, but
> these restrictions are not restrictions of REST.
>
> E.g. if I'm writing an application client, say built using Java/Swing,
> that is driven by hypermedia contained in representations returned
> from the server - would you expect that there'd have to be a second
> server API for other clients? I don't think so.
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
2009/5/14 Sebastien Lambla <seb@...>
> What often differs is the shape of the representations you return for a
> resource.
>
> Web pages have a tendency to aggregate much more information than what a
> machine is expecting. In a page, the shape of the data is often specialized
> to help the code in the view stay simple.
>
> Making page and service share the same data model is difficult, and not
> always worth the additional effort.

Agreed. But one should always strive to make the API usable as a UI. First, this makes the API easier for a developer to debug and to learn. The developer can simply click through the representations manually to understand what's going on. If you have an API that only a machine can understand (easily), you have real problems.

--
Nick
I'm pondering how a RESTful service can best support a highly interactive UI. I think HATEOAS precludes hard-coding application-specific intelligence into the client. I'm not sure whether it is reconcilable with shipping rules or other metadata to the client. And I have no good idea how to map to resources some of the things I'd need to find out from the server.

A case in point from a movie database (my running example for pestering various mailing lists): There are movies and their associated participants, there are unaffiliated people, and there are awards. Everyone's supreme goal is to receive an award. So the user tries to help, grabs an award and starts to drag it around. But where to drop it? There are all kinds of potential targets around, but on closer scrutiny (requiring intelligence) only some of them fit. You just can't honor an actor with a Best Picture Oscar. And, after all, the award may already have been given to some other person/movie for the relevant year.

In other words, which of the, say, 100 potential drop targets are really eligible is a highly dynamic decision best left to the server. So, given a list of candidate drop targets, how do I RESTfully ask the server to filter them and return only the real contenders?

There are two additional constraints I can immediately think of. There are too many candidates to comfortably stuff into the query string of a GET request. Having to ask the server at all is too bad, performance-wise, but unavoidable (AFAICT). Several trips, e.g. POSTing a new resource and then GETting the needed information from it in another request, is probably too much traffic.

I'm very curious to read your suggestions.

Michael

--
Michael Schuerig
mailto:michael@...
http://www.schuerig.de/michael/
Hi Wayne,

I would say this is the best answer I've seen to the RPC/SOAP vs REST debate.

Darrel

On Tue, May 12, 2009 at 11:18 PM, bwstudios117 <bwstudios117@...> wrote:
> Hi, there,
>
> After some intensive studies these days, I've come up with this little piece
> of presentation to explain the concept of REST from user experience, which
> was well received around my colleagues.
>
> So I deem it a good idea to share with the community, which shall more or
> less help you deepen understanding of REST as well as UX :)
>
> http://www.slideshare.net/trilancer/restful-user-experience-1421793
>
> Best regards
>
> Wayne
Hi Michael,

Seems like a bit of a strange interaction, but I have a few thoughts...

You could make the action of grabbing the award a state transition and do a GET on /Award/xyz. The returned representation could provide links to potential recipients, or could indicate that the award has already been given. Assuming that possible recipients are returned, these links could be rendered on the client as potential drop zones. Dropping the award could POST to a subresource of the recipient. Something like:

POST /Actor/Dustin_Hoffman/Awards?url=/Oscar/2009/BestMovie

Does that make any sense?

Darrel

On Sun, May 17, 2009 at 6:36 PM, Michael Schuerig <michael@...> wrote:
> I'm pondering how a RESTful service can best support a highly
> interactive UI. I think HATEOAS precludes hard-coding application
> specific intelligence on the client. I'm not sure if it is reconcilable
> with shipping rules or other metadata to the client. And I have no good
> idea how to map to resources some of the things I'd need to find out
> from the server.
>
> A case in point from a movie database (my running example for pestering
> various mailing lists): There are movies and their associated
> participants, there are unaffiliated people, and there are awards.
> Everyone's supreme goal is to receive an award. So the user tries to
> help, grabs an award and starts to drag it around. But where to drop it?
> There are all kinds of potential targets around, but on closer scrutiny
> (requiring intelligence) only some of them fit. You just can't honor an
> actor with a Best Picture Oscar. And, after all, the award may already
> have been given to some other person/movie for the relevant year.
>
> In other words, which one of the, say, 100 potential drop targets are
> really eligible, is a highly dynamic decision best left to the server.
> So, given a list of candidate drop targets, how do I RESTfully ask the
> server to filter them and return only the real contenders?
>
> There are two additional constraints I can immediately think of. There
> are too many candidates to comfortably stuff into the query string of a
> GET request. Having to ask the server at all is too bad, performance-
> wise, but unavoidable (AFAICT). Several trips, e.g. for POSTing a new
> resource and then GETting the needed information from it in another
> request, is probably too much traffic.
>
> I'm very curious to read your suggestions.
>
> Michael
>
> --
If the question is about a RESTful way to provide feedback to users quickly using a typical HTML UI, here is one way to look at the challenge:

One of the best ways to provide user feedback in an HTML UI is to use images. The image tag performs a GET on a URI. The URI need not be an actual binary image, but a resource that, after evaluating application state, returns an answer using a valid image media type.

With this in mind, consider the following simple example. Compose an HTML document that shows a series of "empty box" images with text names on them: "Animal", "Vegetable", "Mineral"

<img src="animal_box.png" />
<img src="veggie_box.png" />
<img src="mineral_box.png" />

and a set of images of various creatures.

<img src="fish.png" />
<img src="celery.png" />
<img src="granite.png" />

Now add a scripted UI that allows the user to drag the creature images around the UI and "drop" them on the empty box images. When the user executes the drop, the script will concatenate the name of the box image and the name of the creature image and update the src attribute of the box image:

<img src="animal_fish.png" />
or
<img src="veggie_granite.png" />

Upon receiving the GET request for the "new" image, the server can use rules on the server to evaluate the validity of the URI and return the proper response. For example:

GET /animal_fish.png could return 200 OK and an image indicating success (a green check mark)

but

GET /veggie_granite.png could return 404 Not Found and an image indicating failure (a red X)

Hopefully, this gives you some ideas on how to implement your solution.

mca
http://amundsen.com/blog/

On Sun, May 17, 2009 at 18:36, Michael Schuerig <michael@...> wrote:
> I'm pondering how a RESTful service can best support a highly
> interactive UI. I think HATEOAS precludes hard-coding application
> specific intelligence on the client. I'm not sure if it is reconcilable
> with shipping rules or other metadata to the client. And I have no good
> idea how to map to resources some of the things I'd need to find out
> from the server.
>
> A case in point from a movie database (my running example for pestering
> various mailing lists): There are movies and their associated
> participants, there are unaffiliated people, and there are awards.
> Everyone's supreme goal is to receive an award. So the user tries to
> help, grabs an award and starts to drag it around. But where to drop it?
> There are all kinds of potential targets around, but on closer scrutiny
> (requiring intelligence) only some of them fit. You just can't honor an
> actor with a Best Picture Oscar. And, after all, the award may already
> have been given to some other person/movie for the relevant year.
>
> In other words, which one of the, say, 100 potential drop targets are
> really eligible, is a highly dynamic decision best left to the server.
> So, given a list of candidate drop targets, how do I RESTfully ask the
> server to filter them and return only the real contenders?
>
> There are two additional constraints I can immediately think of. There
> are too many candidates to comfortably stuff into the query string of a
> GET request. Having to ask the server at all is too bad, performance-
> wise, but unavoidable (AFAICT). Several trips, e.g. for POSTing a new
> resource and then GETting the needed information from it in another
> request, is probably too much traffic.
>
> I'm very curious to read your suggestions.
>
> Michael
>
> --
> Michael Schuerig
> mailto:michael@...
> http://www.schuerig.de/michael/
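The server-side rule check in Mike's image trick can be sketched as a lookup keyed on the concatenated image name. The box/creature pairs come from his example; the dispatch function and the success/failure image names are hypothetical:

```python
# Which creature belongs in which box, per the animal/veggie/mineral example.
RULES = {
    "fish": "animal",
    "celery": "veggie",
    "granite": "mineral",
}

def respond(path: str):
    """Answer GET /<box>_<creature>.png with a status and an image name."""
    name = path.lstrip("/")
    if name.endswith(".png"):
        name = name[:-len(".png")]
    box, _, creature = name.partition("_")
    if RULES.get(creature) == box:
        return 200, "check.png"   # success image, e.g. a green check mark
    return 404, "x.png"           # failure image, e.g. a red X

print(respond("/animal_fish.png"))
print(respond("/veggie_granite.png"))
```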
On Monday 18 May 2009, Darrel Miller wrote:
> Seems like a bit of strange interaction but I have a few thoughts...

Eliza: Why do you say that?

More seriously, I don't think the suggested interaction is strange. If I did, I wouldn't have suggested it in the first place. Am I missing something? What do you think is strange about it?

> You could make the action of grabbing the award a state transition
> and do a GET on /Award/xyz. The returned representation could
> provide links to potential recipients, or could indicate that the
> award has already been given.

I'm sorry, I can't do that, Dave^H^Hrrel. The problem is that there are too many potential recipients. Say the user starts dragging a Best Actor in a Leading Role award. There are literally thousands of potential recipients, and the choice is only restricted by what the client is currently displaying to the user. Thus, the client needs a way to ask the server which of these candidates really are hopefuls.

Michael

--
Michael Schuerig
mailto:michael@...
http://www.schuerig.de/michael/
On Monday 18 May 2009, mike amundsen wrote: > If the question is about a RESTful way to provide feedback to users > quickly using a typical HTML UI, here is one way to look at the > challenge: Mike, although the practical background of my question is indeed an HTML-based RIA, this is not essential to the question. [...] > Now add a scripted UI that allows the user to drag the creature > images around the UI and "drop" them on the empty box images. When > the user executes the drop, the script will concatenate the name of > the box image and the name of the creature image and update the src > attribute of the box image: > > <img src="animal_fish.png" /> > or > <img src="veggie_granite.png" /> > > Upon receiving the GET request for the "new" image, the server can > use rules on the server to evaulate the validity of the URI and > return the proper response. > > For example: > GET /animal_fish.png could return 200 OK and an image that indicating > success (green check mark) > but > GET /veggie_granite.png could return 404 Not Found and an image > indicating failure (a red X) That's too late. I don't want to annoy users by indicating failure of an operation that wasn't going to succeed to begin with. That is, I don't want to tell the user that their drag & drop failed, rather, as soon as they start to drag, I want to indicate where they might drop with a very large chance of success (modulo actions of concurrent users). As I wrote in response to Darrell, it won't work to ask for all possible targets for award assignment as there are too many of them. The client does know about an eligible subset, but the server does not. The client has its own state, unknown to the server, and that's as it should be, in general, as long as the conversational state is maintained through navigational requests following links. You may compare this technical problem to a human conversation. It is the difference between asking the waiter to recount the entire menu vs. 
asking whether two specific dishes are available. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
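[Editorial sketch] Mike Amundsen's image-URI trick quoted above can be sketched as a tiny server-side handler. The valid-pair table, function name, and carrot example are illustrative assumptions, not from the thread; only the `animal_fish`/`veggie_granite` URIs come from his message.

```python
# Sketch of the image-URI validation trick: the client concatenates box
# and creature names into an image URI; the server checks the resulting
# combination against its rules and answers 200 or 404.
# VALID_PAIRS and handle_get are hypothetical names for illustration.

VALID_PAIRS = {("animal", "fish"), ("veggie", "carrot")}

def handle_get(path):
    """Return an HTTP status code for GET /<category>_<creature>.png."""
    name = path.lstrip("/").rsplit(".", 1)[0]   # strip leading "/" and ".png"
    category, _, creature = name.partition("_")
    return 200 if (category, creature) in VALID_PAIRS else 404

# handle_get("/animal_fish.png")    -> 200 (serve the green check mark)
# handle_get("/veggie_granite.png") -> 404 (serve the red X)
```

As Michael objects below, this validates only after the drop; it cannot drive the enable/disable feedback he wants at drag start.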
It sounds like a 'thicker' client leveraging object orientation of some kind would be a good solution. The thick client logic can be served as OO code-on-demand (e.g. Javascript) that augments your basic hypermedia formats to provide the functionality you are looking for (this would be the best thick client approach if you want to stay 'crawl-able'). When the user performs an object/resource interaction which the client-side code allows, the object/resource(s) can be submitted to your REST interface for final validation and persistence. Modular & OO code-on-demand would allow for relatively easy changes to the logic over time. I don't think much is lost by keeping these user actions (aside: are they application states?) away from the server, since they're very unlikely to be entry points to the application - plus, valid changes are eventually submitted to the server for persistence anyway. Regards, Mike Michael Schuerig wrote: > I'm pondering how a RESTful service can best support a highly > interactive UI. I think HATEOAS precludes hard-coding application > specific intelligence on the client. I'm not sure if it is reconcilable > with shipping rules or other metadata to the client. And I have no good > idea how to map to resources some of the things I'd need to find out > from the server. > > A case in point from a movie database (my running example for pestering > various mailing lists): There are movies and their associated > participants, there are unaffiliated people, and there are awards. > Everyone's supreme goal is to receive an award. So the user tries to > help, grabs an award and starts to drag it around. But where to drop it? > There are all kinds of potential targets around, but on closer scrutiny > (requiring intelligence) only some of them fit. You just can't honor an > actor with a Best Picture Oscar. And, after all, the award may already > have been given to some other person/movie for the relevant year. 
> > In other words, which one of the, say, 100 potential drop targets are > really eligible, is a highly dynamic decision best left to the server. > So, given a list of candidate drop targets, how do I RESTfully ask the > server to filter them and return only the real contenders? > > There are two additional constraints I can immediately think of. There > are too many candidates to comfortably stuff into the query string of a > GET request. Having to ask the server at all is too bad, performance- > wise, but unavoidable (AFAICT). Several trips, e.g. for POSTing a new > resource and then GETting the needed information from it in another > request, is probably too much traffic. > > I'm very curious to read your suggestions. > > Michael > >
On Monday 18 May 2009, Mike Kelly wrote: > It sounds like a 'thicker' client leveraging object orientation of > some kind would be a good solution. The thick client logic can be > served as OO code-on-demand (e.g. Javascript) that augments your > basic hypermedia formats to provide the functionality you are looking > for (this would be the best thick client approach if you want to stay > 'crawl-able'). When the user performs an object/resource interaction > which the client-side code allows, the object/resource(s) can be > submitted to your REST interface for final validation and > persistence. Modular & OO code-on-demand would allow for relatively > easy changes to the logic over time. My client is based on HTML and JavaScript, but that is just an incidental aspect. I could put as much logic there as I want, but the architecturally relevant question is whether this would be a good idea. Final validation lies with the server as always, but that is just too late to ensure a pleasant user experience. It's not nice to wait to tell a user that an action is not possible when the app could have unobtrusively kept him from even trying. > I don't think much is lost by keeping these user actions (aside: are > they application states?) away from the server; since they're very > unlikely to be entry points to the application - plus, valid changes > are eventually submitted to the server for persistence anyway. As far as the conversation between client and server is concerned, I'm after a look-before-you-leap query. There are, say, 100 state transitions that might or might not be possible, as far as the client knows. So the client wants to get advice from the server on which of these transitions are likely(!) to be possible. Based on this advice, the client offers only the likely actions/transitions to the user. Because at any one time there is a large number of potential actions/transitions, it is not possible to ask the server about each of them individually. 
Also, the entire state space is much too large to copy it to the client, and because the landscape is changed by other users, caching isn't much use either. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
Michael Schuerig wrote: > [snip]I think HATEOAS precludes hard-coding application > specific intelligence on the client. I think this "feature" of HATEOAS is misleading. If you're writing an AJAX application, you're still getting "application specific intelligence" from the server downloaded as Javascript. Unless you're writing a pure HTML based application, which it seems you're not, you're gonna break this *mythical* constraint. As I keep saying over and over, machine-based clients usually have to have application specific intelligence to actually work. Unless you're one of the < 1% applications that are generic enough to interpret things on the fly. FWIW, back in the mid-90s we had *much* richer UIs through VB, VC++, and Powerbuilder (and we were much much more productive). We had much more stateless services. What the web gave everybody was a standard distribution mechanism. Companies liked these rich UIs but they didn't like installing and upgrading them on 100s of callcenter computers. This may sound like I'm attacking REST, but I'm not. I'm just pointing out that religious design decisions will probably hurt you in the short and long run. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Monday 18 May 2009, Bill Burke wrote: > Michael Schuerig wrote: > > [snip]I think HATEOAS precludes hard-coding application > > specific intelligence on the client. > > I think this "feature" of HATEOAS is misleading. If you're writing > an AJAX application, you're still getting "application specific > intelligence" from the server downloaded as Javascript. Unless > you're writing a pure HTML based application, which it seems you're > not, you're gonna break this mythical constraint. As I keep saying > over and over, machine-based clients usually have to have application > specific intelligence to actually work. Unless you're one of the < > 1% applications that are generic enough to interpret things on the > fly. My statement was too strong. I'll restate it as: The client should not second-guess the server. Both are conversant in the domain of discourse, but the server is the one who sets the rules. Please, everyone, just forget about the implementation technology I'm incidentally using for the client. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
Code-on-demand (e.g. javascript) is part of ReST. The server can send all of the logic required as javascript on initial load, or event-driven as json, or however you want to do it. You can have as rich a UI as you want, as long as the states are embedded in representations from the server. No limitations on content of representation.
On Monday 18 May 2009, Bob Haugen wrote: > Code-on-demand (e.g. javascript) is part of ReST. > > The server can send all of the logic required as javascript on > initial load, or event-driven as json, or however you want to do it. Yes, I mentioned logic on the client. Probably I shouldn't have. The problem I've described is not resolved by adding more logic anywhere; it depends on state. The logic needs some data to chew on. If the relevant logic is on the client, somehow the data has to get there too. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Mon, May 18, 2009 at 2:04 AM, Michael Schuerig <michael@...> wrote: > > first place. Am I missing something? What do you think is strange about > it? Just the concept that users assign awards. I could understand if they voted for their favourite actor in each category based on nominations. Anyway, it's not important to the real discussion... > > I'm sorry, I can't do that, Dave^H^Hrrel. The problem is that there are > too many potential recipients. Say the user starts dragging a Best Actor > in a Leading Role award. There are literally thousands of potential > recipients and the choice is only restricted by what the client is > currently displaying to the user. Thus, the client needs a way to ask > the server which of these candidates really are hopefuls. > So create a hierarchy. The first representation returned could have 10 categories of recipients. You render those categories on the client and, as the client hovers over the category, you do another request and drill down into that category and render the results. I'm not sure at this point how the problem is any different than with a regular fat client. Darrel
On Tue, May 12, 2009 at 11:18 PM, bwstudios117 <bwstudios117@...> wrote: > > > Hi, there, > > After some intensive studies these days, I've come up with this little piece > of presentation to explain the concept of REST from user experience, which > was well received among my colleagues. > > So I deem it a good idea to share with the community, which shall more or > less help you deepen understanding of REST as well as UX :) > > > > http://www.slideshare.net/trilancer/restful-user-experience-1421793 Excellent presentation. One of the best I've seen. I have a small quibble with your suggesting that NOUNS should be unconstrained, but other than that, it's wonderful. I happen to call the vertices of the Interface Triangle of Constraints IFaPs (Identifiers, Formats, and Protocols), but Nouns (I), Representations (F), and Verbs (P) is definitely more accessible. On the unconstrained nouns issue...while I agree that there will be MORE nouns than verbs or formats, this does not mean they should be unconstrained. Constrained vocabularies of nouns are overall a good thing. Sometimes it's called reference data, controlled vocabularies, taxonomies, authorities, codes, master data, enumerations, etc. In web terms, it's the principle that resources should have as few aliases as possible. A good example of a RESTful move in this direction is the BBC's adoption of MusicBrainz identifiers for Songs, Artists, etc. In general, the more constrained the interface in all three dimensions (N/I, R/F, V/P), the bigger the "network effect": the more value a single user gains from more users using the same interface. The AWWW v1 has a good quote on this if you're interested... -- Nick
On Mon, May 18, 2009 at 8:49 AM, Michael Schuerig <michael@...> wrote: > Yes, I mentioned logic on the client. Probably I shouldn't have. The > problem I've described is not resolved by adding more logic anywhere, it > depends on state. The logic needs some data to chew on. If the relevant > logic is on the client, somehow the data has to get there too. Since both the logic and the data are coming from the server, you can get both at initial load or event-driven later. I guess I don't understand the problem.
On Monday 18 May 2009, Darrel Miller wrote: > On Mon, May 18, 2009 at 2:04 AM, Michael Schuerig <michael@...> wrote: > > first place. Am I missing something? What do you think is strange > > about it? > > Just the concept that users assign awards. I could understand if > they voted for their favourite actor in each category based on > nominations. Anyway, it's not important to the real discussion... I see, just forget about the details of the application, they are pretty dumb anyway. The only purpose is for me to explore tools and techniques. > > I'm sorry, I can't do that, Dave^H^Hrrel. The problem is that there > > are too many potential recipients. Say the user starts dragging a > > Best Actor in a Leading Role award. There are literally thousands > > of potential recipients and the choice is only restricted by what > > the client is currently displaying to the user. Thus, the client > > needs a way to ask the server which of these candidates really are > > hopefuls. > > So create a hierarchy. The first representation returned could have > 10 categories of recipients. You render those categories on the > client and, as the client hovers over the category, you do another > request and drill down into that category and render the results. Now I can't follow. The problem is not that I still need to render anything. Everything already is rendered. The problem is about which interactions between these things are possible and that depends on state only the server knows. A version of the same problem, simplified to the core, is this: There are 100 buttons on the screen each triggering a specific action (state transition). Only some of these actions are really possible at any one time and only the server knows which ones they are. Also, the server does not know what is shown on the screen. For a pleasant UI, I'd like to enable only those buttons whose associated actions the server tells me are most likely possible. 
Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Monday 18 May 2009, Bob Haugen wrote: > On Mon, May 18, 2009 at 8:49 AM, Michael Schuerig <michael@...> wrote: > > Yes, I mentioned logic on the client. Probably I shouldn't have. > > The problem I've described is not resolved by adding more logic > > anywhere, it depends on state. The logic needs some data to chew > > on. If the relevant logic is on the client, somehow the data has to > > get there too. > > Since both the logic and the data are coming from the server, you can > get both at initial load or event-driven later. I can't load the relevant data initially as it is bound to change over time. How to load it on demand is all my question is about. How do I ask RESTfully which of a set of 100 candidate state transitions are actually possible? There's time for one roundtrip to the server. > I guess I don't understand the problem. It's more like you're not seeing a problem where I am. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Mon, May 18, 2009 at 9:50 AM, Michael Schuerig <michael@...> wrote: > I can't load the relevant data initially as it is bound to change over > time. How to load it on demand is all my question is about. How do I ask > RESTfully which of a set of 100 candidate state transitions are actually > possible? There's time for one roundtrip to the server. One event-driven roundtrip to the server to get some json data is usually pretty quick. A lot faster than initial load. >> I guess I don't understand the problem. > > It's more like you're not seeing a problem where I am. Is the problem you're seeing that going to get the data using XMLHttpRequest will be too slow for you?
On Monday 18 May 2009, Bob Haugen wrote: > On Mon, May 18, 2009 at 9:50 AM, Michael Schuerig <michael@...> wrote: > > I can't load the relevant data initially as it is bound to change > > over time. How to load it on demand is all my question is about. > > How do I ask RESTfully which of a set of 100 candidate state > > transitions are actually possible? There's time for one roundtrip > > to the server. > > One event-driven roundtrip to the server to get some json data is > usually pretty quick. A lot faster than initial load. Yes, I violently agree. Then, how do you express RESTfully a query inquiring about the possibility of a set of ~100, or for the sake of argument arbitrarily many, state transitions? The "question" is too long to stuff everything into the URL for a GET request. POST or PUT requests are technically, RESTfully not appropriate. That is all my question is about and I thought I had made this clear initially. Judging by the amount of misunderstanding, however, I've made a very bad job of it. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Mon, May 18, 2009 at 11:42 AM, Michael Schuerig <michael@...> wrote: >> One event-driven roundtrip to the server to get some json data is >> usually pretty quick. A lot faster than initial load. > > Yes, I violently agree. Then, how do you express RESTfully a query > inquiring about the possibility of a set of ~100, or for the sake of > argument arbitrarily many, state transitions? The "question" is too long > to stuff everything into the URL for a GET request. POST or PUT requests > are technically, RESTfully not appropriate. 2 possibilities come to mind: 1. You got all the states etc from the server in the first place. Could you express a shorter query that would fit into the URL for a GET request, since you do not have to repeat everything for the server? 2. Personally, I would not hesitate to use POST for this situation, if I could not make it work with GET. Caching is not an issue anyway. But I am sure somebody else will disagree. (I am not a purist, and treat REST as guidelines which allow variances if I know what I am doing and the price of the variance is acceptable.) > That is all my question is about and I thought I had made this clear > initially. Judging by the amount of misunderstanding, however, I've made > a very bad job of it. Communication is difficult. I think I finally understand, although I wouldn't bet on it...
On Monday 18 May 2009, Bob Haugen wrote: > On Mon, May 18, 2009 at 11:42 AM, Michael Schuerig <michael@...> wrote: > >> One event-driven roundtrip to the server to get some json data is > >> usually pretty quick. A lot faster than initial load. > > > > Yes, I violently agree. Then, how do you express RESTfully a query > > inquiring about the possibility of a set of ~100, or for the sake > > of argument arbitrarily many, state transitions? The "question" is > > too long to stuff everything into the URL for a GET request. POST > > or PUT requests are technically, RESTfully not appropriate. > > 2 possibilities come to mind: > > 1. You got all the states etc from the server in the first place. > Could you express a shorter query that would fit into the URL for a > GET request, since you do not have to repeat everything for the > server? I don't think so. Over the time of a session, the server could have sent large amounts of data to the client. To refer to any chunk of that data later on is difficult at best. The only way to get a shortcut would be to reify conversational state, which, I think, would put an unnecessary burden on both client and server as, in principle, the interaction can be completely stateless. > 2. Personally, I would not hesitate to user POST for this situation, > if I could not make it work with GET. Caching is not an issue > anyway. But I am sure somebody else will disagree. I agree about the POST, using it is my fall-back solution at any rate, but only when I'm convinced that there is no better GET-based solution. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
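[Editorial sketch] Bob's option 2, the POST fall-back Michael accepts as a last resort, might look roughly like this. The `/eligibility` resource, function names, and the toy rule are illustrative assumptions, not anything proposed in the thread.

```python
# Minimal sketch of the POST-based look-before-you-leap query: the
# client POSTs its candidate URIs plus the award to a filter resource;
# the server applies its business rules and returns only the eligible
# subset. Endpoint and names are hypothetical.

def filter_eligible(award_uri, candidate_uris, is_eligible):
    """Server-side handler for POST /eligibility: return the subset of
    candidates to which the award may go, per the server's rules."""
    return [uri for uri in candidate_uris if is_eligible(award_uri, uri)]

# Toy rule standing in for the server's real business logic:
# a Best Actor award only applies to person resources.
def toy_rule(award_uri, target_uri):
    return "best-actor" in award_uri and target_uri.startswith("/people/")

eligible = filter_eligible("/awards/best-actor-2009",
                           ["/people/101", "/movies/42", "/people/7"],
                           toy_rule)
# eligible == ["/people/101", "/people/7"]
```

As Bob notes, this trades away cacheability, but since eligibility changes with the actions of concurrent users, the response would not be usefully cacheable anyway.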
Maybe I'm missing something, since you haven't described the application in detail, but I can't imagine a single interface that would present every nominee and every award for all Oscar categories. Practically speaking, wouldn't you (for the sake of the user) have to significantly constrain the UI?
For instance, you could create a bounded draggable area containing (drop target) nominees for a category and the (draggable) award for that category. This simple design decision eliminates the problem of deciding whether a user action was legal or not, because you just don't provide the user with illegal options.
> In other words, which one of the, say, 100 potential drop
> targets are
> really eligible, is a highly dynamic decision best left to
> the server.
Hullo Michael, I am not sure I fully grasp your situation, but I will attempt an answer nonetheless. My answer will be governed by the following assumptions. 1> Initially when the ~100 candidate state transitions were presented to the client, it was the server that defined this set of state transitions to the client. 2> That the model you propose is indeed the only way to present this problem, in that only the server can host or execute the logic needed to determine the allowed state transitions given the chosen award. Provided 2> is true and 1> holds, a record of that set could be stored on the server prior to sending it to the client, and a unique identifier (URI) that represents this set could be created on the server. The end effect is that for this particular client's request there will exist a resource (URI) on the server that keeps track of the entire list of candidates in the set sent to the client. That collection URI will be included in the request. When the client attempts to filter (grabs the award in your example), the client could simply send the award URI and the candidate state transition collection URI to the server. The server can then process this request by referring to the collection stored on the server side for this client. I believe that this would represent a ReSTful solution that would overcome the GET request size/length limitations. Another approach would be to make the selection criteria (logic) to be sent along with the data. This would mean that the assumption I made in 2> is no longer valid. JavaScript would be a particularly good choice for this, since it can be run on the browser and on the server. At my company, we use JavaScript in this fashion to execute web services (syntactic and semantic resource validation as well as business logic) for resources both on the server side and on the client. 
We have created a restful open source framework called Hannibal http://code.google.com/p/hannibalcodegenerator/ to make this easier. I hope this helps. Regards, Bediako On Mon, May 18, 2009 at 12:42 PM, Michael Schuerig <michael@...> wrote: > > > On Monday 18 May 2009, Bob Haugen wrote: > > On Mon, May 18, 2009 at 9:50 AM, Michael Schuerig > <michael@...> wrote: > > > I can't load the relevant data initially as it is bound to change > > > over time. How to load it on demand is all my question is about. > > > How do I ask RESTfully which of a set of 100 candidate state > > > transitions are actually possible? There's time for one roundtrip > > > to the server. > > > > One event-driven roundtrip to the server to get some json data is > > usually pretty quick. A lot faster than initial load. > > Yes, I violently agree. Then, how do you express RESTfully a query > inquiring about the possibility of a set of ~100, or for the sake of > argument arbitrarily many, state transitions? The "question" is too long > to stuff everything into the URL for a GET request. POST or PUT requests > are technically, RESTfully not appropriate. > > That is all my question is about and I thought I had made this clearly > initially. Judging by the amount of misunderstanding, however, I've made > a very bad job of it. > > Michael > > -- > Michael Schuerig > mailto:michael@... > http://www.schuerig.de/michael/ > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly (p) 202.683.7486 (f) 703.563.6279
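[Editorial sketch] Bediako's first approach, storing the candidate set under its own URI so that later requests can refer to the whole set by a short identifier, could be sketched like this. All endpoint and function names are hypothetical, and an in-memory dict stands in for server-side storage.

```python
# Sketch of the stored-set approach: POST the candidate set once, get
# back a short URI for it, then ask for the filtered view by combining
# that URI with the award. This sidesteps GET URL-length limits.
import itertools

_counter = itertools.count(1)
candidate_sets = {}  # in-memory stand-in for server-side state

def create_candidate_set(uris):
    """POST /candidate-sets -> URI of the newly created set resource."""
    set_uri = f"/candidate-sets/{next(_counter)}"
    candidate_sets[set_uri] = list(uris)
    return set_uri

def eligible_targets(set_uri, award_uri, is_eligible):
    """GET /candidate-sets/{id}/eligible?award=... (conceptually):
    filter the stored set through the server's business rules."""
    return [u for u in candidate_sets[set_uri] if is_eligible(award_uri, u)]

set_uri = create_candidate_set(["/people/101", "/movies/42"])
result = eligible_targets(set_uri, "/awards/best-actor-2009",
                          lambda a, u: u.startswith("/people/"))
```

Michael's objection below still applies: each such resource is created speculatively and the server must track it, which he considers a design burden.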
[MS:] > > In other words, which one of the, say, 100 potential drop > > targets are > > really eligible, is a highly dynamic decision best left to > > the server. On Monday 18 May 2009, Jared Hirsch wrote: > Maybe I'm missing something, since you haven't described the > application in detail, but I can't imagine a single interface that > would present every nominee and every award for all Oscar categories. Please don't read too much into the concrete example. The example is debatable as is the number of 100 possible actions/transitions. The general problem exists notwithstanding. > Practically speaking, wouldn't you (for the sake of the user) have to > significantly constrain the UI? I don't think so, although my concern is not really with the details of UI design. Just consider: Over the course of a session with the application, a member of the Academy looks at the details of 20 movies. Finally, he hits the right one. Huge production, huge cast. He decides that one of them is going to receive the award for Best Actor in a Supporting Role and starts dragging the little Oscar icon across the screen. The client application doesn't know anything of semantic relevance about awards. Those business rules are left to the server. And it is only the server who knows that the award, by now in mid-flight across the screen, does not apply to directors, editors, or female actors. 
http://www.schuerig.de/michael/
On Monday 18 May 2009, Bediako George wrote: > Hullo Michael, > I am not sure I fully grasp your situation, but I will attempt an > answer nonetheless. My answer will be governed by the following > assumptions. 1> Initially when the ~100 candidate state transitions > were presented to the client, it was the server that defined this set > of state transitions to the client. That's not the case. The possible transitions accumulate over time. I'd say it's putting too much of a burden on the server to ask it to keep track. > Provided 2 is true and 1> holds, a record of that set could be stored > on the server prior to sending it to the client, and a unique > identifier (URI) that represents this set could be created on the > server. The end effect is that for this particular client's request > there will exist a resource (URI) on the server that keeps track of > the entire list of candidates in the set sent to the client. That is a valid technique, I surmise, but it doesn't seem to fit in this case. Creating such a process resource, e.g. for opening a transaction that accrues further details before being committed, is reasonable when there is a definite intent that it be committed sooner or later. Creating numerous such resources just in case they might be needed looks like a design mistake. > Another approach would be to make the selection criteria (logic) to > be sent along with the data. This would mean that the assumption I > made in 2> is no longer valid. JavaScript would be a particularly > good choice for this, since it can be run on the browser and on the > server. I have no clear opinion about shuttling business logic around between client and server. However, in this case it doesn't apply as the decisions to be made require access to the server's state. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
<snip> For all but the current year this Oscar has already been awarded, only movies from the current year are eligible to receive it. ... How do I ask the server which drop targets are possible while keeping in line with RESTful principles. </snip> GET /possible-targets/?oscar=best-picture-2009 returns a list of valid drop targets. mca http://amundsen.com/blog/ On Mon, May 18, 2009 at 17:05, Michael Schuerig <michael@...> wrote: > [MS:] > > > In other words, which one of the, say, 100 potential drop > > > targets are > > > really eligible, is a highly dynamic decision best left to > > > the server. > > On Monday 18 May 2009, Jared Hirsch wrote: > > Maybe I'm missing something, since you haven't described the > > application in detail, but I can't imagine a single interface that > > would present every nominee and every award for all Oscar categories. > > Please don't read too much into the concrete example. The example is > debatable as is the number of 100 possible actions/transitions. The > general problem exists notwithstanding. > > > Practically speaking, wouldn't you (for the sake of the user) have to > > significantly constrain the UI? > > I don't think so, although my concern is not really with the details of > UI design. Just consider: Over the course of a session with the > application, a member of the Academy looks at the details of 20 movies. > Finally, he hits the right one. Huge production, huge cast. He decides > that one of them is going to receive the award for Best Actor in a > Supporting Role and starts dragging the little Oscar icon across the > screen. > > The client application doesn't know anything of semantic relevance about > awards. Those business rules are left to the server. And it is only the > server who knows that the award, by now in mid-flight across the screen, > does not apply to directors, cutters, or female actors. 
> > In another, similar case, the guy starts dragging the Best Picture Oscar > towards a list of movies from various years. For all but the current > year this Oscar has already been awarded, only movies from the current > year are eligible to receive it. Again, that's nothing the client knows > anything about and the server knows everything. > > Now, as phony as the example is, the interaction itself is completely > reasonable. And my question, I think, is really simple: How do I ask the > server which drop targets are possible while keeping in line with > RESTful principles. > > Michael > > -- > Michael Schuerig > mailto:michael@... > http://www.schuerig.de/michael/
On Monday 18 May 2009, mike amundsen wrote: > GET /possible-targets/?oscar=best-picture-2009 > returns a list of valid drop targets. If you consider a list containing every movie shot in that year possible, I'd have to agree. I reckon there must be a better strategy. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
---------- Forwarded message ---------- From: Bediako George <bediakogeorge@...> Date: Mon, May 18, 2009 at 6:21 PM Subject: Re: [rest-discuss] Re: Giving the UI a REST To: Michael Schuerig <michael@...> On Mon, May 18, 2009 at 5:14 PM, Michael Schuerig <michael@...> wrote: > On Monday 18 May 2009, Bediako George wrote: > > Hullo Michael, > > I am not sure I fully grasp your situation, but I will attempt an > > answer nonetheless. My answer will be governed by the following > > assumptions. 1> Initially when the ~100 candidate state transitions > > were presented to the client, it was the server that defined this set > > of state transitions to the client. > > That's not the case. How would the client know the set of possible transitions if the server is not telling it what is possible or the client does not have business logic that it executes to figure this out on its own? > The possible transitions accumulate over time. I'd > say it's putting too much of a burden on the server to ask it to keep > track. Point taken, but this depends on the application. Will this application have tens, hundreds or thousands of users? > > Provided 2 is true and 1> holds, a record of that set could be stored > > on the server prior to sending it to the client, and a unique > > identifier (URI) that represents this set could be created on the > > server. The end effect is that for this particular client's request > > there will exist a resource (URI) on the server that keeps track of > > the entire list of candidates in the set sent to the client. > > That is a valid technique, I surmise, but it doesn't seem to fit in this > case. Creating such a process resource, e.g. for opening a transaction > that accrues further details before being committed, is reasonable when > there is a definite intent that it be committed sooner or later. > Creating numerous such resources just in case they might be needed > looks like a design mistake. Again this depends on the number of users. 
Also there is no need for these collections to be created as part of a transaction, if you are referring to a transaction in the open, long-lived database-connection rollback/commit sense. > > > > Another approach would be to make the selection criteria (logic) to > > be sent along with the data. This would mean that the assumption I > > made in 2> is no longer valid. JavaScript would be a particularly > > good choice for this, since it can be run on the browser and on the > > server. > > I have no clear opinion about shuttling business logic around between > client and server. However, in this case it doesn't apply as the > decisions to be made require access to the server's state. There is a modification to that approach which would involve sending the list of candidate states to the server in micro batches. The pseudo code would look something like this. 1> User chooses award. 2> Client creates transaction resource on server. Server provides unique URI. 3> Client adds all candidate states to the resource via the server-provided URI in batches of N, where N is less than the max allowed GET parameters size/length. 4> Client requests server to execute state transition business logic against the aforementioned URI and the User's award. This would eliminate the need for the server to keep around candidate state transition sets that will never be used, while at the same time providing the client with the ability to send "unlimited" size candidate state collections to the server. > > Michael > > -- > Michael Schuerig > mailto:michael@... > http://www.schuerig.de/michael/ > > > > ------------------------------------ > > Yahoo! Groups Links > > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly (p) 202.683.7486 (f) 703.563.6279
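The four-step micro-batch flow above could be sketched as the request sequence the client would issue. Everything here is hypothetical illustration: the endpoint paths, the assumed transaction URI `/transactions/42` returned by the server, and the batch size.

```python
# Sketch of the micro-batch protocol described above. All endpoint paths,
# the transaction URI, and the batch size are invented for illustration.

def chunk(candidates, n):
    """Split the candidate URI list into batches of at most n items."""
    return [candidates[i:i + n] for i in range(0, len(candidates), n)]

def plan_requests(award_uri, candidates, batch_size=25):
    """Return the sequence of HTTP requests the client would issue:
    step 2) create the transaction resource, step 3) add candidates
    in batches, step 4) ask the server to evaluate the award."""
    requests = [("POST", "/transactions", None)]  # step 2: server replies with a URI
    txn = "/transactions/42"                      # assumed URI returned by step 2
    for batch in chunk(candidates, batch_size):
        requests.append(("POST", txn, batch))     # step 3: add one batch of candidate URIs
    requests.append(("POST", txn + "/evaluation", award_uri))  # step 4: run the logic
    return requests
```

For 60 candidates and a batch size of 25, this plans five requests: one create, three batches, one evaluation.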
Hi Michael,
I'd POST a list of possible targets to the server and have it return
the valid drop targets as the result. I don't consider this to be
unRESTful in any way; I can't see any obvious resource that should be
visible but is hidden in this particular scenario.
In other words: Just use POST.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
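The server side of this "just use POST" approach — POST the candidate list, get back only the valid drop targets — could be sketched as a single filter function. The eligibility rules, field names, and data shapes below are invented for illustration; they are not from the thread.

```python
# Handler logic for something like: POST /eligible-targets with a body
# listing the award and the candidate drop targets. Rules and field
# names are hypothetical.

def filter_targets(award, candidates, already_awarded):
    """Return only the candidates that may receive this award: the
    target kind must match the award's kind, and the award must not
    already have been given for that year."""
    if (award["name"], award["year"]) in already_awarded:
        return []
    return [c for c in candidates if c["kind"] == award["kind"]]
```

A Best Picture award would then pass through movies but drop people, and return nothing at all once the award has been handed out for that year.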
On 18.05.2009, at 00:36, Michael Schuerig wrote:
>
>
>
> I'm pondering how a RESTful service can best support a highly
> interactive UI. I think HATEOAS precludes hard-coding application
> specific intelligence on the client. I'm not sure if it is
> reconcilable
> with shipping rules or other metadata to the client. And I have no
> good
> idea how to map to resources some of the things I'd need to find out
> from the server.
>
> A case in point from a movie database (my running example for
> pestering
> various mailing lists): There are movies and their associated
> participants, there are unaffiliated people, and there are awards.
> Everyone's supreme goal is to receive an award. So the user tries to
> help, grabs an award and starts to drag it around. But where to drop
> it?
> There are all kinds of potential targets around, but on closer
> scrutiny
> (requiring intelligence) only some of them fit. You just can't honor
> an
> actor with a Best Picture Oscar. And, after all, the award may already
> have been given to some other person/movie for the relevant year.
>
> In other words, which one of the, say, 100 potential drop targets are
> really eligible, is a highly dynamic decision best left to the server.
> So, given a list of candidate drop targets, how do I RESTfully ask the
> server to filter them and return only the real contenders?
>
> There are two additional constraints I can immediately think of. There
> are too many candidates to comfortably stuff into the query string
> of a
> GET request. Having to ask the server at all is too bad, performance-
> wise, but unavoidable (AFAICT). Several trips, e.g. for POSTing a new
> resource and then GETting the needed information from it in another
> request, is probably too much traffic.
>
> I'm very curious to read your suggestions.
>
> Michael
>
> --
> Michael Schuerig
> mailto:michael@schuerig.de
> http://www.schuerig.de/michael/
>
>
>
On Tuesday 19 May 2009, Stefan Tilkov wrote: > I'd POST a list of possible targets to the server and have it return > the valid drop targets as the result. I don't consider this to be > unRESTful in any way; I can't see any obvious resource that should be > visible but is hidden in this particular scenario. > > In other words: Just use POST. Hi Stefan, yes, that's what I'll do. After the discussion, which has been much longer than I expected, I'm convinced that simply POSTing is the best approach for my problem. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
I also agree that POST is the simplest way. I am not sure it is restful though. If I am not mistaken, using POST suggests the following to the restful observer. 1> Changing an existing resource, adding to a collection of resources. 2> Providing a set of data for the server to process. I am not sure that the scenario Michael described falls into these two categories, so I wonder if using POST is restful. I would be interested to hear a counter argument. Also I believe that using POST takes away the ability to cache, although the suggestion I made would not support caching either. I suppose you could set the response up for caching by including the appropriate Cache-Control or Expires header fields, however I am not sure the URI you would be POSTing to could be constructed in such a way that caching would make sense given the problem you described. This is not to say that POST cannot be used, I just doubt that this would be considered restful. Regards, Bediako On Tue, May 19, 2009 at 5:21 AM, Michael Schuerig <michael@...> wrote: > On Tuesday 19 May 2009, Stefan Tilkov wrote: > > I'd POST a list of possible targets to the server and have it return > > the valid drop targets as the result. I don't consider this to be > > unRESTful in any way; I can't see any obvious resource that should be > > visible but is hidden in this particular scenario. > > > > In other words: Just use POST. > > Hi Stefan, > > yes, that's what I'll do. After the discussion, which has been much longer > than I expected, I'm convinced that simply POSTing is the best approach > for my problem. > > Michael > > -- > Michael Schuerig > mailto:michael@... > http://www.schuerig.de/michael/ > > > > ------------------------------------ > > Yahoo! Groups Links > > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly (p) 202.683.7486 (f) 703.563.6279
On Tuesday 19 May 2009, Bediako George wrote: > I also agree that POST is the simplest way. I am not sure it is > restful though. If I am not mistaken using POST suggests the > following to the restful observer. > > 1> Changing an existing resource, adding to a collection of > resources. 2> Providing a set of data for the server to process. > > If I am not sure that the scenario Michael described falls into these > two categories, so I wonder if using POST is restful. I would be > interested to hear a counter argument. No counter argument from me. I posted the original question precisely because I couldn't think of a RESTful way, whereas a non-RESTful one, using POST was obvious from the start. Looking back, however, I should have emphasized even more that I was asking about principles, not for a solution to a practical problem. Although the two can coincide. > Also I believe that using POST takes away the ability to cache, Agreed. But I think there are cases where this is really an advantage. Caching is useful for requests that are expected to be repeated. Let's take the case of a relatively cheap computation that takes arbitrary parameters. It is unlikely that the same request is repeated and even if it is, the cost is low. Therefore the gain of caching is low. But there are costs of caching too. First there's the cost of caching the request itself. Then there's the opportunity cost of *not* caching (evicting) another request. > This is not to say that POST cannot be used, I just doubt that this > would be considered restful. Then, is this just a legitimate case where RESTful principles don't apply? Restricting REST to a defined scope isn't a bad thing at all. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Tue, May 19, 2009 at 5:57 AM, Michael Schuerig <michael@...> wrote: > On Tuesday 19 May 2009, Bediako George wrote: >> I also agree that POST is the simplest way. I am not sure it is >> restful though. If I am not mistaken using POST suggests the >> following to the restful observer. >> >> 1> Changing an existing resource, adding to a collection of >> resources. 2> Providing a set of data for the server to process. >> >> I am not sure that the scenario Michael described falls into these >> two categories, so I wonder if using POST is restful. I would be >> interested to hear a counter argument. > > No counter argument from me. I posted the original question precisely > because I couldn't think of a RESTful way, whereas a non-RESTful one, > using POST was obvious from the start. http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post "It isn’t RESTful to use POST for information retrieval when that information corresponds to a potential resource, because that usage prevents safe reusability and the network-effect of having a URI." [I don't think what you are talking about here is a potential resource, or if it was, you could give it a clean URI and GET it.] 'POST only becomes an issue when it is used in a situation for which some other method is ideally suited... The other methods are more valuable to intermediaries because they say something about how failures can be automatically handled and how intermediate caches can optimize their behavior. POST does not have those characteristics, but that doesn’t mean we can live without it. POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”' [Which might apply in this case.] [But then I am not a purist. And I still think it would be possible to do a GET in this situation with some creativity, but then I am also not willing to pursue the issue long enough to find one...]
Example of some creativity in a situation that might or might not give some clues to this one: http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons
On Tuesday 19 May 2009, Bob Haugen wrote: > > No counter argument from me. I posted the original question > > precisely because I couldn't think of a RESTful way, whereas a > > non-RESTful one, using POST was obvious from the start. > > http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post > > "It isn't RESTful to use POST for information retrieval when that > information corresponds to a potential resource, because that usage > prevents safe reusability and the network-effect of having a URI." Aha, blessing from above. Thanks for pointing this out. Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Tue, May 19, 2009 at 6:43 AM, Michael Schuerig <michael@...> wrote: > On Tuesday 19 May 2009, Bob Haugen wrote: >> > No counter argument from me. I posted the original question >> > precisely because I couldn't think of a RESTful way, whereas a >> > non-RESTful one, using POST was obvious from the start. >> >> http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post >> >> "It isn’t RESTful to use POST for information retrieval when that >> information corresponds to a potential resource, because that usage >> prevents safe reusability and the network-effect of having a URI." > > Aha, blessing from above. Thanks for pointing this out. Well, I hope I wasn't taking his words in vain. My own attitude is that REST is a set of constraints that has benefits, and if you relax one of the constraints, you will weaken or lose the corresponding benefits. But maybe in your case those benefits don't apply. So I don't think RESTfulness is all yes or all no or a Good Housekeeping Seal of Approval or a moral or religious issue.
I share the same view as well. Thank you for the links. They made for a good read on the train this morning. Regards, Bediako On Tue, May 19, 2009 at 8:05 AM, Bob Haugen <bob.haugen@...> wrote: > On Tue, May 19, 2009 at 6:43 AM, Michael Schuerig <michael@...> > wrote: > > On Tuesday 19 May 2009, Bob Haugen wrote: > >> > No counter argument from me. I posted the original question > >> > precisely because I couldn't think of a RESTful way, whereas a > >> > non-RESTful one, using POST was obvious from the start. > >> > >> http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post > >> > >> "It isn’t RESTful to use POST for information retrieval when that > >> information corresponds to a potential resource, because that usage > >> prevents safe reusability and the network-effect of having a URI." > > > > Aha, blessing from above. Thanks for pointing this out. > > Well, I hope I wasn't taking his words in vain. > > My own attitude is that REST is a set of constraints that has > benefits, and if you relax one of the constraints, you will weaken or > lose the corresponding benefits. But maybe in your case those > benefits don't apply. > > So I don't think RESTfulness is all yes or all no or a Good > Housekeeping Seal of Approval or a moral or religious issue. > > > ------------------------------------ > > Yahoo! Groups Links > > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly (p) 202.683.7486 (f) 703.563.6279
> This means that clients call queries to get state/views out, > then perform commands on that which are sent back to the server. In > other words, clients never ever send state back, only commands. I realize CQS is a big thing right now, particularly in the DDD/messaging sense, but I'm wondering why you're choosing to combine CQS and REST? Plus the style of CQS that Udi/Greg and others are talking about (which I think is influencing this) is a strict separation, maybe you could do the same but use REST just for the Q? Alternatively if it's just CQS in general that you need then can you not go for this by just carefully using the HTTP verbs (GET/OPTIONS vs PUT/POST/DELETE)? > There is a domain model on the server which interprets and executes this and all the domain logic around it. Isn't one approach just to use ROA but when it's a PUT you take the incoming representation, work out what commands to apply based on it, and then apply those to the domain model? You might also be interested in this thread of posts, I think some of them cover messaging over REST type designs: http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest-dialogues/
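The "CQS by verb" idea in the message above — safe HTTP methods for the Q, unsafe methods for the C — could be sketched as a tiny dispatcher. The handler registries and resource paths below are invented for illustration.

```python
# Route safe verbs (GET/HEAD/OPTIONS) to query handlers and unsafe verbs
# (POST/PUT/DELETE) to command handlers: CQS expressed through HTTP's own
# method semantics. All names and paths here are hypothetical.

SAFE = {"GET", "HEAD", "OPTIONS"}

def dispatch(method, path, body, queries, commands):
    """Queries must not change state; commands return only an
    acknowledgement, never a view of state."""
    if method in SAFE:
        return ("query", queries[path]())
    return ("command", commands[(method, path)](body))
```

A GET then reads state through a query handler, while a POST to a command URI mutates it and returns only an acknowledgement.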
> So I deem it a good idea to share with the community, which shall more or less help you deepen understanding of REST as well > as UX :) Great stuff, I'm wondering if you know of good example apps that you believe effectively combine REST/RPC?
This is a discussion we had with Laribee at my ReST workshop. If you want to implement CQS, this is an internal implementation detail. You generate your document by executing a query, and the altered state in the document being sent back gets processed to decide which commands to apply. As I highlighted at that point, it's more code, but it means that you decouple the existence of your document format representing the state of your resource, and the implementation details of how the state change actually operates. It's a difficult problem to solve, but one I really want to see covered in OR 2.1, especially as some users are already implementing something equivalent. Seb From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Colin Jack Sent: 19 May 2009 15:55 To: Rest List Subject: Re: [rest-discuss] CommandQuerySeparation and REST? > This means that clients call queries to get state/views out, > then perform commands on that which are sent back to the server. In > other words, clients never ever send state back, only commands. I realize CQS is a big thing right now, particularly in the DDD/messaging sense, but I'm wondering why you're choosing to combine CQS and REST? Plus the style of CQS that Udi/Greg and others are talking about (which I think is influencing this) is a strict separation, maybe you could do the same but use REST just for the Q? Alternatively if it's just CQS in general that you need then can you not go for this by just carefully using the HTTP verbs (GET/OPTIONS vs PUT/POST/DELETE)? > There is a domain model on the server which interprets and executes this and all the domain logic around it. Isn't one approach just to use ROA but when it's a PUT you take the incoming representation, work out what commands to apply based on it, and then apply those to the domain model?
You might also be interested in this thread of posts, I think some of them cover messaging over REST type designs: http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest-dialogue s/
Greg, What exactly in my preceding email leads you to believe that I have missed the benefits of CQS? Depending on the granularity of your resources, it is feasible to decouple state description documents from the queries that are used in their creation, and in the other direction to decide which command(s) get executed based on the (state+resource) being acted upon. Are you saying that it is impossible to gather a list of commands to execute based on changes in a document representing the state of a resource? I'm not arguing it is trivial, I'm arguing it is feasible because some people have been doing just that. As for the multiple ways in which data can change, could you provide an example you think is not achievable so we can have a baseline for discussion? I refuse the hypothesis that there is no way to map a rich domain with CQS and granular document-based resources. I argue that it's a complex problem I'd like to discuss more. Seb -----Original Message----- From: Greg Young [mailto:gregoryyoung1@...] Sent: 19 May 2009 17:39 To: Sebastien Lambla Subject: Re: [rest-discuss] CommandQuerySeparation and REST? I would like to suggest that you have missed most if not all of the benefit of CQS. The whole concept of "sending documents and hoping to figure out what possibly changed" is in its very nature flawed for all but the most trivial of systems. Said differently it is usually *impossible* to do this as there are multiple ways in which data can change. Cheers, Greg On Tue, May 19, 2009 at 12:33 PM, Sebastien Lambla <seb@...> wrote: > > > This is a discussion we had with Laribee at my ReST workshop. > > > > If you want to implement CQS, this is an internal implementation detail. > You generate your document by executing a query, and the altered state in > the document being sent back gets processed to decide which commands to > apply.
> > > > As I highlighted at that point, it’s more code, but it means that you > decouple the existence of your document format representing the state of > your resource, and the implementation details of how the state change > actually operates. > > > > It’s a difficult problem to solve, but one I really want to see covered in > OR 2.1, especially as some users are already implementing something > equivalent. > > > > Seb > > > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On > Behalf Of Colin Jack > Sent: 19 May 2009 15:55 > To: Rest List > Subject: Re: [rest-discuss] CommandQuerySeparation and REST? > > > >> This means that clients call queries to get state/views out, >> then perform commands on that which are sent back to the server. In >> other words, clients never ever send state back, only commands. > > I realize CQS is a big thing right now particularly in the DDD/messaging > sense, but I'm wondering why your choosing to combine CQS and REST? > > Plus the style of CQS that Udi/Greg others are talking about (which I think > is influencing this) is a strict seperation, maybe you could do the same but > use REST just for the Q? > > Alternatively if it's just CQS in general that you need then can you not go > for this by just carefully using the HTTP verbs (GET/OPTIONS vs > PUT/POST/DELETE)? > > >> There is a domain model on the server which interprets and executes this >> and all the domain logic around it. > > Isn't one approach just to use ROA but when its a PUT you take the incoming > representation, work out what commands to apply based on it, and then apply > those to the domain model? > > You might also be interested in this thread of posts, I think some of them > cover messaging over REST type designs: > > http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest-dialogue s/ > > > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
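The feasibility claim above — that the altered state in a returned document can be processed into commands — could be sketched as a field-level diff against current domain state. The field names and command names are invented for illustration; this shows the technique, not a general recipe (and the ambiguity Greg raises is real when one field change can mean several things).

```python
# Compare the representation the client sent back against the current
# domain state and emit one command per changed, mapped field. The
# field-to-command mapping is a hypothetical example.

FIELD_COMMANDS = {
    "archived": "ArchiveUser",
    "email": "ChangeUserEmail",
}

def commands_from_document(current, incoming):
    """Return (command, new_value) for each mapped field whose value differs."""
    return [(FIELD_COMMANDS[f], incoming[f])
            for f in FIELD_COMMANDS
            if incoming.get(f) != current.get(f)]
```

So flipping only `archived` in the returned document would yield a single `ArchiveUser` command rather than a blanket state overwrite.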
> If you want to implement CQS, this is an internal implementation detail.
Agreed, if you do it this way it's hidden.
> You generate your document by executing a query, and the altered state in
> the document being sent back gets processed to decide which commands to
> apply.
>
> As I highlighted at that point, it’s more code, but it means that you
> decouple the existence of your document format representing the state of
> your resource, and the
> implementation details of how the state change actually operates.
Agreed, but isn't all you need to be able to say "if the representation has
the value X but the current value (in the domain) is Y then perform action
Z" where Z could be anything from calling a method to creating a command to
be acted upon?
> It’s a difficult problem to solve, but one I really want to see covered in
> OR 2.1, especially as some users are already implementing something
> equivalent.
I'm probably missing something, but is it much more complex than this (C# of course):
HandleIncomingContract
    .When_contract_has(representationContract => representationContract.Archived)
    .If_domain_has(user => user.Archived == false)
    .Update_domain(user => user.Archive())
    .Map_properties_by_name_and_type() // or not, depending on way you've designed things
    .Complete_using(representationContract)
    .To_update(user);
Now I'm no fluent interface whizz, and it was just a spike, but the basic
idea is that representationContract is coming in and we're updating user,
which is a domain entity. Would this not handle most cases where we'd be
dealing with an update?
Obviously I realize the code in Update_domain could be nearly anything....
2009/5/19 Sebastien Lambla <seb@...>
> This is a discussion we had with Laribee at my ReST workshop.
>
>
>
> If you want to implement CQS, this is an internal implementation detail.
> You generate your document by executing a query, and the altered state in
> the document being sent back gets processed to decide which commands to
> apply.
>
>
>
> As I highlighted at that point, it’s more code, but it means that you
> decouple the existence of your document format representing the state of
> your resource, and the implementation details of how the state change
> actually operates.
>
>
>
> It’s a difficult problem to solve, but one I really want to see covered in
> OR 2.1, especially as some users are already implementing something
> equivalent.
>
>
>
> Seb
>
>
>
> *From:* rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
> *On Behalf Of *Colin Jack
> *Sent:* 19 May 2009 15:55
> *To:* Rest List
> *Subject:* Re: [rest-discuss] CommandQuerySeparation and REST?
>
>
>
>
>
> > This means that clients call queries to get state/views out,
> > then perform commands on that which are sent back to the server. In
> > other words, clients never ever send state back, only commands.
>
> I realize CQS is a big thing right now particularly in the DDD/messaging
> sense, but I'm wondering why you're choosing to combine CQS and REST?
>
> Plus the style of CQS that Udi/Greg and others are talking about (which I think
> is influencing this) is a strict separation, maybe you could do the same but
> use REST just for the Q?
>
> Alternatively if it's just CQS in general that you need then can you not go
> for this by just carefully using the HTTP verbs (GET/OPTIONS vs
> PUT/POST/DELETE)?
>
>
> > There is a domain model on the server which interprets and executes this
> and all the domain logic around it.
>
> Isn't one approach just to use ROA but when it's a PUT you take the incoming
> representation, work out what commands to apply based on it, and then apply
> those to the domain model?
>
> You might also be interested in this thread of posts, I think some of them
> cover messaging over REST type designs:
>
>
> http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest-dialogues/
>
>
>
>
>
http://www.slideshare.net/Wisec/http-parameter-pollution-a-new-category-of-web-attacks gets good from about slide 21 on. A number of the attacks seem to rely on injecting action params/values. So the main takeaway I got from this was, don't embed actions in URLs. Thoughts? Bill
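The core of parameter-pollution attacks is that components disagree about duplicated query parameters. As a small illustration (not taken from the slides), Python's standard parser keeps every occurrence, so "first wins" and "last wins" components see different actions for the same query string:

```python
# HTTP Parameter Pollution in miniature: the same query string yields a
# different "winning" value depending on which occurrence a component
# picks. Nothing here is specific to any one framework.
from urllib.parse import parse_qs

def first_wins(qs):
    return {k: v[0] for k, v in parse_qs(qs).items()}

def last_wins(qs):
    return {k: v[-1] for k, v in parse_qs(qs).items()}
```

If a front-end validates the first `action` but the back-end executes the last one, `?action=view&action=delete` passes validation as "view" and executes as "delete".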
On Tue, May 19, 2009 at 1:03 PM, Sebastien Lambla <seb@...> wrote: > > > As for the multiple ways in which data can change, could you provide an > example you think is not achievable so we can have a baseline for > discussion? I refuse the hypothesis that there is no way to map a rich > domain with CQS and granular document-based resources. I argue that it's a > complex problem I'd like to discuss more. > Suppose someone does a PUT on /Customer/XYZ/Address and the server receives an updated address. Assuming the domain model accepts two potential messages for updating an address: - CorrectCustomerAddress - CustomerHasMovedToNewAddress Which command message do you send based on the updated address received in the PUT? Darrel
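The point can be made concrete: both business intents arrive at the server as the same document delta, so a pure diff cannot choose between the two commands. The address data below is invented for illustration.

```python
# Whether the client is fixing a typo (CorrectCustomerAddress) or
# recording a move (CustomerHasMovedToNewAddress), the server sees the
# identical PUT body, and therefore the identical field diff. Only
# out-of-band intent (a distinct URI, media type, or an explicit command
# document) can disambiguate. Example data is hypothetical.

def address_diff(current, incoming):
    """Fields whose values changed in the incoming representation."""
    return {k: incoming[k] for k in incoming if incoming[k] != current.get(k)}

current = {"street": "1 Main St", "city": "Springfield"}
incoming = {"street": "2 Oak Avenue", "city": "Springfield"}  # correction? move?
```

Here `address_diff(current, incoming)` is `{"street": "2 Oak Avenue"}` in either scenario, which is exactly the ambiguity Darrel is pointing at.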
I don't think so. The main take-away is not to process requests from
untrusted sources with unverifiable data. I would use one-time URIs to
deal with attacks like this.
Subbu
On May 19, 2009, at 10:41 AM, Bill de hOra wrote:
>
>
> http://www.slideshare.net/Wisec/http-parameter-pollution-a-new-category-of-web-attacks
>
> gets good from about about slide 21 on. A number of the attacks seem
> to
> rely on injecting action params/values. So the main takeway I got from
> this was, don't embed actions in URLs. Thoughts?
>
> Bill
>
>
>
Hello, this is not meant as questioning the existence of Atom in any way, I am just curious what people think about the following: Suppose that all feed- or entry-level property elements (e.g. title, author, id...) did not have complex content but just simple String values, would it have been a reasonable choice to define a set of HTTP headers and link relations and use the HTTP header as the envelope instead of a new XML language (Atom)? The question (for me) behind this is to what extent it makes sense to stuff resource relationship information (see Link header) and key-value meta data in the HTTP header? Especially when it saves me from inventing a new XML language or using Atom plus simple extensions with its overhead (which I probably do not need in my domain). Is there an implementation-related 'maximum' of HTTP header lines that serves as a sort of natural boundary for using the HTTP header as a domain-specific envelope? Dunno, but it just feels odd at times to have the HTTP header and the Atom envelope itself when dealing with Atom-based implementations. Do I put my links in the header or in <link> elements... or both? Jan
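Jan's header option is essentially what the Link header provides. As a small illustration of how a client might read link relations out of a header instead of an Atom envelope, here is a rough sketch in Python; the regex covers only a simplified RFC 5988-style syntax (real headers allow more parameters and quoting than this handles), and the example header values are made up:

```python
import re

def parse_link_header(value):
    """Split a Link header into a rel -> URI mapping.

    Minimal sketch: handles only the common `<uri>; rel="name"` form,
    not the full parameter grammar of Web Linking.
    """
    links = {}
    for uri, params in re.findall(r'<([^>]*)>([^,]*)', value):
        m = re.search(r'rel="?([^";]+)"?', params)
        if m:
            links[m.group(1)] = uri
    return links

header = '</feed?page=2>; rel="next", </feed?page=1>; rel="prev"'
links = parse_link_header(header)
```

Whether intermediaries and toolkits treat such headers as reliably as a body is, as Jan suggests, the real question; header size limits vary by implementation rather than by any spec.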
I like POST /Orders/Cancelled?url=/Orders/333 It sort of feels like placing the order in the pile of cancelled orders. Viewing the cancelled orders is as simple as GET /Orders/Cancelled and re-opening the order is POST /Orders/Open?url=/Orders/333 It's not the only way but it works for me. Darrel On Fri, May 8, 2009 at 6:09 PM, Bill Burke <bburke@...> wrote: > > > Let's say I have an Order resource in an ecommerce Order Entry system. > How would I implement my service so that I can cancel an order rather > than delete it? One is to have the cancel state as part of the order. > Then I can just put a new representation with the cancelled state set to > true: > > PUT /orders/333 > content-type: application/xml > > <order id="333"> > <cancelled>false</cancelled> > ... > </order> > > Seems kinda heavy to me. > > Would it still be restful to define a "cancelled" URI that you could put > or post to to change the state? > > /orders/333/cancelled > > or > > /orders/333?cancel=true > > You don't even need to send data to change the state in this scenario. > But the problem with this from a pure RESTful standpoint is, isn't this > a mini-RPC? My thought at first is YES IT IS.... > > .... But, consider if you have cancelling as part of a HATEOAS > > <order id="333"> > <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/> > ... > </order> > > Now, I have a CANCEL link that if I follow changes the state of my > resource. Doesn't seem so RPCish now that I've embedded it as a link. > Maybe the answer is /orders/333/cancelled isn't very RESTful by itself, > but when combined with HATEOAS it is? > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
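Bill's last point is worth making concrete: once CANCEL arrives as a link, the client discovers the URI instead of constructing it. A minimal discovery sketch in Python, reusing his example document (only the rel value and href come from his post; the helper function is illustrative):

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

# Bill's order representation, with the CANCEL link embedded as hypermedia.
order_xml = """\
<order id="333" xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/>
</order>"""

def find_link(doc, rel):
    """Return the href of the first atom:link with the given rel, or None."""
    root = ET.fromstring(doc)
    for link in root.iter("{%s}link" % ATOM_NS):
        if link.get("rel") == rel:
            return link.get("href")
    return None

cancel_uri = find_link(order_xml, "CANCEL")
```

The client then hard-codes only the meaning of rel="CANCEL", not any URI structure.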
interesting. yes, it helps to keep action items out of the query string, but that's just the obvious example. much the same as SQL injection; when servers swallow the query string without due diligence (filtering, validating, tossing unknowns), bad things can happen. thanks for the pointer. mca http://amundsen.com/blog/ On Tue, May 19, 2009 at 13:41, Bill de hOra <bill@...> wrote: > > http://www.slideshare.net/Wisec/http-parameter-pollution-a-new-category-of-web-attacks > > gets good from about slide 21 on. A number of the attacks seem to > rely on injecting action params/values. So the main takeaway I got from > this was, don't embed actions in URLs. Thoughts? > > Bill > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
That, BTW, was exactly the point I was trying to make (of course very much influenced by that particular Roy posting): There is no resource hidden in your scenario, so I think POST is perfectly RESTful here. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On 19.05.2009, at 13:43, Michael Schuerig wrote: > On Tuesday 19 May 2009, Bob Haugen wrote: >>> No counter argument from me. I posted the original question >>> precisely because I couldn't think of a RESTful way, whereas a >>> non-RESTful one, using POST was obvious from the start. >> >> http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post >> >> "It isn’t RESTful to use POST for information retrieval when that >> information corresponds to a potential resource, because that usage >> prevents safe reusability and the network-effect of having a URI." > > Aha, blessing from above. Thanks for pointing this out. > > Michael > > -- > Michael Schuerig > mailto:michael@... > http://www.schuerig.de/michael/ > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
> Suppose someone does a PUT on /Customer/XYZ/Address and the server > receives an updated address. Assuming the domain model accepts two > potential messages for updating an address: > - CorrectCustomerAddress > - CustomerHasMovedToNewAddress > > Which one command message do you send based on the updated address > received in the PUT? I'd model it by specifying two different resources. Given a GET: <address for="/Customer/XYZ"> <action rel="http://actions.acme.org/address-correction" method="put" href="/Customer/XYZ/Address" /> <action rel="http://actions.acme.org/address-moved" method="post" href="/Customer/XYZ" /> <content> <line1>Somewhere</line1> </content> </address> The UA would process the document, discover two links it can follow with any modifications to the document it wants to submit, and present the user with the option of following either link. How the UA presents the two options is up to how much understanding is hard-coded in the client (for a rel value). What we then have is the same representation being sent to two resources, with various semantics. Another option is to make that kind of decision based on the actual content of the mediatype. The typical scenario would be in html forms. POST /Customer/XYZ/Address line1=Somewhere;reason=[correction|moving] Another option in html is to simply serve two different pages: GET /Customer/XYZ/Address <a href="Customer/XYZ/Address/Moving.html">I'm moving</a> or <a href="Customer/XYZ/Address/Correction">There was a mistake</a> Each pointing the result of the form to the correct URI. You can have the same *representation* you wish to change used by multiple *resources*. I don't see why you can't create as many resources as you need, as intent is carried by the link being followed. Seb
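The UA-side discovery step Seb describes can be sketched in a few lines of Python over his example document (the rel URIs are his hypothetical acme.org ones; the helper name is made up):

```python
import xml.etree.ElementTree as ET

address_xml = """\
<address for="/Customer/XYZ">
  <action rel="http://actions.acme.org/address-correction" method="put"
          href="/Customer/XYZ/Address" />
  <action rel="http://actions.acme.org/address-moved" method="post"
          href="/Customer/XYZ" />
  <content><line1>Somewhere</line1></content>
</address>"""

def discover_actions(doc):
    """Map each action's rel URI to its (method, href) pair."""
    root = ET.fromstring(doc)
    return {a.get("rel"): (a.get("method"), a.get("href"))
            for a in root.findall("action")}

actions = discover_actions(address_xml)
# The UA can now present both options and follow whichever one is chosen:
method, href = actions["http://actions.acme.org/address-moved"]
```

Note that the client resolves everything from the document; nothing about the URI structure is hard-coded.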
Hi Darrel, PUTting the state remains the 'right' thing to do. Regarding the perceived overhead, remember that REST is designed to optimize for efficiency but for federated evolution (among others). Use the architectural goals of REST to judge your solution. Alternatively you could PATCH the order but would have to do some specs work inside your domain for the non-standard PATCH and the format that expresses the delta. In addition, verify that the order status change is in fact idempotent (can you PUT the order state N-times and still have the meaning of the single PUT?) - if it is not, you must use POST anyway. Often, such cancelations are expressed using explicit cancelation requests (see UBL) instead of business object state changes. So you might even have: POST /orderManager <OrderCancelation> <orderId>123456</orderId> </OrderCancelation> HTH, Jan On May 19, 2009, at 8:36 PM, Darrel Miller wrote: > I like > > POST /Orders/Cancelled?url=/Orders/333 > > It sort of feels like placing the order in the pile of canceled > orders. Viewing the cancelled orders is as simple as > > GET /Orders/Cancelled > > and re-opening the order is > > POST /Orders/Open?url=/Orders/333 > > It's not the only way but it works for me. > > Darrel > > On Fri, May 8, 2009 at 6:09 PM, Bill Burke <bburke@...> wrote: >> >> >> Let's say I have an Order resource in a ecommerce Order Entry system. >> How would I implement my service so that I can cancel an order rather >> than delete it? One is to have the cancel state as part of the order. >> THen I can just put a new representation with the cancelled state >> set to >> true: >> >> PUT /orders/333 >> content-type: application/xml >> >> <order id="333"> >> <cancelled>false</cancelled> >> ... >> </order> >> >> Seems kinda heavy to me. >> >> Would it still be restful to define a "cancelled" URI that you >> could put >> or post to to change the state?
>> >> /orders/333/cancelled >> >> or >> >> /orders/333?cancel=true >> >> You don't even need to send data to change the state in this >> scenario. >> But the problem with this from a pure RESTful standpoint is, isn't >> this >> a mini-RPC? My thought at first is YES IT IS.... >> >> .... But, consider if you have cancelling as part of a HATEOAS >> >> <order id="333"> >> <atom:link rel="CANCEL" href="http://example.com/orders/333/ >> cancelled"/> >> ... >> </order> >> >> Now, I have a CANCEL link that if I follow changes the state of my >> resource. Doesn't seem so RPCish now that I've embedded it as a link. >> Maybe the answer is /orders/333/cancelled isn't very RESTful by >> itself, >> but when combined with HATEOAS it is? >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > >
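Jan's idempotency check (can you repeat the request N times and keep the meaning of a single request?) is easy to demonstrate with in-memory stand-ins for the two styles; this is a toy sketch, not HTTP:

```python
# Toy store: full-state PUT is idempotent, POSTing a cancelation is not.
orders = {}
cancelations = []

def put_order(order_id, state):
    """PUT semantics: replace the whole resource state."""
    orders[order_id] = state          # repeating this changes nothing further

def post_cancelation(order_id):
    """POST semantics: each request creates another cancelation record."""
    cancelations.append(order_id)     # repeating this keeps adding records

put_order("333", {"cancelled": True})
put_order("333", {"cancelled": True})   # second PUT: same final state
post_cancelation("333")
post_cancelation("333")                 # second POST: a second record
```

If a repeated POST must not create a duplicate, the server needs extra machinery to deduplicate it, which is why Jan reserves POST for changes that genuinely are not idempotent.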
Thanks for the post Stefan. I was lacking the context needed to understand where you were going. Reading Roy's post helped quite a bit. Regards, Bediako On Tue, May 19, 2009 at 3:01 PM, Stefan Tilkov <stefan.tilkov@...> wrote: > That, BTW, was exactly the point I was trying to make (of course very > much influenced by that particular Roy posting): There is no resource > hidden in your scenario, so I think POST is perfectly RESTful here. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > On 19.05.2009, at 13:43, Michael Schuerig wrote: > > > On Tuesday 19 May 2009, Bob Haugen wrote: > >>> No counter argument from me. I posted the original question > >>> precisely because I couldn't think of a RESTful way, whereas a > >>> non-RESTful one, using POST was obvious from the start. > >> > >> http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post > >> > >> "It isn’t RESTful to use POST for information retrieval when that > >> information corresponds to a potential resource, because that usage > >> prevents safe reusability and the network-effect of having a URI." > > > > Aha, blessing from above. Thanks for pointing this out. > > > > Michael > > > > -- > > Michael Schuerig > > mailto:michael@... > > http://www.schuerig.de/michael/ -- Bediako George Partner - Lucid Technics, LLC Think Clearly (p) 202.683.7486 (f) 703.563.6279
Ech - meant to write 'RESt is *NOT* designed to optimize for *BANDWIDTH* efficiency' Sorry, Jan On May 19, 2009, at 8:52 PM, Jan Algermissen wrote: > Hi Darrel, > > PUTting the state remains the 'right' thing to do. Regarding the > perceived overhead, remember that REST is designed to optimize for > efficiency but for federated evolution (among others). Use the > archirectural goals of REST to judge your solution. > > Alternatively you could PATCH the order but would have to do some > specs work inside your domain for the non-standard PATCH and the > format that expresses the delta. > > In addition, verify that the order status change is in fact idempotent > (can you PUT the order state N-times and still have the meaning of the > single PUT?) - if it is not, you must use POST anyway. Often, such > cancelations are expressed using explicit cancelation requests (see > UBL) instead of business object state changes. So you might even have: > > > POST /orderManager > > <OrderCancelation> > <orderId>123456</orderId> > </OrderCancelation> > > HTH, > > Jan > > > > On May 19, 2009, at 8:36 PM, Darrel Miller wrote: > >> I like >> >> POST /Orders/Cancelled?url=/Orders/333 >> >> It sort of feels like placing the order in the pile of canceled >> orders. Viewing the cancelled orders is as simple as >> >> GET /Orders/Cancelled >> >> and re-opening the order is >> >> POST /Orders/Open?url=/Orders/333 >> >> It's not the only way but it works for me. >> >> Darrel >> >> On Fri, May 8, 2009 at 6:09 PM, Bill Burke <bburke@...> wrote: >>> >>> >>> Let's say I have an Order resource in a ecommerce Order Entry >>> system. >>> How would I implement my service so that I can cancel an order >>> rather >>> than delete it? One is to have the cancel state as part of the >>> order. >>> THen I can just put a new representation with the cancelled state >>> set to >>> true: >>> >>> PUT /orders/333 >>> content-type: application/xml >>> >>> <order id="333"> >>> <cancelled>false</cancelled> >>> ... 
>>> </order> >>> >>> Seems kinda heavy to me. >>> >>> Would it still be restful to define a "cancelled" URI that you >>> could put >>> or post to to change the state? >>> >>> /orders/333/cancelled >>> >>> or >>> >>> /orders/333?cancel=true >>> >>> You don't even need to send data to change the state in this >>> scenario. >>> But the problem with this from a pure RESTful standpoint is, isn't >>> this >>> a mini-RPC? My thought at first is YES IT IS.... >>> >>> .... But, consider if you have cancelling as part of a HATEOAS >>> >>> <order id="333"> >>> <atom:link rel="CANCEL" href="http://example.com/orders/333/ >>> cancelled"/> >>> ... >>> </order> >>> >>> Now, I have a CANCEL link that if I follow changes the state of my >>> resource. Doesn't seem so RPCish now that I've embedded it as a >>> link. >>> Maybe the answer is /orders/333/cancelled isn't very RESTful by >>> itself, >>> but when combined with HATEOAS it is? >>> >>> -- >>> Bill Burke >>> JBoss, a division of Red Hat >>> http://bill.burkecentral.com >>> >>> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On May 19, 2009, at 9:01 PM, Sebastien Lambla wrote: <snip/> > > You can have the same *representation* you wish to change used by > multiple > *resources*. I don't see why you can't create as many resources as > you need, > as intent is carried by the link being followed. > Yes. Especially since resources are cheap while representations are not. It is much harder to do a new representation (spec work, serialization, deserialization, versioning) than to add an additional link relation. In general, if an event has domain significance (e.g. a customer moves) you can make up a POST-accepting resource for it that implements the appropriate action to take upon the domain event. This is somewhat equivalent to named event channels in message based systems. If you need additional data to go with the POST (e.g. reason for moving) you could consider stuffing your data object format (the address format) into an Atom envelope and define an Atom extension for transmitting the movementReason. That way, you still need not define an additional representation. (Though one could argue that information that is not entry processing meta data does not belong in an Atom extension...) Jan > Seb
I don't disagree with you, it's a matter of tradeoffs. Designing a ReST architecture requires the client being instructed in what to do next in the media type definition, aka your document format. This requires a lot of engineering and thought in how to design those, including how the interaction can be driven by the server and how the links are to be followed, which makes creating them expensive, but hopefully much more loosely coupled, reusable and durable. If you package the intent and the semantics of an operation within a message and POST to a queue, you may breach many constraints of ReST in the process, which is a tradeoff each developer has to evaluate for themselves. More importantly, by defining multiple messages carrying intent, you force the client to understand those, which is coupling the client to the details of what commands exist, which means the client needs an understanding of each of those commands. I compare that to the case of understanding the media type to send a representation, and understanding how to follow links, and I'd argue that the latter has lower coupling, with higher implementation cost. Seb -----Original Message----- From: Greg Young [mailto:gregoryyoung1@...] Sent: 19 May 2009 20:31 To: Sebastien Lambla Subject: Re: [rest-discuss] CommandQuerySeparation and REST? Yes things like this can be done ... but when you start going down this path (everything becomes actions like these) don't you really lose much of what you had to benefit from in the beginning? This is why I was saying I prefer to just use a pipeline on the write side. Cheers, Greg On Tue, May 19, 2009 at 3:01 PM, Sebastien Lambla <seb@...> wrote: > > >> Suppose someone does a PUT on /Customer/XYZ/Address and the server >> receives an updated address.
Assuming the domain model accepts two >> potential messages for updating an address: >> - CorrectCustomerAddress >> - CustomerHasMovedToNewAddress >> >> Which one command message do you send based on the updated address >> received in the PUT? > > I'd model it by specifying two different resources. Given a GET: > > <address for="/Customer/XYZ"> > <action rel="http://actions.acme.org/address-correction" method="put" > href="/Customer/XYZ/Address" /> > <action rel="http://actions.acme.org/address-moved" method="post" > href="/Customer/XYZ" /> > <content> > <line1>Somewhere</line1> > </content> > </address> > > The UA would process the document, discover two links it can follow with any > modifications to the document it wants to submit, and present the user with > the option of following either links. How the UA presents the two options is > up to how much understanding is hard-coded in the client (for a rel value). > > What we then have is the same representation being sent to two resources, > with various semantics. > > Another option is to make that kind of decisions based on the actual content > of the mediatype. The typical scenario would be in html forms. > > POST /Customer/XYZ/Address > > line1=Somewhere;reason=[correction|moving] > > Another option in html is to simply serve two different pages: > > GET /Customer/XYZ/Address > > <a href="Customer/XYZ/Address/Moving.html">I'm moving</a> or <a > href="Customer/XYZ/Address/Correction">There was a mistake</a> > > Each pointing the result of the form to the correct URI. > > You can have the same *representation* you wish to change used by multiple > *resources*. I don't see why you can't create as many resources as you need, > as intent is carried by the link being followed. > > Seb > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
On Tue, May 19, 2009 at 3:01 PM, Sebastien Lambla <seb@...> wrote: > > I'd model it by specifying two different resources. <snip solution that models commands as representations> Sounds like a very reasonable solution. However, it missed my point. You said, > Are you saying that it is impossible to gather a list of commands to execute > based on changes in a document representing the state of a resource? If, as you suggest, you only have a single representation with a set of changes there will be some "commands" that are indistinguishable. I believe this is why Greg said: > The whole concept of "sending documents and hoping to figure out what > possibly changed" is in its very nature flawed for all but the most > trivial of systems. I do think if you are going to do CQS with REST, the client needs to initiate the command. Darrel
> If, as you suggest you only have a single representation with a set of > changes there will be some "commands" that are indistinguishable. True, but in my experience in cases where this sort of context matters you find that out in discussions with the business and they end up being requirements, which, if we do choose to go for REST, can be fed into the design of our resources and representations (as Seb has done in his example). 2009/5/19 Darrel Miller <darrel.miller@...> > > > On Tue, May 19, 2009 at 3:01 PM, Sebastien Lambla <seb@...> > wrote: > > > > I'd model it by specifying two different resources. > > <snip solution that models commands as representations> > > Sounds like a very reasonable solution. However, it missed my point > You said, > > > Are you saying that it is impossible to gather a list of commands to > execute > > based on changes in a document representing the state of a resource? > > If, as you suggest you only have a single representation with a set of > changes there will be some "commands" that are indistinguishable. > > I believe this is why Greg said : > > > The whole concept of "sending documents and hoping to figure out what > > possibly changed" is in its very nature flawed for all but the most > > trivial of systems. > > I do think if you are going to do CQS with REST, the client needs to > initiate the command. > > Darrel > > >
> If you need additional data to go with the POST (e.g. reason for > moving) you could consider stuffing your data object format (the > address format) into an ATOM envelope and define an ATOM extension for > transmiting the movementReason. That way, you still need not define an > additional representation. I'm quite uncomfortable with the idea of wrapping things in ATOM to carry a second envelope, if you're into xml land you can do the same with namespaces. If you're going to go the extension way, why not add the data to your original mediatype? Because you've defined a relationship (move) between the current resource representation (an address) and a link action (the url /customer using the method POST), you can implement a client that reflect on those to implement extensions. Here, our rel=move specifies the relationship to the URI. A client is quite free to interpret, globally for your app, that xml extensions can be used (and presented to the user) where possible. Aka xmlns:move describes the <move:Reason> element as a child of <Address>. The client knows it can ask for the reason by having a repository mapping relationships to extensions. You'd end up with the following, crafted automatically, with a lot of nice code reuse: <Address for="/Customer/XYZ" xmlns="http://schemas.acme.org/Customer/Address" xmlns:move="http://schemas.acme.org/Customer/Address/Move"> <Content>...</Content> <move:Reason>God knows</move:Reason> </Address> Seb
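A namespace-aware parser picks Seb's extension element out without the base format knowing about it. A small sketch in Python, assuming his hypothetical schemas.acme.org namespaces:

```python
import xml.etree.ElementTree as ET

NS = {
    "addr": "http://schemas.acme.org/Customer/Address",
    "move": "http://schemas.acme.org/Customer/Address/Move",
}

doc = """\
<Address for="/Customer/XYZ"
         xmlns="http://schemas.acme.org/Customer/Address"
         xmlns:move="http://schemas.acme.org/Customer/Address/Move">
  <Content>Somewhere</Content>
  <move:Reason>God knows</move:Reason>
</Address>"""

root = ET.fromstring(doc)
# Base-format content and the extension live in separate namespaces:
content = root.findtext("addr:Content", namespaces=NS)
reason = root.findtext("move:Reason", namespaces=NS)
```

A client that has no mapping for the move namespace simply ignores the extension element, which is the must-ignore behaviour Seb's repository approach relies on.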
> <snip solution that models commands as representations> > Sounds like a very reasonable solution. However, it missed my point Let me try to clarify the solution I offered. One representation (the xml document containing the <Address> element) can be accepted by two different resources (the existing address and the customer), using two different links (one for replacing the address, the other one for asking the customer to add a new address). The difference here is that I have multiple resources, that happen to have the same representation, but generate different commands. > If, as you suggest you only have a single representation with a set of > changes there will be some "commands" that are indistinguishable. I suggest that one media type may have multiple elements describing the state of the resource, and that two different resources may have two different representations while still using the same media type. I suggest as well that multiple resources can operate upon the same domain model if they wish to; this is completely irrelevant to the ReST side of things, and is an implementation detail that, if leaked, would break ReST constraints. > I do think if you are going to do CQS with REST, the client needs to > initiate the command. The client follows links prepared by the server. If a client knows about "commands" and where to send them in advance, you lose the value you get by giving URIs to things. It doesn't mean you can't model every command as a URI accepting a POST and not accepting a GET, but tunnelling everything through POST is usually a sign that you're shy of creating new resources for new things that need to be changed individually, or you don't want the server to control the interaction. If you don't need the benefits of exposing resources on a ReST interface and just need a POX way to channel messages to a unique endpoint, then do so.
From my understanding of the scenario you propose, where a client needs to initiate a command it knows about, where the interaction is inside the message and opaque to intermediaries, one cannot "do CQS with REST" because one would be breaking ReST constraints. As for the difference between command messages and documents with various URIs and link relationships, it comes from the simple fact that I wish my applications to be layered, so that the semantics of following links, interpreting relationships to build representations should be reused across the codebase, rather than imperatively coded in the client. This lets the server change both the interaction model and the semantics of an operation without changing the client. If you have the luxury of controlling the client, you probably don't need the secularity of a ReST interface. Seb
> Then I believe you are saying CQS and REST cannot exist together. Your reformulation is correct, I'm saying that as far as the client is concerned, CQS and ReST cannot coexist. My initial response was in the context of building a ReSTful interface on top of CQS, not how to carry CQS to the client. Here, the client is the ReST interface. > A huge part of CQS is carrying forward the context of the original > operation while you try really hard to ignore the context. Even if you > do something like documents (hoping to the source events at the > server) this will only work in extremely naive circumstances. As far as the ReST interface presented to the client is concerned, creating new resources where different contexts exist makes most circumstances very naive, on purpose. > the client knows what operations are actually supported (it being the > messages contain data as well wouldn't it still need to know about > them?)? > it is the one that represents the behaviors the two are > conceptually coupled. the only time they are not is when you do not > have a behavior oriented UI (i.e. they are data oriented). using cqs > your interface should be behavior oriented (not data oriented). I think we may have a misunderstanding. The client to the ReST interface should be document-oriented to be a consumer of a ReST architecture. Context is only provided through the server assigning new resources and new URIs, the client should have no knowledge of this. A good ReST client has local state (the cache) it uses to navigate links created by the server (who owns and retains the context in which it created a resource), and only knows how to operate on the documents it has received or wishes to create. In that sense, a good ReST client is data-centric, as much as an ATOM blog client or a web browser is data-centric. There is inherently no behaviour defined beyond what the data and interaction model the media type defines. 
In this discussion, I assume the ReST interface operates as the client to a CQS model, and the UA operates as a client to the ReST interface. Anything that blurs those lines would hurt both models. Seb
On Tue, May 19, 2009 at 5:16 PM, Sebastien Lambla <seb@...> wrote: > One representation (the xml document containing the <Address> element) can > be accepted by two different resources (the existing address and the > customer), using two different links (one for replacing the address, the > other one for asking the customer to add a new address). Yes, I agree that is a nice way of indicating the reason for change. > > The difference here is that I have multiple resources, that happen to have > the same representation, but generate different commands. > Cool. > >> I do think if you are going to do CQS with REST, the client needs to >> initiate the command. > > The client follows links prepared by the server. If a client knows about > "commands" and where to send them in advance, you lose the value you get > by giving URIs to things. The client does need some comprehension of the fact that there are multiple links that can be followed to indicate the different reasons for changing an address. I agree that the client does not need to know that the two URLs will be mapped to different commands, just that the distinction must start at the client. > From my understanding of the scenario you propose, where a client needs to > initiate a command it knows about, where the interaction is inside the > message and opaque to intermediaries, one cannot "do CQS with REST" because > one would be breaking ReST constraints. > I think the problem is more a case of my poor choice of language than me proposing that a client needs to know about commands. Thanks for the clarification, I understand much better where you are coming from, Darrel
Subbu Allamaraju wrote: > I don't think so. The main take-away is not to process requests from > untrusted sources with unverifiable data. I would use one-time URIs to > deal with attacks like this. So don't use query parameters at all? Interesting. Looking at the deck again it still seems the attack severity is related to putting actions into URLs as I can get the remote server to do something other than show me data. Bill
Oh no. Irrespective of how the data is submitted (query params, matrix params, or even path segments), the server needs to validate the request. One way is to attach a signature or a hash of the data encoded in the URI. If a malicious user tweaks the URI, it will fail validation. This alone won't fix all possible attack vectors, but this can be one of the measures. Subbu > > So don't use query parameters at all? Interesting. Looking at the deck > again it still seems the attack severity is related to putting actions > into URLs as I can get the remote server to do something other than > show > me data. > > Bill --- http://subbu.org
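Subbu's signature idea can be sketched as a canonicalize-then-HMAC step. Everything here (the secret, the parameter names) is illustrative, and a real deployment would also bind the signature to the path, method, and an expiry:

```python
import hashlib
import hmac
from urllib.parse import parse_qsl, urlencode

SECRET = b"server-side-secret"   # hypothetical key, never sent to clients

def sign(params):
    """Append an HMAC of the canonical (sorted) query string."""
    qs = urlencode(sorted(params.items()))
    sig = hmac.new(SECRET, qs.encode(), hashlib.sha256).hexdigest()
    return qs + "&sig=" + sig

def verify(query):
    """Return False if any parameter was tweaked after signing."""
    pairs = dict(parse_qsl(query))
    sig = pairs.pop("sig", "")
    qs = urlencode(sorted(pairs.items()))
    expected = hmac.new(SECRET, qs.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

url = sign({"order": "333", "action": "view"})
ok = verify(url)                                   # untouched URI passes
tampered_ok = verify(url.replace("view", "delete"))  # tweaked URI fails
```

Sorting the pairs before hashing matters: it makes the signature independent of parameter order, closing one of the reordering tricks the slides describe.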
>>>>> "Bill" == Bill de hOra <bill@...> writes:
Bill> http://www.slideshare.net/Wisec/http-parameter-pollution-a-new-category-of-web-attacks
Bill> gets good from about about slide 21 on. A number of the
Bill> attacks seem to rely on injecting action params/values. So
Bill> the main takeaway I got from this was, don't embed actions in
Bill> URLs. Thoughts?
In particular:
1. How should one handle multiple occurrences of the same key?
In my applications a later key overrides the earlier.
2. Precedence, from lowest to highest:
Cookie -> GET -> POST
3. When are parameters validated: it seems some applications just
validate the individual request fields (slide 27), not the final
result if you are using the software from a vendor who shall remain
unnamed, and who does weird things.
The lesson is obviously to validate input at the right point...
Otherwise I'm a bit at a loss: you always validate your parameters,
and you check permissions. It seems his case only works for
applications that don't do that, i.e. that have trusted urls or so, or
trusted sources of input. Which is just crazy.
Same with his rewriting, that can never become an issue if you just
validate your input and check permissions. Basic stuff.
The sad thing is that it probably works against a lot of apps that
don't do any of these things because you have to be liberal in what you
accept...
I loved slide 41, that's pure science fiction. I mean you have
always those descriptions about alien devices that are probed, wake up
and somehow take over. How that would work given the probe is passive
and doesn't execute any code always left me wondering.
But this is a nice example: you just trigger a bug in the probe and
inject yourself into the prober. Very simple and given that hardly
anyone truly validates their input, possible till the end of time.
--
Cheers,
Berend de Boer
Colin Jack wrote: > I realize CQS is a big thing right now particularly in the DDD/messaging > sense, but I'm wondering why you're choosing to combine CQS and REST? My hope is that the REST part will help with integration, easier security (authorization), and a general improvement of the distributed part of the app. I am also having a hard time seeing any realistic alternatives. > Plus the style of CQS that Udi/Greg and others are talking about (which I > think is influencing this) is a strict separation, maybe you could do > the same but use REST just for the Q? And what would I do for the C? > Alternatively if it's just CQS in general that you need then can you not > go for this by just carefully using the HTTP verbs (GET/OPTIONS vs > PUT/POST/DELETE)? Yes, I will. Just wondering if others had come across this before and what their experience was. > > There is a domain model on the server which interprets and executes > this and all the domain logic around it. > > Isn't one approach just to use ROA but when it's a PUT you take the > incoming representation, work out what commands to apply based on it, > and then apply those to the domain model? That would seem to lose the "why is this being sent" part of CQS, and make it more data-driven, which I don't want. > You might also be interested in this thread of posts, I think some of > them cover messaging over REST type designs: > > http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest-dialogues/ Thanks for the links! I'll look them over. thanks, Rickard
Subbu Allamaraju wrote: > Oh no. Irrespective of how the data is submitted (query params, matrix > params, or even path segments), the server needs to validate the > request. One way is to attach a signature or a hash of the data encoded > in the URI. If a malicious user tweaks the URI, it will fail validation. > This alone won't fix all possible attack vectors, but this can be one of > the measures. I understand techniques like an HMAC-SHA1 of the URL/parts, but I don't see how that helps here, since the attack is on how implementations evaluate parameters; e.g. what your server does to handle multiple occurrences of the same key isn't necessarily prevented by a hash/nonce. Bill
Well, this isn't a problem of just bandwidth efficiency. It's CPU performance on client and server (having to marshal to and from XML or JSON or whatever). And a code maintainability problem, as your server code would be much more complicated. So far, in the REST presentations I've done, the average developer is more concerned about productivity than anything else. Jan Algermissen wrote: > > > > Ech - meant to write 'REST is *NOT* designed to optimize for > *BANDWIDTH* efficiency' > > Sorry, > Jan > > On May 19, 2009, at 8:52 PM, Jan Algermissen wrote: > > > Hi Darrel, > > > > PUTting the state remains the 'right' thing to do. Regarding the > > perceived overhead, remember that REST is designed to optimize for > > efficiency but for federated evolution (among others). Use the > > architectural goals of REST to judge your solution. > > > > Alternatively you could PATCH the order but would have to do some > > specs work inside your domain for the non-standard PATCH and the > > format that expresses the delta. > > > > In addition, verify that the order status change is in fact idempotent > > (can you PUT the order state N-times and still have the meaning of the > > single PUT?) - if it is not, you must use POST anyway. Often, such > > cancelations are expressed using explicit cancelation requests (see > > UBL) instead of business object state changes. So you might even have: > > > > > > POST /orderManager > > > > <OrderCancelation> > > <orderId>123456</orderId> > > </OrderCancelation> > > > > HTH, > > > > Jan > > > > > > > > On May 19, 2009, at 8:36 PM, Darrel Miller wrote: > > > >> I like > >> > >> POST /Orders/Cancelled?url=/Orders/333 > >> > >> It sort of feels like placing the order in the pile of canceled > >> orders. Viewing the cancelled orders is as simple as > >> > >> GET /Orders/Cancelled > >> > >> and re-opening the order is > >> > >> POST /Orders/Open?url=/Orders/333 > >> > >> It's not the only way but it works for me. 
> >> > >> Darrel > >> > >> On Fri, May 8, 2009 at 6:09 PM, Bill Burke <bburke@... > <mailto:bburke%40redhat.com>> wrote: > >>> > >>> > >>> Let's say I have an Order resource in a ecommerce Order Entry > >>> system. > >>> How would I implement my service so that I can cancel an order > >>> rather > >>> than delete it? One is to have the cancel state as part of the > >>> order. > >>> THen I can just put a new representation with the cancelled state > >>> set to > >>> true: > >>> > >>> PUT /orders/333 > >>> content-type: application/xml > >>> > >>> <order id="333"> > >>> <cancelled>false</cancelled> > >>> ... > >>> </order> > >>> > >>> Seems kinda heavy to me. > >>> > >>> Would it still be restful to define a "cancelled" URI that you > >>> could put > >>> or post to to change the state? > >>> > >>> /orders/333/cancelled > >>> > >>> or > >>> > >>> /orders/333?cancel=true > >>> > >>> You don't even need to send data to change the state in this > >>> scenario. > >>> But the problem with this from a pure RESTful standpoint is, isn't > >>> this > >>> a mini-RPC? My thought at first is YES IT IS.... > >>> > >>> .... But, consider if you have cancelling as part of a HATEOAS > >>> > >>> <order id="333"> > >>> <atom:link rel="CANCEL" href="http://example.com/orders/333/ > <http://example.com/orders/333/> > >>> cancelled"/> > >>> ... > >>> </order> > >>> > >>> Now, I have a CANCEL link that if I follow changes the state of my > >>> resource. Doesn't seem so RPCish now that I've embedded it as a > >>> link. > >>> Maybe the answer is /orders/333/cancelled isn't very RESTful by > >>> itself, > >>> but when combined with HATEOAS it is? > >>> > >>> -- > >>> Bill Burke > >>> JBoss, a division of Red Hat > >>> http://bill.burkecentral.com <http://bill.burkecentral.com> > >>> > >>> > >> > >> > >> ------------------------------------ > >> > >> Yahoo! Groups Links > >> > >> > >> > > > > > > > > ------------------------------------ > > > > Yahoo! 
Groups Links > > > > > > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On May 20, 2009, at 3:37 PM, Bill Burke wrote: > > So far, in the REST presentations I've done the average developer is > more concerned about productivity than anything else. > Very true. REST trades immediate developer productivity for federated evolvability; OTOH, programmers are happy to go a long way to make changing non-distributed apps easy, so why should they not be willing to do the same for networked systems? Jan > Jan Algermissen wrote: >> >> >> >> Ech - meant to write 'REST is *NOT* designed to optimize for >> *BANDWIDTH* efficiency' >> >> Sorry, >> Jan >> >> On May 19, 2009, at 8:52 PM, Jan Algermissen wrote: >> >>> Hi Darrel, >>> >>> PUTting the state remains the 'right' thing to do. Regarding the >>> perceived overhead, remember that REST is designed to optimize for >>> efficiency but for federated evolution (among others). Use the >>> architectural goals of REST to judge your solution. >>> >>> Alternatively you could PATCH the order but would have to do some >>> specs work inside your domain for the non-standard PATCH and the >>> format that expresses the delta. >>> >>> In addition, verify that the order status change is in fact >>> idempotent >>> (can you PUT the order state N-times and still have the meaning of >>> the >>> single PUT?) - if it is not, you must use POST anyway. Often, such >>> cancelations are expressed using explicit cancelation requests (see >>> UBL) instead of business object state changes. So you might even >>> have: >>> >>> >>> POST /orderManager >>> >>> <OrderCancelation> >>> <orderId>123456</orderId> >>> </OrderCancelation> >>> >>> HTH, >>> >>> Jan >>> >>> >>> >>> On May 19, 2009, at 8:36 PM, Darrel Miller wrote: >>> >>>> I like >>>> >>>> POST /Orders/Cancelled?url=/Orders/333 >>>> >>>> It sort of feels like placing the order in the pile of canceled >>>> orders. 
Viewing the cancelled orders is as simple as >>>> >>>> GET /Orders/Cancelled >>>> >>>> and re-opening the order is >>>> >>>> POST /Orders/Open?url=/Orders/333 >>>> >>>> It's not the only way but it works for me. >>>> >>>> Darrel >>>> >>>> On Fri, May 8, 2009 at 6:09 PM, Bill Burke <bburke@... >> <mailto:bburke%40redhat.com>> wrote: >>>>> >>>>> >>>>> Let's say I have an Order resource in a ecommerce Order Entry >>>>> system. >>>>> How would I implement my service so that I can cancel an order >>>>> rather >>>>> than delete it? One is to have the cancel state as part of the >>>>> order. >>>>> THen I can just put a new representation with the cancelled state >>>>> set to >>>>> true: >>>>> >>>>> PUT /orders/333 >>>>> content-type: application/xml >>>>> >>>>> <order id="333"> >>>>> <cancelled>false</cancelled> >>>>> ... >>>>> </order> >>>>> >>>>> Seems kinda heavy to me. >>>>> >>>>> Would it still be restful to define a "cancelled" URI that you >>>>> could put >>>>> or post to to change the state? >>>>> >>>>> /orders/333/cancelled >>>>> >>>>> or >>>>> >>>>> /orders/333?cancel=true >>>>> >>>>> You don't even need to send data to change the state in this >>>>> scenario. >>>>> But the problem with this from a pure RESTful standpoint is, isn't >>>>> this >>>>> a mini-RPC? My thought at first is YES IT IS.... >>>>> >>>>> .... But, consider if you have cancelling as part of a HATEOAS >>>>> >>>>> <order id="333"> >>>>> <atom:link rel="CANCEL" href="http://example.com/orders/333/ >> <http://example.com/orders/333/> >>>>> cancelled"/> >>>>> ... >>>>> </order> >>>>> >>>>> Now, I have a CANCEL link that if I follow changes the state of my >>>>> resource. Doesn't seem so RPCish now that I've embedded it as a >>>>> link. >>>>> Maybe the answer is /orders/333/cancelled isn't very RESTful by >>>>> itself, >>>>> but when combined with HATEOAS it is? 
>>>>> >>>>> -- >>>>> Bill Burke >>>>> JBoss, a division of Red Hat >>>>> http://bill.burkecentral.com <http://bill.burkecentral.com> > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
Jan Algermissen wrote: > > On May 20, 2009, at 3:37 PM, Bill Burke wrote: >> >> So far, in the REST presentations I've done the average developer is >> more concerned about productivity than anything else. >> > > Very true. REST trades immediate developer productivity for federated > evolvability; OTOH, programmers are happy to go a long way to make > changing non-distributed apps easy, so why should they not be willing > to do the same for networked systems? > I just don't think Darrel's solution or mine is any less RESTful than what you suggested, yet a lot more efficient and maintainable. Especially if you combine order cancellation with HATEOAS. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
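Bill's HATEOAS variant from earlier in the thread — advertise the cancel transition as a link only while it is valid — can be sketched like this. The element and rel names come from his example; the `cancelled` flag as a function parameter is an assumption for illustration:

```python
import xml.etree.ElementTree as ET

def order_representation(order_id, cancelled, base="http://example.com"):
    """Render an order; include the CANCEL link only while the order is open."""
    order = ET.Element("order", id=str(order_id))
    ET.SubElement(order, "cancelled").text = "true" if cancelled else "false"
    if not cancelled:
        # The transition is discoverable only when it is actually available.
        ET.SubElement(order, "link", rel="CANCEL",
                      href=f"{base}/orders/{order_id}/cancelled")
    return ET.tostring(order, encoding="unicode")

open_doc = order_representation(333, cancelled=False)
assert 'rel="CANCEL"' in open_doc
done_doc = order_representation(333, cancelled=True)
assert "link" not in done_doc
```

A client that follows links rather than constructing `/orders/{id}/cancelled` itself never needs to know the URI structure, which is what makes the "mini-RPC" objection weaker once the link drives the interaction.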
On Tue, May 19, 2009 at 10:41 AM, Bill de hOra <bill@...> wrote: > > http://www.slideshare.net/Wisec/http-parameter-pollution-a-new-category-of-web-attacks > > gets good from about slide 21 on. A number of the attacks seem to > rely on injecting action params/values. So the main takeaway I got from > this was, don't embed actions in URLs. Thoughts? I'd say yes, since you're already never passing query parameters to SQL statements, or storing and then rendering any content you receive (XSS), the next logical step would be to remove all actions from URLs. Crazy idea. Check parameters before accepting them, URL encode every value you pass to the next request, etc. URL encoding, SQL encoding, HTML encoding ... I'm seeing a pattern here. Something about being paranoid and not trusting inputs. Since query parameters are inputs, I wouldn't give them a free pass. I don't see why the main takeaway is "don't use them" when all you have to do is be safe about it. Assaf > > > Bill
>>>>> "Assaf" == Assaf Arkin <assaf@...> writes:
Assaf> URL encoding, SQL encoding, HTML encoding ... I'm seeing a
Assaf> pattern here. Something about being paranoid and not
Assaf> trusting inputs. Since query parameters are inputs, I
Assaf> wouldn't give them a free pass. I don't see why the main
Assaf> takeaway is don't use when all you have to do is be safe
Assaf> about it.
I think you're partially missing the point: because certain things in
HTTP are undefined, some parameter checking is incomplete, or done at
the wrong point in the application logic.
E.g. with a cookie you can override a POST/GET value. A
really interesting case is a piece of software by a large software
vendor (who have caused most of the suffering programmers have to
endure daily) that concatenates values if you have more than one of
the same name. If you validate the individual pieces they might still
be ok, but the combination might be fatal.
--
Cheers,
Berend de Boer
On Wed, May 20, 2009 at 12:21 PM, Berend de Boer <berend@...> wrote: > >>>>> "Assaf" == Assaf Arkin <assaf@...> writes: > > Assaf> URL encoding, SQL encoding, HTML encoding ... I'm seeing a > Assaf> pattern here. Something about being paranoid and not > Assaf> trusting inputs. Since query parameters are inputs, I > Assaf> wouldn't give them a free pass. I don't see why the main > Assaf> takeaway is don't use when all you have to do is be safe > Assaf> about it. > > I think you're partially missing the point: because certain things in > HTTP are undefined, some parameter checking is incomplete, or done at > the wrong point in the application logic. > > I.e. with a cookie you can override a POST/GET value for example. A > really interesting case is a piece of software by a large software > vendor (who have caused most of the suffering programmers have to > endure daily) that concatenates values if you have more than one of > the same name. If you validate the individual pieces they might still > be ok, but the combination might be fatal. Speaking strictly about HTTP, you can't use a cookie to override a POST/GET value. HTTP is very well defined, with cookies appearing separately from query parameters and entity body. There's no confusion about it. You could write an API that decides to present a consolidated view, where a key maps to a single value even when multiple values are present, and it arbitrarily picks one. You could use that API with the naive assumption that if you put some value, you'll receive it. Hence the analogy of writing an API that uses simple string concatenation to create SQL statements and then feeding query parameters to it because your HTML forms guide people to submit the right values. And yes, some people over-reacted by mandating no SQL behind CGI. Assaf > > > -- > Cheers, > > Berend de Boer >
So what is the general consensus? Provided I have two parameters, one from the querystring, one from a mediatype, and they point to the same value, what should happen? Error out always, ignore the first or the last? I realize that the option may be given to the developer to choose from, but I want to know what people think should be the default. From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Assaf Arkin Sent: 20 May 2009 21:57 To: Berend de Boer Cc: Rest List Subject: Re: [rest-discuss] http parameter pollution and action URLs On Wed, May 20, 2009 at 12:21 PM, Berend de Boer <berend@...> wrote: >>>>> "Assaf" == Assaf Arkin <assaf@...> writes: Assaf> URL encoding, SQL encoding, HTML encoding ... I'm seeing a Assaf> pattern here. Something about being paranoid and not Assaf> trusting inputs. Since query parameters are inputs, I Assaf> wouldn't give them a free pass. I don't see why the main Assaf> takeaway is don't use when all you have to do is be safe Assaf> about it. I think you're partially missing the point: because certain things in HTTP are undefined, some parameter checking is incomplete, or done at the wrong point in the application logic. I.e. with a cookie you can override a POST/GET value for example. A really interesting case is a piece of software by a large software vendor (who have caused most of the suffering programmers have to endure daily) that concatenates values if you have more than one of the same name. If you validate the individual pieces they might still be ok, but the combination might be fatal. Speaking strictly about HTTP, you can't use a cookie to override a POST/GET value. HTTP is very well defined, with cookies appearing separately from query parameters and entity body. There's no confusion about it. You could write an API that decides to present a consolidated view, where a key maps to one value that multiple values can present, and it arbitrarily picks one. 
You could use that API with the naive assumption that if you put some value, you'll receive it. Hence the analogy of writing an API that uses simple string concatenation to create SQL statements and then feeding query parameters to it because your HTML forms guide people to submit the right values. And yes, some people over-reacted by mandating no SQL behind CGI. Assaf -- Cheers, Berend de Boer
Just put some of my thoughts together on the subject: http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ Am I on the right track? Thanks, Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
>>>>> "Sebastien" == Sebastien Lambla <seb@...> writes:
Sebastien> So what is the general consensus? Provided I have two
Sebastien> parameters, one from the querystring, one from a
Sebastien> mediatype, and they point to the same value, what
Sebastien> should happen? Error out always, ignore the first or
Sebastien> the last?
Sebastien> I realize that the option may be given to the developer
Sebastien> to choose from, but I want to know what people think
Sebastien> should be the default.
There is no consensus, and that is why there is an attack vector. Some
software does some seriously weird things.
I found the following, from lowest to highest precedence, useful:
cookie -> GET -> POST
Not that I use cookies, but you get the point.
--
Cheers,
Berend de Boer
It seems to me that the concept of URL templates is a mistake. It's
either a URL or it's a more elaborate description of how to construct
a request, which requires some sort of client-side logic to process.
URL templates are in the latter category, but they are deceptive in
that they look like the former. That creates grounds for confusion. In
any case, HTML has already come up with a tried-and-tested solution,
which I think could easily be applied outside the context of HTML (e.g.
in any XML document). For example, the case described at [1] could be
implemented as:
<form name="search" action="http://example.org/addresses"
method="get" enctype="application/x-www-form-urlencoded">
<input name="contains" type="text" />
</form>
instead of:
<link rel="search"
template="http://example.org/addresses?contains={search_string}"/>
and with:
<form name="edit" action="http://example.org/address/home"
method="put" enctype="application/vnd.address+xml" />
instead of:
<link rel="edit self" type="application/vnd.address+xml"
href="http://example.org/address/home"/>
For most cases, I think this would suffice. I don't see the reason for
your proposed "failure-type" attribute. The response would contain a
content-type header, that describes the mime type and the status code
would indicate if it's a failure.
--
troels
[1] http://www.subbu.org/blog/2008/09/on-linking-part-2
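For comparison, the client-side logic a URI template demands is roughly the following. This is a naive sketch of `{name}` expansion, not a full implementation of the URI Template rules; the point is only that the client must run code to build the request, exactly as it must for a form:

```python
import re
from urllib.parse import quote

def expand(template, values):
    """Naively expand {name} placeholders, percent-encoding each value."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(values[m.group(1)], safe=""),
                  template)

uri = expand("http://example.org/addresses?contains={search_string}",
             {"search_string": "main st"})
assert uri == "http://example.org/addresses?contains=main%20st"
```

A form-style description requires comparable client logic (collect inputs, encode per the declared enctype, issue the declared method), but it does not masquerade as a plain URI, which is the deceptiveness argued above.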
On Thu, May 21, 2009 at 2:55 PM, Bill Burke <bburke@redhat.com> wrote:
>
>
> Just put some of my thoughts together on the subject:
>
> http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/
>
> Am I on the right track?
>
> Thanks,
>
> Bill
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
>
When I talk to folks about the WADL/No-WADL issue, I ask this: "Is the need for some WADL-like item for humans or for machines?" In other words, is this a documentation issue or an automation issue? If this is about documentation, then sure it'd be nice to have, but does it need to be a part of each resource representation? Something returned from OPTIONS? I usually suggest using a custom document type at the resource-level (application/resource-documentation+xhtml) that any client can request and read, using that as a guide in building interactions with that resource. If this is about automation, then is this about build-time or run-time information? If this is about build-time (ala SOAP), my personal preference is to "just say no" since I believe builders/code-generators, etc. add a level of tight-binding that does not work well for long-lived HTTP apps (just me). If this is about run-time, I have yet to see a working design and/or example of this in real life (seems like pretty complicated state-engine stuff at a high meta-level). I will admit that this last option does intrigue me, though. mca http://amundsen.com/blog/ On Thu, May 21, 2009 at 08:55, Bill Burke <bburke@...> wrote: > Just put some of my thoughts together on the subject: > > http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ > > Am I on the right track? > > Thanks, > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > ------------------------------------ > > Yahoo! Groups Links > > > >
A couple of questions:
#1 What about failures?
#2 So maybe <link> for aggregating data. <form> for updates?
#3 How do you get the community at large to accept <form>? Atom link
seems to be sort of a de facto standard.
troels knak-nielsen wrote:
> It seems to me that the concept of URL templates is a mistake. It's
> either a URL or it's a more elaborate description of how to construct
> a request, which requires some sort of client-side logic to process.
> URL templates are in the latter category, but they are deceptive in
> that they look like the former. That gives ground for confusion. In
> any case, HTML has already come up with a tried-and-tested solution,
> which I think could easily be applied outside the context of HTML (Eg.
> in any XML document). For example, the case described at [1] could be
> implemented as:
>
> <form name="search" action="http://example.org/addresses"
> method="get" enctype="application/x-www-form-urlencoded">
> <input name="contains" type="text" />
> </form>
>
> instead of:
>
> <link rel="search"
> template="http://example.org/addresses?contains={search_string}"/>
>
> and with:
>
> <form name="edit" action="http://example.org/address/home"
> method="put" enctype="application/vnd.address+xml" />
>
> instead of:
>
> <link rel="edit self" type="application/vnd.address+xml"
> href="http://example.org/address/home"/>
>
> For most cases, I think this would suffice. I don't see the reason for
> your proposed "failure-type" attribute. The response would contain a
> content-type header, that describes the mime type and the status code
> would indicate if it's a failure.
>
> --
> troels
>
> [1] http://www.subbu.org/blog/2008/09/on-linking-part-2
>
> On Thu, May 21, 2009 at 2:55 PM, Bill Burke <bburke@...> wrote:
>>
>> Just put some of my thoughts together on the subject:
>>
>> http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/
>>
>> Am I on the right track?
>>
>> Thanks,
>>
>> Bill
>>
>> --
>> Bill Burke
>> JBoss, a division of Red Hat
>> http://bill.burkecentral.com
>>
>>
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Thu, May 21, 2009 at 5:55 AM, Bill Burke <bburke@...> wrote: > > > Just put some of my thoughts together on the subject: > > http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ > > Am I on the right track? > (Also replied on your blog, awaiting moderation) I happen to have asked Marc Hadley (JAX-RS spec lead, author of the WADL spec) about how WADL might be used to describe a REST service driven by self-discovered embedded links, rather than by URI template patterns. His response was, basically, you could do this but need to establish a convention on how to figure out which elements are links. His full comments were on his blog: http://weblogs.java.net/blog/mhadley/archive/2009/04/hateoas_with_wa.html Craig
Hello all. I have a situation here where I'm more or less stuck and would appreciate your input. I have a RESTful infrastructure in place that allows us to expose resources like this: GET /employees/1234 --- <employee id="1234"> <name>Toni</name> <function>analyst</function> <since>20080808</since> </employee> Now I need to synchronize these resources with another application in a way that only changed fields are passed back and forth, which could be represented by formats like <update ref="/employee/1234"> <field id="function"> <old_value>Analyst</old_value> <new_value>Manager</new_value> </field> <field id="since"> <old_value>20080808</old_value> <new_value>20090909</new_value> </field> </update> This case seems simple enough, but things can be more complicated, as we may have to have atomic operations on more than one resource, so instead of that example we can end up with something like <operations> <operation type="update" ref="/employee/1234"> <field id="function"> <old_value>Analyst</old_value> <new_value>Program Manager</new_value> </field> <field id="since"> <old_value>20080808</old_value> <new_value>20090909</new_value> </field> </operation> <operation type="delete" ref="/employee/1234/assignments" /> </operations> (none of these formats are defined yet; it can be whatever we come up with) Now none of this seems to be RESTful, and our first guess was to use SyncML to exchange this kind of information. However, SyncML seems overkill to me, and since it has some principles that are restish (client-server, limited set of verbs, mime-type) I was wondering how I could simplify the processing and turn it into something I can implement over our REST infrastructure. I also was wondering if things like micro-formats and/or protocols like Atom could be used to do such a thing (I just found something called FeedSync), but I don't have enough knowledge of them to tell. Any help will be very welcome, even if this is not a case for REST... 
_______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota _______________________________________________
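The `<update>` format described above is essentially a field-level diff, which is straightforward to compute from two snapshots of a resource. A sketch, reusing the element names from the post (the function name and plain-string rendering are assumptions; a real version would build proper XML and escape values):

```python
def field_diff(ref, old, new):
    """Build an <update> document listing only the fields whose values changed."""
    lines = [f'<update ref="{ref}">']
    for key in old:
        if key in new and new[key] != old[key]:
            lines += [f'  <field id="{key}">',
                      f'    <old_value>{old[key]}</old_value>',
                      f'    <new_value>{new[key]}</new_value>',
                      '  </field>']
    lines.append('</update>')
    return "\n".join(lines)

doc = field_diff("/employee/1234",
                 {"function": "Analyst", "since": "20080808"},
                 {"function": "Manager", "since": "20080808"})
assert "<new_value>Manager</new_value>" in doc
assert "since" not in doc  # unchanged fields are omitted
```

Whether the result is sent as a PATCH-style delta or wrapped in an `<operations>` batch is the open design question of the post; the diff itself is the easy part.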
We've had this discussion on twitter very recently. I just don't get what the scenario for wadl is. If you have a media type, you implement that media type from the spec (or codegen some of it through xsd or however you want to model the representation). You still have to implement how the agent will present links and transitions to the users. What's the use case for WADL? Human documentation can be served using xhtml just as well. Tool automation, beyond generating xsd-based classes and entrypoint discovery (and the latter is a chicken-and-egg problem, you still need to discover the wadl file), cannot be done unless you standardise links. And even then, I'd argue that it hasn't bought you much: retrieve a document, modify it, discover the link to update it, and you still need a way to display that link to the user. Unless you want to standardise how the link itself is displayed, which is what html does. In which case you don't really need WADL to do that: your media type can define this. I'm sure I'm missing the point, but I've almost killed the wadl implementation in OpenRasta because I couldn't justify the use case. I have kept it for only one scenario, a non-restful one: being able to use the list of all possible URIs, and together with data ranges pulled from an external source, hit the server with test scenarios and their expected outcomes. It is however mostly a CI / test scenario, and as such probably shouldn't ever be made public. Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Craig McClanahan Sent: 21 May 2009 15:42 To: Bill Burke Cc: REST-Discuss Discussion Group Subject: Re: [rest-discuss] To wadl or not to wadl On Thu, May 21, 2009 at 5:55 AM, Bill Burke <bburke@...> wrote: > > > Just put some of my thoughts together on the subject: > > http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ > > Am I on the right track? 
> (Also replied on your blog, awaiting moderation) I happen to have asked Marc Hadley (JAX-RS spec lead, author of the WADL spec) about how WADL might be used to describe a REST service driven by self-discovered embedded links, rather than by URI template patterns. His response was, basically, you could do this but need to establish a convention on how to figure out which elements are links. His full comments were on his blog: http://weblogs.java.net/blog/mhadley/archive/2009/04/hateoas_with_wa.html Craig ------------------------------------ Yahoo! Groups Links
Sebastien Lambla wrote: > I'm sure I'm missing the point, but i've almost killed the wadl > implementation in OpenRasta because I couldn't justify the use case. I've been trying to kill WADL in RESTEasy as well. I totally agree that an XSD and/or human documentation should be sufficient. People want it though. They want to be able to generate client stubs so that they don't have to hand-code HTTP requests. You know all the arguments.... :( -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Hi Antonio, I wonder if FeedSync would help you? http://feedsync.org/spec/ Jim
Anybody know of any de facto Java-based mime types? I'm basically looking for ones that allow you to serialize Java objects as your message body. I did not see any registered at IANA. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
--- In rest-discuss@yahoogroups.com, Rickard Öberg <rickardoberg@...> wrote: > > Hi! > > I'm writing a new application for GTD workflows, and wanted to see if I > can apply the REST principles to the web API. I have had much good input > from the discussions here so far, but one thing I need help with. > > Basically, I want the application to use Command and Query separation at > its root. This means that clients call queries to get state/views out, > then perform commands on that which are sent back to the server. In > other words, clients never ever send state back, only commands. So far I > have resources in my URI structure for the queries, which can be GET, > and that works quite ok, but then I also have the commands in my URI > structure, such as: > /user/123/inbox/createtask > which when GET returns an empty JSON structure or HTML form, which can > then be filled in and POST'ed back. There is a domain model on the > server which interprets and executes this and all the domain logic > around it. > > But from my reading of the "RESTful web services" this corresponds to > the REST/RPC hybrid architecture. It is difficult, at best, to do > caching of resources, since there is no POST/PUT/DELETE which explicitly > could be used to invalidate resource caches, such as that of > /user/123/inbox. Using lastmodified/etags for caching works though. > > Does anyone have experience building CQS-systems that have a more > RESTful approach? How are others dealing with this? > > Thanks, > Rickard Just writing to confirm that I use the approach you are talking about. Where your domain model goes, be it client or server, is not REST vs. RPC. It is a federation problem. If your web app's client portion is object-oriented and in the same trust domain as the server, and the communication channel is secure, then your model can exist on the client. 
Your protocol still needs to be able to handle failure, though, and you need to think of what tolerances you need before you design the API. Caching is part of layering, too. As an aside, I've never seen anyone debate CQS vs. REST before. I remember bringing up CQS to Stefan Tilkov as a basic REST pattern (Jul 12, 2008), and he commented that he totally forgot about CQS.
--- In rest-discuss@yahoogroups.com, Berend de Boer <berend@...> wrote: > > >>>>> "Rickard" == Rickard Öberg <rickardoberg@...> writes: > > Rickard> Does anyone have experience building CQS-systems that have > Rickard> a more RESTful approach? How are others dealing with this? > > CommandQuery doesn't work for distributed systems. As Eiffel programmer > I use it all the time, but it doesn't fit distributed systems. The > overhead is already twice the pure REST model. > > And note that REST knows command/query separation by use of GET versus > the other verbs. I can't understand what confusion of ideas leads somebody to say "CQS doesn't work for distributed systems [...] The overhead is already twice the pure REST model. [...] REST [is] CQS by use of GET versus the other verbs." Don't you see the contradiction? I think you are simply confusing the OP. I know I am confused. Also, explain by example, always. You should not say "The overhead is already twice the pure REST model", you should show it. I have no idea where you get that number from.
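For readers who haven't met the principle being argued over: command/query separation just says that commands mutate state and return nothing, while queries return state and never mutate. A minimal Java sketch (the Inbox class and its method names are illustrative, not from the thread):

```java
// Minimal command/query separation sketch. Commands mutate and return
// void; queries return state and have no side effects. Names are
// illustrative only.
public class Inbox {
    private final java.util.List<String> tasks = new java.util.ArrayList<String>();

    // Command: changes state, returns nothing.
    public void createTask(String description) {
        tasks.add(description);
    }

    // Query: returns state, never mutates it.
    public java.util.List<String> listTasks() {
        return java.util.Collections.unmodifiableList(tasks);
    }

    public static void main(String[] args) {
        Inbox inbox = new Inbox();
        inbox.createTask("write RESTful API");
        System.out.println(inbox.listTasks());
    }
}
```

The REST analogue the quoted poster alludes to is that GET plays the query role and POST/PUT/DELETE play the command role.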
I think this is application/x-java-serialized-object. On Thu, May 21, 2009 at 9:23 PM, Bill Burke <bburke@...> wrote: > > > Anybody know of any de facto Java-based mime types? I'm basically > looking for ones that allow you to serialize Java objects as your > message body. I did not see any registered at Iana. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
As stated in http://java.sun.com/javase/6/docs/api/java/awt/datatransfer/DataFlavor.html#javaSerializedObjectMimeType it is application/x-java-serialized-object. I do not know if it is really intended for transport via HTTP (it is normally used for drag and drop), but it is used e.g. in Spring Framework's HTTP-based endpoints. On 22.05.2009 at 03:23, Bill Burke wrote: > Anybody know of any de facto Java-based mime types? I'm basically > looking for ones that allow you to serialize Java objects as your > message body. I did not see any registered at Iana. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
On Thu, May 21, 2009 at 6:23 PM, Bill Burke <bburke@...> wrote: > > > Anybody know of any de facto Java-based mime types? I'm basically > looking for ones that allow you to serialize Java objects as your > message body. I did not see any registered at Iana. > Some of the Java serialization related classes I've seen (like java.awt.datatransfer.DataFlavor) talk about using "application/x-java-serialized-object" (the "x-" obviously meaning this would not be a registered type). Personally, I would tend to use XML or JSON encoding instead, to avoid requiring Java at the other end of the network pipe. Then, if I were lazy, I'd just use "application/xml" or "application/json" or, if more industrious, define my own media types ... and (at the Java end of the pipe) let JAXB worry about serialization. Craig > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
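For anyone wanting to try the de facto type discussed above: the bytes you would send with a Content-Type of application/x-java-serialized-object come from standard Java serialization. A minimal round-trip sketch (the Person class and the helper names are mine, not from any library):

```java
import java.io.*;

// Sketch: produce and consume the byte stream that would travel with
// Content-Type: application/x-java-serialized-object. The Person class
// is illustrative only.
public class SerializedObjectDemo {
    public static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        public String firstName;
        public Person(String firstName) { this.firstName = firstName; }
    }

    // Serialize an object into the bytes used as the HTTP message body.
    public static byte[] toBytes(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(o);
            oos.close();
            return bos.toByteArray();
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    // Deserialize the message body back into an object.
    public static Object fromBytes(byte[] body) {
        try {
            return new ObjectInputStream(new ByteArrayInputStream(body)).readObject();
        } catch (IOException | ClassNotFoundException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        Person p = (Person) fromBytes(toBytes(new Person("TONINHO")));
        System.out.println(p.firstName); // prints TONINHO
    }
}
```

As Craig notes, this ties both ends of the pipe to Java (and to compatible class versions), which is exactly the trade-off against XML/JSON representations.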
That was my option too, application/xml, sometimes text/xml, and we use XStream for serialization. On our next version of the platform this will be done automatically by the connector superclass, so it will be transparent to the service implementations. Meaning that the resource (that provides one or more services) can be written without any knowledge of the representation format, or the transport for that matter. Actually, a resource can be implemented as a "normal" Java class, without even being aware of the REST infrastructure or that it will be accessed as a resource in a RESTful architecture. On May 22, 2009 3:11am, Craig McClanahan <craigmcc@...> wrote: > On Thu, May 21, 2009 at 6:23 PM, Bill Burke bburke@...> wrote: > > > > > > Anybody know of any de facto Java-based mime types? I'm basically > > looking for ones that allow you to serialize Java objects as your > > message body. I did not see any registered at Iana. > > > Some of the Java serialization related classes I've seen (like > java.awt.datatransfer.DataFlavor) talk about using > "application/x-java-serialized-object" (the "x-" obviously meaning > this would no be a registered type). > Personally, I would tend to use XML or JSON encoding instead, to avoid > requiring Java at the other end of the network pipe. Then, if I were > lazy, I'd just use "application/xml" or "application/json" or, if more > industrious, define my own media types ... and (at the Java end of the > pipe) let JAXB worry about serialization. > Craig > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com > > > > >
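The "transparent serialization" idea described above can be sketched with nothing but the JDK. The thread's platform uses XStream; java.beans.XMLEncoder plays a similar role here, so this is a stand-in, not that platform's actual connector code. The Account bean is illustrative:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.*;

// Sketch of a connector that turns any JavaBean into an XML representation
// without the bean knowing anything about the wire format. The thread's
// platform uses XStream; java.beans.XMLEncoder is the stdlib stand-in.
public class XmlConnectorDemo {
    // A "normal" Java class, unaware of the REST infrastructure.
    public static class Account {
        private String number;
        public Account() { }
        public String getNumber() { return number; }
        public void setNumber(String number) { this.number = number; }
    }

    // Serialize any bean to XML bytes for the response body.
    public static byte[] toXml(Object bean) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(bos);
        enc.writeObject(bean);
        enc.close();
        return bos.toByteArray();
    }

    // Rebuild the bean from a request body.
    public static Object fromXml(byte[] xml) {
        XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(xml));
        Object o = dec.readObject();
        dec.close();
        return o;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.setNumber("010123101");
        Account back = (Account) fromXml(toXml(a));
        System.out.println(back.getNumber());
    }
}
```

The point of the design is in the signatures: toXml/fromXml take plain Objects, so the resource classes stay format-agnostic exactly as described.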
I'm looking at it right now; I discovered it yesterday while looking for SyncML-related stuff. At first it seems good, as it was first developed by one of my old gurus (Ray Ozzie while at Lotus, before he went over to the dark side), and he certainly knows about data synchronization, which was one of the main strengths of Lotus Notes. However, it's an extension of RSS/Atom and I don't know anything about those. And I only found 2 Java implementations (Rome and Mesh4x) that I still have to evaluate. On the bright side, MS has good support for it and at the other end of my sync case is a MS shop... Thanks for your pointer; if you or others have good pointers to RSS/Atom or FeedSync Java implementations from a RESTish point of view, that would be great. Cheers. On May 22, 2009 2:23am, Jim Webber <jim@...> wrote: > Hi Antonio, > I wonder if FeedSync would help you? > http://feedsync.org/spec/ > Jim >
Greetings, On 2009 May 21, at 13:55, Bill Burke wrote: > Just put some of my thoughts together on the subject: > > http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ I've been using WADL happily for some while now -- for example at [1]. It seems to have two strengths. * It's a very convenient way of documenting the interface, for humans. Given that someone actually wants to use your service, they'll want to know what 'things' are available, and giving "HATEOAS" as the answer to every question isn't going to win you friends. That is, you _do_ need human documentation, and WADL provides a reasonable structure for that. * You can generate test cases from it. Part of the build system for the service described in [1] transforms the WADL file into a test coverage function, which is called by the hand-written tests to verify that (a) none of the test cases use an interaction that isn't described by the WADL (that is, the WADL is telling the truth), and (b) all of the interactions described by the WADL are tested at least once (that is, the tests have a basic completeness). I suppose you could generate client code from the WADL, but I can't think why you'd want to do that. Best wishes, Norman [1] http://myskua.org/doc/qsac/interface-http.html -- Norman Gray : http://nxg.me.uk Dept Physics and Astronomy, University of Leicester
I assume you're generating the HTML from the WADL via an XSLT
transform? If so, is the stylesheet public?
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 22.05.2009, at 11:01, Norman Gray wrote:
>
>
>
> Greetings,
>
> On 2009 May 21, at 13:55, Bill Burke wrote:
>
> > Just put some of my thoughts together on the subject:
> >
> > http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/
>
> I've been using WADL happily for some while now -- for example at
> [1]. It seems to have two strengths.
>
> * It's a very convenient way of documenting the interface, for
> humans. Given that someone actually wants to use your service,
> they'll want to know what 'things' are available, and giving
> "HATEOAS" as the answer to every question isn't going to win you
> friends. That is, you _do_ need human documentation, and WADL
> provides a reasonable structure for that.
>
> * You can generate test cases from it. Part of the build system
> for the service described in [1] transforms the WADL file into a test
> coverage function, which is called by the hand-written tests to verify
> that (a) none of the test cases use an interaction that isn't
> described by the WADL (that is, the WADL is telling the truth), and
> (b) all of the interactions described by the WADL are tested at least
> once (that is, the tests have a basic completeness).
>
> I suppose you could generate client code from the WADL, but I can't
> think why you'd want to do that.
>
> Best wishes,
>
> Norman
>
> [1] http://myskua.org/doc/qsac/interface-http.html
>
> --
> Norman Gray : http://nxg.me.uk
> Dept Physics and Astronomy, University of Leicester
>
Stefan, hello. On 2009 May 22, at 10:07, Stefan Tilkov wrote: > I assume you're generating the HTML from the WADL via an XSLT > transform? If so, is the stylesheet public? Yes. See <http://code.google.com/p/skua/source/browse/#svn/trunk/code/qsac/wadl >. What's there owes a distant debt to Mark Nottingham's original wadl_documentation.xsl from the WADL distribution. I'd be interested in any comments on this. I don't make strong claims about the RESTfulness of this service, but I think it gets most of the point. All the best, Norman -- Norman Gray : http://nxg.me.uk Dept Physics and Astronomy, University of Leicester
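For anyone wanting to reproduce Norman's WADL-to-HTML step without fetching his stylesheet, the JDK's built-in XSLT engine suffices. The WADL snippet and stylesheet below are deliberately minimal stand-ins, not his actual files:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch: turn a (toy) WADL document into HTML with the JDK's XSLT engine.
public class WadlToHtml {
    // A minimal WADL document describing one resource.
    public static final String WADL =
        "<application xmlns='http://wadl.dev.java.net/2009/02'>" +
        "<resources base='http://example.org/'>" +
        "<resource path='search'/></resources></application>";

    // A toy stylesheet: list each resource path in an HTML paragraph.
    public static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'" +
        " xmlns:w='http://wadl.dev.java.net/2009/02'>" +
        "<xsl:output method='html'/>" +
        "<xsl:template match='/'>" +
        "<html><body><xsl:for-each select='//w:resource'>" +
        "<p><xsl:value-of select='@path'/></p>" +
        "</xsl:for-each></body></html>" +
        "</xsl:template></xsl:stylesheet>";

    // Apply an XSLT stylesheet to an XML document, both given as strings.
    public static String transform(String xml, String xsl) {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println(transform(WADL, XSL));
    }
}
```

Norman's real pipeline drives a much richer stylesheet (descended from Mark Nottingham's wadl_documentation.xsl) over a full WADL file, but the mechanics are the same.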
Craig McClanahan wrote: > On Thu, May 21, 2009 at 6:23 PM, Bill Burke <bburke@...> wrote: >> >> Anybody know of any de facto Java-based mime types? I'm basically >> looking for ones that allow you to serialize Java objects as your >> message body. I did not see any registered at Iana. >> > > Some of the Java serialization related classes I've seen (like > java.awt.datatransfer.DataFlavor) talk about using > "application/x-java-serialized-object" (the "x-" obviously meaning > this would no be a registered type). > > Personally, I would tend to use XML or JSON encoding instead, to avoid > requiring Java at the other end of the network pipe. Then, if I were > lazy, I'd just use "application/xml" or "application/json" or, if more > industrious, define my own media types ... and (at the Java end of the > pipe) let JAXB worry about serialization. > Kinda pointless to use XML or JSON for Java to Java applications. You still get huge benefits from being RESTful though going Java to Java over something like RMI. When you do Java to Java with XML/JAXB, beyond a huge performance problem, you have a maintainability problem. Hibernate and JPA Entities make good DTOs in a Java to Java system. In an XML based one, it's probably not a good idea to have Entities and JAXB classes one and the same. Mainly because of proxying (many Entities are proxied and JAXB doesn't like proxies with field mappings) and if the schema diverges from the database schema or Hibernate/JPA mapping. Personally, I was kinda hoping for a registered Java media type of: application/*+java;version=xxx Would be cool to make one and register it, but only Sun/Oracle can do this because of trademark infringement. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Norman Gray wrote: > > Greetings, > > On 2009 May 21, at 13:55, Bill Burke wrote: > >> Just put some of my thoughts together on the subject: >> >> http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ > > I've been using WADL happily for some while now -- for example at [1]. > It seems to have two strengths. > > * It's a very convenient way of documenting the interface, for > humans. Given that someone actually wants to use your service, they'll > want to know what 'things' are available, and giving "HATEOAS" as the > answer to every question isn't going to win you friends. That is, you > _do_ need human documentation, and WADL provides a reasonable structure > for that. > An XSD would work just as well, no? And you'd have one less document to maintain. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill, hello. On 2009 May 22, at 12:11, Bill Burke wrote: > Norman Gray wrote: >> Greetings, >> On 2009 May 21, at 13:55, Bill Burke wrote: >>> Just put some of my thoughts together on the subject: >>> >>> http://bill.burkecentral.com/2009/05/21/to-wadl-or-not-to-wadl/ >> I've been using WADL happily for some while now -- for example at >> [1]. It seems to have two strengths. >> * It's a very convenient way of documenting the interface, for >> humans. Given that someone actually wants to use your service, >> they'll want to know what 'things' are available, and giving >> "HATEOAS" as the answer to every question isn't going to win you >> friends. That is, you _do_ need human documentation, and WADL >> provides a reasonable structure for that. > > An XSD would work just as well, no? And you'd have one less > document to maintain. I'm not completely sure what you mean. There is a document type which the WADL file conforms to, namely the wadl.rnc of the WADL distribution, which seems as good a structure as any for documenting an interface like this. I could just write an HTML page directly, true, but it's the fact that the WADL file generates two files, one of which is verified to match the code's regression tests, that seems to be the win here. And if I make a change to the interface, I'm forced to change the implementation, and vice versa. All the best, Norman -- Norman Gray : http://nxg.me.uk Dept Physics and Astronomy, University of Leicester
> Just writing to confirm that I use the approach you are talking > about. Where your domain model goes, be it client or server, is > not REST vs. RPC. It is a federation problem. If your web app's > client portion is object-oriented and in the same trust domain as > the server, and the communication channel is secure, then your > model can exist on the client. Your protocol still needs to be > able to handle failure, though, and you need to think of what > tolerances you need before you design the API. Not sure I follow and wanted to check. I think you are saying that you agree with the using POST based commands for all updates, and maybe that by running the same model on the client you can have it generate the commands? > As an aside, I've never seen anyone debate CQS vs. REST before. I > remember bringing up CQS to Stefan Tilkov as a basic REST pattern > (Jul 12, 2008), and he commented that he totally forgot about CQS. We might all be talking about slightly different things to be honest. My original thought was that Rickard is thinking of the sort of approach that Udi Dahan has described before ( http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/). 2009/5/21 johnzabroski <johnzabroski@...> > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > Rickard Öberg <rickardoberg@...> wrote: > > > > Hi! > > > > I'm writing a new application for GTD workflows, and wanted to see if I > > can apply the REST principles to the web API. I have had much good input > > from the discussions here so far, but one thing I need help with. > > > > Basically, I want the application to use Command and Query separation at > > its root. This means that clients call queries to get state/views out, > > then perform commands on that which are sent back to the server. In > > other words, clients never ever send state back, only commands. 
So far I > > have resources in my URI structure for the queries, which can be GET, > > and that works quite ok, but then I also have the commands in my URI > > structure, such as: > > /user/123/inbox/createtask > > which when GET returns an empty JSON structure or HTML form, which can > > then be filled in and POST'ed back. There is a domain model on the > > server which interprets and executes this and all the domain logic > > around it. > > > > But from my reading of the "RESTful web services" this corresponds to > > the REST/RPC hybrid architecture. It is difficult, at best, to do > > caching of resources, since there is no POST/PUT/DELETE which explicitly > > could be used to invalidate resource caches, such as that of > > /user/123/inbox. Using lastmodified/etags for caching works though. > > > > Does anyone have experience building CQS-systems that have a more > > RESTful approach? How are others dealing with this? > > > > Thanks, > > Rickard > > Just writing to confirm that I use the approach you are talking about. > Where your domain model goes, be it client or server, is not REST vs. RPC. > It is a federation problem. If your web app's client portion is > object-oriented and in the same trust domain as the server, and the > communication channel is secure, then your model can exist on the client. > Your protocol still needs to be able to handle failure, though, and you need > to think of what tolerances you need before you design the API. > > Caching is part of layering, too. > > As an aside, I've never seen anyone debate CQS vs. REST before. I remember > bringing up CQS to Stefan Tilkov as a basic REST pattern (Jul 12, 2008), and > he commented that he totally forgot about CQS. > > >
--- In rest-discuss@yahoogroups.com, Colin Jack <colin.jack@...> wrote: > > > Just writing to confirm that I use the approach you are talking > > about. Where your domain model goes, be it client or server, is > > not REST vs. RPC. It is a federation problem. If your web app's > > client portion is object-oriented and in the same trust domain as > > the server, and the communication channel is secure, then your > > model can exist on the client. Your protocol still needs to be > > able to handle failure, though, and you need to think of what > > tolerances you need before you design the API. > > Not sure I follow and wanted to check. I think you are saying that you agree > with the using POST based commands for all updates, and maybe that by > running the same model on the client you can have it generate the commands? > > > > As an aside, I've never seen anyone debate CQS vs. REST before. I > > remember bringing up CQS to Stefan Tilkov as a basic REST pattern > > (Jul 12, 2008), and he commented that he totally forgot about CQS. > > We might all be talking about slightly different things to be honest. My > original thought was that Rickard is thinking of the sort of approach that > Udi Dahan has described before ( > http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/). I agree 100%. Udi is trying to solve a totally different problem than me. As far as I can see, whenever I read Pat Helland, I feel like somebody is giving me a solution for a problem I don't have. "Solutineering". Maybe people have problems for these solutions -- I usually see it as technical people solving technical problems at the implementation level rather than business problems at the domain level. I don't quite understand how CQS has become lumped with Udi Dahan. Why not just call it UDI in capitals? :) My application of CQS has to do with how I conceive of deployment. Look at how Udi explains things. He assumes you are already deployed. That's not REST. 
He then sort of acknowledges that by saying: "Of course, once we talk about web UI's things are a bit different - but still similar. While web-server-side there may be a level of independence, for browser side inter-component communications we're still likely to target javascript. There, I've managed to say something technical supporting mashups and SOA without lying through my teeth." - Udi So he might have CQS using SOA separation, but not using REST separation. At most, I only assume I am partially deployed.
Colin Jack wrote: > We might all be talking about slightly different things to be honest. My > original thought was that Rickard is thinking of the sort of approach > that Udi Dahan has described before > (http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/ > <http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/>). Yes, I was referring to CQS on the architectural level a la Udi, rather than on an object basis as I think it was described initially. /Rickard
On May 22, 2009, at 12:08 AM, Bill Burke wrote: > > I've been trying to kill WADL in RESTEasy as well. I totally agree > that > an XSD and/or human documentation should be sufficient. People want > it > though. They want to be able to generate client stubs so that they > don't have to hand-code HTTP requests. You know all the > arguments.... :( Something like WADL makes sense as a model (and source for code generation) for implementing the server component. Providing it as an API description is 'dangerous', IMHO, as it creates an impression of stability (of the interface) that REST is actually trying to avoid. All the client has to know should be in the media type specification. If the scope and expected stability of the media type makes developers feel they need code generation - maybe that is a hint that the media type is too narrowly defined? Though, I also know all the arguments :-) Jan > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
António Mota wrote (in <http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12537>): > I was trying to answer a question that was directed to me, not writing > a treaty, and I did so in an expeditious manner, from the top of my > head. I take your point and I apologize for giving offense. > I didn't know that [to] post in here one has to be so "purist" > with terminology[.] A person does not have to be a purist with terminology in order to post in rest-discuss and, indeed, the evidence shows that many of us are not purists. I include myself among the guilty (as it were). > [Maybe] I have to read the entire REST dissertation > before I post something. I don’t expect or demand that people read through “Architectural Styles and the Design of Network-based Software Architectures” (<http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm>) before their respective first postings, let alone before each posting. I do think that solid preparation is in order. If a person can read English proficiently, a thorough reading of the dissertation is a great start to answering many questions that the person might have about the Representational State Transfer. The dissertation is available without fee to anybody with HTTP service. The writing is straightforward and readily digestible. (The ideas that the writing presents may well be challenging, but that’s a separate matter.) > What you call misunderstanding is simply [an] imprecise use of > terminology. I wrote "architectural constraints" instead of "sets of > architectural constraints" and I said “interface constraints” instead > of "architectural constraints of the uniform interface". Damn these > simplifications.... I appreciate the clarification. I gave your message (<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12519>) a fifth reading. I still fail to detect the simplicity.
I find that the message implicitly draws distinctions where none exist and that the message uses misnomers that offer no cognitive or lexical savings. (Yes, I understand that frank assessments in a textual medium are bound to give offense. No, I don’t intend to give offense. Yes, I tried other wordings in order to minimize the offense. No, none of those wordings helped.) > [What] strikes me is [that] you were so quick to point [to] such terrible > faults[,] [yet] you said [nothing] in response to the original question[!] I felt that the combination of your response (<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12519>) and my response (<http://tech.groups.yahoo.com/group/rest-discuss/message/12531>) to your response gave a correct, if not very helpful, answer to the original message (<http://tech.dir.groups.yahoo.com/group/rest-discuss/message/12513>). The writer of that original message admits to knowing little about REST but, instead of asking for materials that would teach him about REST per se, asks how to judge the conformance of a Web site to the style. Maybe he’s trying to learn about REST by example. But, if that is so, he’s asking for the knowledge that would obviate any need to learn by example. Maybe he suspects (as is true) that there exist criteria such that failure to meet those criteria is obvious even to the novice. But, if that is so, he’s asking for knowledge that would highlight only Web sites which are not RESTful, thus leaving all good examples to elude him. Were I to respond to the writer of the original message, I would ask about his ultimate goal. > Gee, I thought this list was about [trying] to help other people with > their questions [...] 
The rest-discuss forum is for “[general] discussion about REpresentational State Transfer, the name given to the architectural style which describes the best behaved subset of the World Wide Web (circa 1993), as determined by Roy Fielding” [sic throughout] (<http://tech.groups.yahoo.com/group/rest-discuss/>). That mandate leaves plenty of room for helping people with their questions. But questions that are off topic deserve either no response or gentle pointers to appropriate fora. Questions that show a lack of basic research deserve gentle pointers to basic information. So, while there’s no call for sour attitudes, there’s no obligation to belabor oneself in helping those who fail to help themselves, and there’s certainly no obligation to help those who fail to respect the stated object of discussion. A modicum of diligence goes a long way. > But[,] then again, [it] is much easier to point to [others'] mistakes than point to correct answers... Yes, in general, it is much easier to find fault than to find correct answers. In this case, though, issuing a correct answer is trivial: an architecture is RESTful if and only if the architecture conforms to every constraint of the Representational State Transfer. It is dubious, if not preposterous, that such an answer, albeit correct, is helpful to the beginner. Sometimes the best response to a question is a question. When two men enter a drugstore and ask for advice and supplies so that the one can perform heart surgery on the other, a wise clerk avoids recommending this tool or that pain reliever. Instead, the wise clerk asks why the sick man refuses treatment in a hospital and why his healthy friend believes that performing a home surgery on a vital organ is a good idea. (I thank Jim Coyle and Mal Sharpe (<http://www.coyleandsharpe.com/>) for their inspirational “Druggist” prank [<http://audio.cdbaby.com/4c1626c0/mp3lofi/c/o/coylesharpe-31.mp3>].) -- Please do not include my address in public replies.
I will read public replies on the list.
--- In rest-discuss@yahoogroups.com, Rickard Öberg <rickardoberg@...> wrote: > > > Yes, I was referring to CQS on the architectural level a la Udi, rather > than on an object basis as I think it was described initially. > > /Rickard > Sorry, but I don't see a difference. Command-Query Separation is a basic principle for improving the testability and reliability of code that modifies state. I also think that "at the architectural level" vs "object basis" ruins the point of the principle. As an example, I recently tried commenting on Udi Dahan's blog (he and I are just chatting about something separate from the CQS entry). What happened when I clicked Submit Comment? I got back the following error page: ----- Method Not Implemented POST to /wp-comments-post.php not supported. Apache Server at www.udidahan.com Port 80 ----- First off, an end-user has no idea what the heck that means. It's scary, and it also doesn't make any sense whatsoever. Can't POST to /post.php?! I am using this as an example, not to talk bad about Udi's blog software vendor, but rather to illuminate what you should be asking. (I get similar errors on other blogs, such as Tim Heuer's.) If you're handling a problem at an architectural level, then you are defining constraints that disallow such nonsense as the above from ever happening. As an architect, you really don't want a customer service email saying "The web page told me I can't POST to example.com/post.php". Actually, chances are you won't even get that email, because people seek pleasure and avoid pain, and that email -- unless successfully POSTING to Post.PHP is life or death -- is PAIN. That's why you want to explicitly do CQS in your architecture as part of ensuring correct resource deployment. Architectural principles exist and are enforced so that programmers don't make these common mistakes. By preventing mistakes, you improve the reliability, durability and consistency of your system, end-to-end.
At the object-level, you're enforcing the same kind of "hey, let's make state transformations clear" mission as you are at the architectural level. I also find that REST's set of architectural constraints is great for resource deployment and ad-hoc configuration of resources. So as I see it, CQS and REST go together like PB&J. REST is your architectural constraints, though. CQS is simply your sanity check/design pattern to make sure you're obeying those constraints. The major difference between REST and Udi's CQS is static deployment, which is a faulty assumption on the Web (or any truly dynamic, late-bound app where significant pieces of the puzzle aren't known until "runtime") and also leads to "page request life cycle"-based architectural specifications.
I'm having challenges in certain situations because of the numerous
allowable state transitions. For example, if I have a "phone book"
search application, I might be able to expose the initial search like:
http://myphonebook.com/search?q={searchTerms}
Which might respond with a list of people. But the service doesn't
want to return them all, so it pages them and provides links to the
next/previous page.
<link rel="previous" href="http://myphonebook.com/results/123?p=1"/>
<link rel="next" href="http://myphonebook.com/results/123?p=3"/>
But then a client comes in and says she needs some additional state
transitions for sorting some metadata fields. So, the service exposes
them:
<link rel="previous" href="http://myphonebook.com/results/123?p=1"/>
<link rel="next" href="http://myphonebook.com/results/123?p=3"/>
<link rel="sort.lastname" href="http://myphonebook.com/results/123?s=lastname"/>
<link rel="sort.firstname"
href="http://myphonebook.com/results/123?s=firstname"/>
<link rel="sort.city" href="http://myphonebook.com/results/123?s=city"/>
<link rel="sort.county" href="http://myphonebook.com/results/123?s=county"/>
Later, another client comes and says he needs to provide direct links
to 10 pages of results (e.g. pages 1-10). So, the service exposes
those too:
<link rel="previous" href="http://myphonebook.com/results/123?p=1"/>
<link rel="page.1" href="http://myphonebook.com/results/123?p=1"/>
<link rel="page.2" href="http://myphonebook.com/results/123?p=2"/>
<link rel="page.3" href="http://myphonebook.com/results/123?p=3"/>
<link rel="page.4" href="http://myphonebook.com/results/123?p=4"/>
<link rel="page.5" href="http://myphonebook.com/results/123?p=5"/>
<link rel="page.6" href="http://myphonebook.com/results/123?p=6"/>
<link rel="page.7" href="http://myphonebook.com/results/123?p=7"/>
<link rel="page.8" href="http://myphonebook.com/results/123?p=8"/>
<link rel="page.9" href="http://myphonebook.com/results/123?p=9"/>
<link rel="page.10" href="http://myphonebook.com/results/123?p=10"/>
<link rel="sort.lastname" href="http://myphonebook.com/results/123?s=lastname"/>
<link rel="sort.firstname"
href="http://myphonebook.com/results/123?s=firstname"/>
<link rel="sort.city" href="http://myphonebook.com/results/123?s=city"/>
<link rel="sort.county" href="http://myphonebook.com/results/123?s=county"/>
Even later... well, you get the point. So, how do folks generally
deal with resources that happen to have many potential next states?
At a certain point the list of links can get ridiculous, leading to
the temptation to templatize the URL - but then that parameterization
feels wrong. Thoughts?
I'm also concerned about the tight-coupling of it. I mean, it seems
that URI Templates are less than ideal because it creates a tighter
coupling between service/client. On the other hand, it seems that
I've basically transferred that coupling from the templated URL over
to the "rel" attribute. Any thoughts on this?
Thanks,
--tim
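[The link elements above already carry everything a client needs. A minimal sketch, Python stdlib only, with the document shape copied from the message and the helper name invented, of resolving a transition by its rel value rather than by constructing the URI:]

```python
import xml.etree.ElementTree as ET

RESULTS_PAGE = """
<results>
  <link rel="previous" href="http://myphonebook.com/results/123?p=1"/>
  <link rel="next" href="http://myphonebook.com/results/123?p=3"/>
  <link rel="sort.lastname" href="http://myphonebook.com/results/123?s=lastname"/>
</results>
"""

def href_for(doc, rel):
    """Return the href of the first <link> whose rel matches, else None."""
    root = ET.fromstring(doc)
    for link in root.findall("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

# The client keys off the rel value; the URI structure stays opaque.
print(href_for(RESULTS_PAGE, "next"))
# http://myphonebook.com/results/123?p=3
```

This is where the coupling moves to, as Tim notes: the client and server now share the vocabulary of rel values instead of a URI template.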
Interestingly enough, I found myself thinking about something similar last week. I don't have my thoughts formalized, but I keep asking myself: "how would I lay this out for human navigation?"

In other words, if this were a web page (providing the same capabilities), would I really have a page with all these state transitions (links) available on it, or would I model the solution completely differently? If it would be different for the human usage scenario, why isn't that applicable in the machine/API usage scenario?

Obviously, the situations do differ, but I think I would need to figure out where designing for human consumption fails to meet the needs of machine consumption, and then make changes there.
Eb
--- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote:
>
> I'm having challenges in certain situations because of the numerous
> allowable state transitions. [snip]
One possibility would be to adopt a pattern of sending a "related"
link that points to another representation with all the additional
transition links.
<link rel="related" href="http://myphonebook.com/results/123?r=dhFk3gf" />
You might commit to having a set of expected state transitions in all
search results (paging, start new search, etc.) and then place any
additional links in the related resource.
Another advantage of this pattern is that you have the opportunity to
customize the contents of the related resource based on criteria such
as the type of search, the user doing the search, etc.
mca
http://amundsen.com/blog/
On Mon, Jun 1, 2009 at 10:26, Ebenezer Ikonne <amaeze@...> wrote:
> Interestingly enough I found myself thinking about something similar last week. [snip]
>
> Eb
>
> --- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote:
>> I'm having challenges in certain situations because of the numerous
>> allowable state transitions. [snip]
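[Mike's "related" pattern above can be sketched concretely. In this sketch the XML shapes, the `FAKE_WEB` stand-in for an HTTP GET, and all function names are invented for illustration: the core page carries a few committed transitions plus one "related" link, and the client merges in whatever the related resource advertises.]

```python
import xml.etree.ElementTree as ET

# Stand-ins for two HTTP responses: the core result page and the
# "related" resource carrying the long tail of transition links.
CORE = """
<results>
  <link rel="next" href="http://myphonebook.com/results/123?p=3"/>
  <link rel="related" href="http://myphonebook.com/results/123?r=dhFk3gf"/>
</results>
"""

RELATED = """
<links>
  <link rel="sort.lastname" href="http://myphonebook.com/results/123?s=lastname"/>
  <link rel="page.7" href="http://myphonebook.com/results/123?p=7"/>
</links>
"""

FAKE_WEB = {"http://myphonebook.com/results/123?r=dhFk3gf": RELATED}

def links(doc):
    """Map rel -> href for every <link> child of the document root."""
    return {l.get("rel"): l.get("href")
            for l in ET.fromstring(doc).findall("link")}

def all_links(doc, fetch):
    """Core links, plus whatever the 'related' resource advertises."""
    found = links(doc)
    related = found.pop("related", None)
    if related is not None:
        found.update(links(fetch(related)))
    return found

combined = all_links(CORE, FAKE_WEB.__getitem__)
print(sorted(combined))  # ['next', 'page.7', 'sort.lastname']
```

The extra round trip is the cost; the benefit, as Mike says, is that the related resource can be tailored per search type or per user without bloating the core representation.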
I found text/uri-list on IANA. Is there one for just one URI (or URL)?

Thanks in advance.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
What is the use case?

Subbu

On Mon, Jun 1, 2009 at 8:38 AM, Bill Burke <bburke@...> wrote:
> I found text/uri-list on IANA. Is there one for just one URI (or URL)?
On Mon, Jun 1, 2009 at 11:38 AM, Bill Burke <bburke@...> wrote:
> I found text/uri-list on IANA. Is there one for just one URI (or URL)?

Erm, I expect that a "list of one" would be just fine. 8-)

Mark.
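[A "list of one" indeed parses with the same trivial reader. A sketch of a text/uri-list consumer, following the registered format's rules of one URI per line with '#' lines as comments; the sample body and function name are invented:]

```python
def parse_uri_list(body):
    """Parse a text/uri-list body: one URI per line, '#' lines are comments."""
    uris = []
    for line in body.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            uris.append(line)
    return uris

# A single-URI response is just a one-entry list.
body = "# search result\r\nhttp://myphonebook.com/results/123?p=1\r\n"
print(parse_uri_list(body))  # ['http://myphonebook.com/results/123?p=1']
```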
Then I believe you are saying CQS and REST cannot exist together. A huge part of CQS is carrying forward the context of the original operation, while here you try really hard to ignore that context. Even if you do something like documents (hoping to source events at the server), this will only work in extremely naive circumstances.

> More importantly, by defining multiple messages carrying intent, you force
> the client to understand those, which couples the client to the details of
> what commands exist, which means the client needs an understanding of each
> of those commands. I compare that to the case of understanding the media
> type to send a representation, and understanding how to follow links, and
> I'd argue that the latter has lower coupling, with higher implementation
> cost.

The client knows what operations are actually supported (the messages contain data as well, so wouldn't it still need to know about them?). It is the one that represents the behaviors; the two are conceptually coupled. The only time they are not is when you do not have a behavior-oriented UI (i.e. it is data-oriented). Using CQS, your interface should be behavior-oriented, not data-oriented.

Cheers,

Greg

On Tue, May 19, 2009 at 4:05 PM, Sebastien Lambla <seb@...> wrote:
> I don't disagree with you, it's a matter of tradeoffs.
>
> Designing a ReST architecture requires the client being instructed in what
> to do next by the media type definition, aka your document format. This
> requires a lot of engineering and thought in how to design those, including
> how the interaction can be driven by the server and how the links are to be
> followed, which makes creating them expensive, but hopefully much more
> loosely coupled, reusable and durable.
>
> If you package the intent and the semantics of an operation within a message
> and POST to a queue, you may breach many constraints of ReST in the process,
> which is a tradeoff each developer has to evaluate for themselves.
>
> -----Original Message-----
> From: Greg Young [mailto:gregoryyoung1@...]
> Sent: 19 May 2009 20:31
>
> Yes, things like this can be done...
>
> But when you start going down this path (everything becomes actions like
> these), don't you really lose much of what you had to benefit from in the
> beginning? This is why I was saying I prefer to just use a pipeline on the
> write side.
>
> On Tue, May 19, 2009 at 3:01 PM, Sebastien Lambla <seb@...> wrote:
>>
>>> Suppose someone does a PUT on /Customer/XYZ/Address and the server
>>> receives an updated address. Assuming the domain model accepts two
>>> potential messages for updating an address:
>>> - CorrectCustomerAddress
>>> - CustomerHasMovedToNewAddress
>>>
>>> Which one command message do you send based on the updated address
>>> received in the PUT?
>>
>> I'd model it by specifying two different resources. Given a GET:
>>
>> <address for="/Customer/XYZ">
>>   <action rel="http://actions.acme.org/address-correction" method="put"
>>           href="/Customer/XYZ/Address" />
>>   <action rel="http://actions.acme.org/address-moved" method="post"
>>           href="/Customer/XYZ" />
>>   <content>
>>     <line1>Somewhere</line1>
>>   </content>
>> </address>
>>
>> The UA would process the document, discover two links it can follow with
>> any modifications to the document it wants to submit, and present the user
>> with the option of following either link. How the UA presents the two
>> options is up to how much understanding is hard-coded in the client (for a
>> rel value).
>>
>> What we then have is the same representation being sent to two resources,
>> with various semantics.
>>
>> Another option is to make that kind of decision based on the actual content
>> of the media type. The typical scenario would be HTML forms:
>>
>> POST /Customer/XYZ/Address
>>
>> line1=Somewhere;reason=[correction|moving]
>>
>> Another option in HTML is to simply serve two different pages:
>>
>> GET /Customer/XYZ/Address
>>
>> <a href="Customer/XYZ/Address/Moving.html">I'm moving</a> or <a
>> href="Customer/XYZ/Address/Correction">There was a mistake</a>
>>
>> Each points the result of the form to the correct URI.
>>
>> You can have the same *representation* you wish to change used by multiple
>> *resources*. I don't see why you can't create as many resources as you
>> need, as intent is carried by the link being followed.
>>
>> Seb

--
It is the mark of an educated mind to be able to entertain a thought without accepting it.
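[Seb's address document can be consumed exactly as he describes: the UA looks an action up by its rel URI and learns both the method and the target, so intent is carried by the link followed. A sketch using the document from the quote above; the helper name is invented:]

```python
import xml.etree.ElementTree as ET

ADDRESS_DOC = """
<address for="/Customer/XYZ">
  <action rel="http://actions.acme.org/address-correction" method="put"
          href="/Customer/XYZ/Address" />
  <action rel="http://actions.acme.org/address-moved" method="post"
          href="/Customer/XYZ" />
  <content>
    <line1>Somewhere</line1>
  </content>
</address>
"""

def action_for(doc, rel):
    """Return (method, href) for the action link with the given rel, else None."""
    root = ET.fromstring(doc)
    for action in root.findall("action"):
        if action.get("rel") == rel:
            return action.get("method"), action.get("href")
    return None

# Same representation, two possible targets; the rel carries the intent.
print(action_for(ADDRESS_DOC, "http://actions.acme.org/address-moved"))
# ('post', '/Customer/XYZ')
```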
On Mon, Jun 1, 2009 at 7:37 AM, mike amundsen <mamund@...> wrote:
> One possibility would be to adopt a pattern of sending a "related"
> link that points to another representation with all the additional
> transition links.
>
> <link rel="related" href="http://myphonebook.com/results/123?r=dhFk3gf" />
>
> You might commit to having a set of expected state transitions in all
> search results (paging, start new search, etc.) and then place any
> additional links in the related resource.

I'm new here, but I'd like to dive in anyway just so I can get my thoughts in the mix. To me, this sounds like a good idea: the concept of an extended service.

Take the "paging options", with 10 links for pages 1 to 10. Why stop at 10? Why 10 at all? Why not 5 or 20, etc.? Since one of the points of the idiom is that the client is not supposed to generate these links by hand, there needs to be some mechanism for the service to provide those links to the client.

Now, this can be done several ways. One is simply that there will be no service that can give you the link to, say, page 73. However, there will be services that let you step your way in and zero in on the page. You could say "here are all of the pages, in blocks of 100" (1-100, 101-200, 201-300, ...). Use one of those links and you can get a list of pages in the tens (201-210, 211-220, ...), and finally those blocks will give you the 201, 202, 203, etc. pages. So, there is an algorithm to walk the links without necessarily giving the algorithm to the client. You present it as workflow.

My point here is that this concept can be extended to other services besides paging, like what was suggested here. As I said, I'm new here, and new to thinking in this mode, so I may be utterly off base. Opinions appreciated.

Regards,

Will Hartung
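[Will's "zoom in" idea can be made concrete with a hypothetical server-side link generator; the function name, the rel tokens, and the `?p=lo-hi` URI shape are all invented for illustration. Each level offers blocks one tenth the size of the previous one, until the blocks are single pages:]

```python
def block_links(base, total_pages, start=1, span=100):
    """
    Sketch of hierarchical paging: emit links covering [start, start+span)
    in blocks of span/10, so a client reaches any page in a few hops
    without ever computing a URI itself.
    """
    links = []
    step = max(span // 10, 1)
    for lo in range(start, min(start + span, total_pages + 1), step):
        hi = min(lo + step - 1, total_pages)
        if step == 1:
            links.append(("page.%d" % lo, "%s?p=%d" % (base, lo)))
        else:
            links.append(("pages.%d-%d" % (lo, hi), "%s?p=%d-%d" % (base, lo, hi)))
    return links

base = "http://myphonebook.com/results/123"
# Level 1: hundreds; follow a link, then the server serves tens, then pages.
top = block_links(base, total_pages=734, start=1, span=1000)
print(top[0])  # ('pages.1-100', 'http://myphonebook.com/results/123?p=1-100')
```

Reaching page 203 takes three hops (hundreds, tens, page), and the client only ever follows links the server handed it.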
> Even later... well, you get the point. So, how do folks generally
> deal with resources that happen to have many potential next states?
> At a certain point the list of links can get ridiculous, leading to
> the temptation to templatize the URL - but then that parameterization
> feels wrong. Thoughts?

Forms? select/option etc. The MIME type of HTML will tell you how to understand a form and how to make requests using it.

> I'm also concerned about the tight-coupling of it. I mean, it seems
> that URI Templates are less than ideal because it creates a tighter
> coupling between service/client. On the other hand, it seems that
> I've basically transferred that coupling from the templated URL over
> to the "rel" attribute. Any thoughts on this?
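[The forms suggestion can be sketched concretely: a hypothetical HTML form (names and URL invented for illustration) that a server might serve for the sort transition. The client derives the request from the form itself, as the HTML media type prescribes, rather than from a hard-coded URI; Python stdlib only:]

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

# Hypothetical form the server serves instead of N sort links.
FORM = """
<form action="http://myphonebook.com/results/123" method="get">
  <select name="s">
    <option value="lastname">Last name</option>
    <option value="firstname">First name</option>
  </select>
</form>
"""

class FormReader(HTMLParser):
    """Collect the form target and the options of its select field."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.field = None
        self.options = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.action = attrs.get("action")
        elif tag == "select":
            self.field = attrs.get("name")
        elif tag == "option":
            self.options.append(attrs.get("value"))

reader = FormReader()
reader.feed(FORM)

# The client builds the request from the form, not from a URI template.
url = reader.action + "?" + urlencode({reader.field: reader.options[0]})
print(url)  # http://myphonebook.com/results/123?s=lastname
```

One form replaces an open-ended list of links, which addresses Tim's "many next states" concern directly.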
--- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote:
> <link rel="page.1" href="http://myphonebook.com/results/123?p=1"/>

Does anyone have strong thoughts on what to do with custom relational types? I see a mixture of URIs and string literals in examples. It would seem that if the relational type is not registered, URIs should be leveraged.

Thoughts?
Recently, I've been toying with clients that can scan for pre-defined rel values. I use this pattern to write "bots" that can be "programmed" by the server via links.

I've also used pre-defined rel values to give non-HTML clients hints on how (and where) to display links to the user.

mca
http://amundsen.com/blog/

On Mon, Jun 1, 2009 at 15:44, Ebenezer Ikonne <amaeze@gmail.com> wrote:
> Does anyone have strong thoughts on what to do with custom relational
> types? I see a mixture of URIs and string literals in examples. [snip]
What's been the format of these rel values?

rel = "myValue" or
rel = "http://www.domain.com/rel/myValue"

On Mon, Jun 1, 2009 at 3:52 PM, mike amundsen <mamund@...> wrote:
> Recently, I've been toying with clients that can scan for pre-defined
> rel values. I use this pattern to write "bots" that can be
> "programmed" by the server via links. [snip]
Right now I'm using my own tokens ("delete", "refresh", "clear",
"list", etc.) as this was easier to start with and is only used
in some local utility apps. I'm also not considering clashes w/
existing names (i.e. changing semantics).
mca
http://amundsen.com/blog/
On Mon, Jun 1, 2009 at 16:06, Ebenezer Ikonne <amaeze@...> wrote:
>
>
> What's been the format of these rel values?
>
> rel = "myValue" or
> rel = "http://www.domain.com/rel/myValue"
>
> On Mon, Jun 1, 2009 at 3:52 PM, mike amundsen <mamund@...> wrote:
>>
>>
>> Recently, I've been toying with clients that can scan for pre-defined
>> rel values. I use this pattern to write "bots" that can be
>> "programmed" by the server via links.
>>
>> I've also used pre-defined rel values to give non-HTML clients hints
>> on how (and where) to display links to the user.
>>
>> mca
>> http://amundsen.com/blog/
>>
>> On Mon, Jun 1, 2009 at 15:44, Ebenezer Ikonne <amaeze@...> wrote:
>> > Does anyone have strong thoughts on what to do with custom relational
>> > types? [snip]
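[Mike's "bots programmed by the server" idea reduces to a loop that follows a known rel token until the server stops offering it. A sketch with an in-memory stand-in for the site; the document shapes, URIs, and function name are all invented:]

```python
import xml.etree.ElementTree as ET

# A fake "site": each representation advertises the next transition, so a
# bot that knows only rel tokens ("next", "delete", ...) can walk it.
PAGES = {
    "/results?p=1": '<doc><link rel="next" href="/results?p=2"/></doc>',
    "/results?p=2": '<doc><link rel="next" href="/results?p=3"/></doc>',
    "/results?p=3": "<doc/>",
}

def follow_all(start, rel, fetch):
    """Visit resources by repeatedly following the given rel token."""
    visited, uri = [], start
    while uri is not None:
        visited.append(uri)
        doc = ET.fromstring(fetch(uri))
        link = doc.find("link[@rel='%s']" % rel)
        uri = link.get("href") if link is not None else None
    return visited

print(follow_all("/results?p=1", "next", PAGES.__getitem__))
# ['/results?p=1', '/results?p=2', '/results?p=3']
```

The server "reprograms" the bot simply by changing which links it serves; the client code never changes.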
URIs are safer.

On Jun 1, 2009, at 1:06 PM, Ebenezer Ikonne wrote:
> What's been the format of these rel values?
>
> rel = "myValue" or
> rel = "http://www.domain.com/rel/myValue"

---
http://subbu.org
Safer? Can you share why?

On Mon, Jun 1, 2009 at 4:56 PM, Subbu Allamaraju <subbu@...> wrote:
> URIs are safer.
I mean, to avoid naming conflicts.

On Mon, Jun 1, 2009 at 2:32 PM, Ebenezer Ikonne <amaeze@...> wrote:
> Safer? Can you share why?
They are safer from a clashing perspective. More importantly, that's where most proposals are going, because the existing use of relationships in the HTML and XHTML family of languages had the restriction of using profile=, which no one uses. And then you have the whole CURIEs-in-RDFa-for-relationships proposal, which is quite a mess at the moment, but is still URIs in rels. It's just a safer bet from a web-architecture point of view.

Seb

From: amaeze@...
Date: Mon, 1 Jun 2009 17:32:37 -0400
Subject: Re: [rest-discuss] Re: HATEOAS - Numerous States

> Safer? Can you share why?
I don't want to get into a debate on whether transactions are appropriate for RESTful applications/services or not, but check this out:

http://www.jboss.org/community/wiki/TransactionalsupportforJAXRSbasedapplications

Mike Musgrave of JBoss TXM put it together. Pretty clean API.

I want to see if Atom links can replace some of the published URI schemes, so that we can limit the number of URIs exposed by the system and give more flexibility to the system as a whole. I'm also wondering, if we standardize on link relationships rather than a data format, whether that might free a DTX standard from having to define a data format altogether.

Also, there probably is, or is going to be, support for a compensating transaction engine as well, since we all know 2PC DTX isn't really that appropriate for loosely coupled systems.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
I have not studied the proposal in depth yet, so I may comment more after I do so.

But my immediate response is that I think another less-well-known transaction pattern is more appropriate for the Web in general and ReST in particular. It is variously called "provisional-final" or "options" (among other names). It does not require locking, nor does it require compensating actions (which are either troublesome or impossible).

The basics are:
1. In the first phase, all participants update their resources provisionally (whether by state or by a separate provisional resource).
2. Upon commit, all participants update their resources to their final state (or create final resources).
3. Upon abort or cancel, all participants delete their provisional resources, or mark them cancelled, or create new cancelled resources.

The pattern also allows selective commits or cancels, for example for a bidding process.

It was implemented in OASIS BTP, which could also be made RESTful without a lot of work.
http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=business-transaction
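[The three phases above can be sketched as a toy state machine; this is a hypothetical resource model for illustration, not the JBoss API or the BTP wire protocol. Participants create provisional resources, then finalize or cancel them on the coordinator's decision, with no compensation step needed:]

```python
class Participant:
    """Toy participant in a provisional-final ("options") transaction."""

    def __init__(self):
        self.resources = {}  # id -> "provisional" | "final" | "cancelled"

    def prepare(self, rid):            # phase 1: provisional update
        self.resources[rid] = "provisional"

    def commit(self, rid):             # phase 2a: make it final
        assert self.resources[rid] == "provisional"
        self.resources[rid] = "final"

    def cancel(self, rid):             # phase 2b: discard, no compensation
        assert self.resources[rid] == "provisional"
        self.resources[rid] = "cancelled"


a, b = Participant(), Participant()
for p in (a, b):
    p.prepare("order-42")

# Selective outcome, e.g. a bidding process: accept one, cancel the other.
a.commit("order-42")
b.cancel("order-42")
print(a.resources["order-42"], b.resources["order-42"])  # final cancelled
```

In a RESTful rendering, `prepare` would be a PUT/POST creating a provisional resource, and `commit`/`cancel` would be state changes on (or deletions of) that resource.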
I know JBoss TX has a BTP implementation. This here, though, is a 2PC DTX RESTful API.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
In the never ending quest to replace the acronym HATEOAS with something more intuitive and pronounceable, I humbly submit Yet Another Attempt To Replace The Acronym (YAATRTA) HATEOAS: HYDEPR (HYpermedia DEscribes PRotocols). Credit goes to Jim Webber for coining the term "Hypermedia Describes Protocols". My "value add" was simply to turn it into an acronym: HYDEPR (pronounced HIGH-de-pur). (Hey, that's what analysts do. :-)) Here's how to use it in context: "Perhaps the most important RESTful 'uniform interface' constraint is the HYDEPR constraint (formerly known as the HATEOAS constraint)." Read this post for more information: Epiphany: Replace HATEOAS With "Hypermedia Describes Protocols"<http://blogs.gartner.com/nick_gall/2009/06/02/epiphany-replace-hateoas-with-hypermedia-describes-protocols/> -- Nick -- Nick Gall Phone: +1.781.608.5871 AOL IM: Nicholas Gall Yahoo IM: nick_gall_1117 MSN IM: (same as email) Google Talk: (same as email) Email: nick.gall AT-SIGN gmail DOT com Weblog: http://ironick.typepad.com/ironick/
On Wed, Jun 03, 2009 at 09:33:29AM -0400, Nick Gall wrote: > In the never ending quest to replace the acronym HATEOAS with something more > intuitive and pronounceable, I humbly submit Yet Another Attempt To Replace > The Acronym (YAATRTA) HATEOAS: HYDEPR (HYpermedia DEscribes PRotocols). > Credit goes to Jim Webber for coining the term "Hypermedia Describes > Protocols". My "value add" was simply to turn it into an acronym: HYDEPR > (pronounced HIGH-de-pur). (Hey, that's what analysts do. :-)) > Here's how to use it in context: "Perhaps the most important RESTful > 'uniform interface' constraint is the HYDEPR constraint (formerly known as > the HATEOAS constraint)." I think the phrase "Hypermedia Describes Protocols" is pretty non-intuitive, and certainly doesn't convey as much as HATEOAS does when expanded. Additionally, I think they're both about as unpronounceable as each other. Heh. -- Noah Slater, http://tumbolia.org/nslater
On Wed, Jun 3, 2009 at 9:42 AM, Noah Slater <nslater@...> wrote: > I think the phrase "Hypermedia Describes Protocols" is pretty > non-intuitive, and > certainly doesn't convey as much as HATEOAS does when expanded. > Additionally, I > think they're both about as unpronounceable as each other. Heh. > That's why the quest is never ending! Heh. -- Nick -- Nick Gall Phone: +1.781.608.5871 AOL IM: Nicholas Gall Yahoo IM: nick_gall_1117 MSN IM: (same as email) Google Talk: (same as email) Email: nick.gall AT-SIGN gmail DOT com Weblog: http://ironick.typepad.com/ironick/
Let's assume for a moment that the following supposition is true: at their heart, many of the systems we develop are behavior-centric, not data-centric.

REST is essentially a data-centric interchange. We can in various ways build behavioral interfaces as data representations. Seb gave good examples using his rel links; another good example would be modelling a resource as a state machine. These solutions do, however, present a rather large impedance mismatch with our behavioral systems (oftentimes creating a large language gap, for example).

When does this impedance mismatch become appropriate to take on?

Greg

--
It is the mark of an educated mind to be able to entertain a thought without accepting it.
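For what it's worth, "modelling a resource as a state machine" might look something like this minimal sketch. The order states, transition names, and links() method are invented for the example; a real server would render the available transitions as hypermedia links in the representation:

```python
# Hypothetical order resource as a state machine: each state advertises
# only the transitions that are legal from it.
TRANSITIONS = {
    "unpaid":    {"payment": "paid", "cancel": "cancelled"},
    "paid":      {"ship": "shipped"},
    "shipped":   {},   # terminal: no links, like a "goodbye" page
    "cancelled": {},
}

class Order:
    def __init__(self):
        self.state = "unpaid"

    def links(self):
        """The rels the server would emit in the current representation."""
        return sorted(TRANSITIONS[self.state])

    def follow(self, rel):
        """Client follows a server-provided link; state advances."""
        self.state = TRANSITIONS[self.state][rel]
```

The impedance mismatch Greg describes is visible even here: the behavioral intent ("pay", "ship") survives only as rel names layered over a data exchange.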
Perhaps LAST -- "links as state transitions." I can imagine all sorts
of "Who's on first" style routines as people talk about the "last REST
constraint...."
--peter
On Wed, Jun 3, 2009 at 8:58 AM, Josh Sled <jsled@...> wrote:
> Nick Gall <nick.gall@...> writes:
>> HATEOAS: HYDEPR
>
> "HATEOAS" is atrocious, and "HYDEPR" is not much better.
>
> I've never found "the hypermedia constraint" or "hypermedia" to be
> insufficient.
>
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
For me, the more important word in the expanded HATEOAS is actually not Hypermedia but Engine, implying that it is not sufficient for a representation "to have links": it's important to understand that those links are the means the server uses to drive the application from one state to another. The links per se are useless unless they are an expression of the intentions of the server.

If you receive a representation of a resource from the server with no links, it nevertheless complies with HATEOAS; it means there are no more states to go to from there (imagine a "goodbye" page).

Now if you use JavaScript to insert a bunch of links in it, that doesn't make it more HATEOAS... Actually, if you do that you are breaking HATEOAS, because you can insert a link that you know corresponds to some state of the application (now it has hypermedia), but since it was not originated by the server, it's not an engine of anything... You don't break the H but you break the E... :)

So I think I'll stick with HATEOAS.

Josh Sled wrote:
> Nick Gall <nick.gall@...> writes:
>> HATEOAS: HYDEPR
>
> "HATEOAS" is atrocious, and "HYDEPR" is not much better.
>
> I've never found "the hypermedia constraint" or "hypermedia" to be
> insufficient.
+1 to Peter for "LAST"
"Which is the hypermedia constraint?"
"The LAST one."
"Yeah, but which one is last?"
"The hypermedia one."
"But that's the first one, right?"
"No, it's the LAST one."
"Which is the last one?"
"The hypermedia one!"
mca
http://amundsen.com/blog/
On Wed, Jun 3, 2009 at 10:06, Peter Keane <pkeane@...> wrote:
> Perhaps LAST -- "links as state transitions." I can image all sorts
> of "Who's on first" style routines as people talk about the "last REST
> constraint...."
>
> --peter
>
> On Wed, Jun 3, 2009 at 8:58 AM, Josh Sled <jsled@...> wrote:
> > Nick Gall <nick.gall@...> writes:
> >> HATEOAS: HYDEPR
> >
> > "HATEOAS" is atrocious, and "HYDEPR" is not much better.
> >
> > I've never found "the hypermedia constraint" or "hypermedia" to be
> > insufficient.
> >
> > --
> > ...jsled
> > http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
> >
2009/6/3 António Mota <amsmota@...>:
> Now if you use Javascript to insert a bunch of links in it, that
> doesn't make it more HATEOAS... Actually, if you do that you are
> breaking HATEOAS, because you can insert a link that you know
> corresponds to some state of the application (now it has hypermedia),
> but since it was not originated by the server, it's not an engine of
> anything... You don't break the H but you break the E... :)

Per the code-on-demand bits of REST, presumably the server DID originate that JavaScript, so I don't think that breaks HATEOAS.

--peter
That was an illustration I used to make my point, not necessarily something technically correct. But forget about the last paragraph if you don't like images; my point is still the same.

My point: HATEOAS is not about the Hypermedia, it's about the Engine that uses the Hypermedia. A "goodbye" page with no links in it doesn't break HATEOAS; it simply means that there are no states to go to.

Peter Keane wrote:
> Per the code-on-demand bits of REST, presumably the server DID
> originate that javascript, so I don't think that breaks HATEOAS.
>
> --peter
On Wednesday 03 June 2009, Nick Gall wrote:
> In the never ending quest to replace the acronym HATEOAS with
> something more intuitive and pronounceable, I humbly submit Yet
> Another Attempt To Replace The Acronym (YAATRTA) HATEOAS: HYDEPR
> (HYpermedia DEscribes PRotocols). Credit goes to Jim Webber for
> coining the term "Hypermedia Describes Protocols". My "value add" was
> simply to turn it into an acronym: HYDEPR (pronounced HIGH-de-pur).
> (Hey, that's what analysts do. :-)) Here's how to use it in context:
> "Perhaps the most important RESTful 'uniform interface' constraint is
> the HYDEPR constraint (formerly known as the HATEOAS constraint)."
>
> Read this post for more information: Epiphany: Replace HATEOAS With
> "Hypermedia Describes Protocols"
> <http://blogs.gartner.com/nick_gall/2009/06/02/epiphany-replace-hateoas-with-hypermedia-describes-protocols/>
>
> -- Nick

I've read your posting and Jim Webber's presentation you're referring to. For maximum impact I'll put it very boldly and far beyond my level of understanding: HATEOAS by any other name is a pipe dream.

The server can put all kinds of links into its responses, but that doesn't do any good unless the client knows what to look for. So, looking at Jim's restbucks example, how does the client know to look for LINK elements with rel="payment"? If this isn't a protocol I don't know what is. Sure, some interactions are lifted into a generic protocol of interaction with resources. That doesn't mean the protocol isn't there from the start. If the client doesn't know about the protocol it is paralyzed, for it has no idea which of the available links to follow to achieve an intended effect.

HATEOAS provides a level of indirection between state transitions and their associated endpoints. HATEOAS provides a generic way to indicate available state transitions. These are worthwhile features, but that's about all.

Michael

--
Michael Schuerig
mailto:michael@...
http://www.schuerig.de/michael/
On Jun 3, 2009, at 8:08 AM, Michael Schuerig wrote:
> HATEOAS provides a level of indirection between state transitions and
> their associated endpoints. HATEOAS provides a generic way to indicate
> available state transitions. These are worthwhile features, but that's
> about all.

Well said. It seems to me that, sometimes, this term gets stretched wildly to imply a broader concept or philosophy. It is nothing more than an indirection to communicate possible state transitions, and requires clients to *fully* understand the syntax and semantics of each transition. Is that a big deal? Maybe, or maybe not. It just depends on the application.

Subbu
It may not exist now, but it must be possible to produce a hypermedia format capable of fully describing state-transition semantics/syntax?

Cheers,
Mike

Subbu Allamaraju wrote:
> On Jun 3, 2009, at 8:08 AM, Michael Schuerig wrote:
>> HATEOAS provides a level of indirection between state transitions and
>> their associated endpoints. HATEOAS provides a generic way to indicate
>> available state transitions. These are worthwhile features, but that's
>> about all.
>
> Well said. It seems to me that, some times, this term gets stretched
> wildly to imply a broader concept or philosophy. It is nothing more
> than an indirection to communicate possible state transitions, and
> requires clients to *fully* understand the syntax and semantics of
> each transition. Is that a big deal? May be, or may be not. It just
> depends on the application.
>
> Subbu
I don't think that REST is "a data-centric interchange" in its core philosophy. IMHO, the data-centricity is how we as a community have erroneously been using the term REST. I think it's important to challenge this impedance mismatch now :). I think this is one of the points that Roy Fielding was trying to make half a year ago:
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

-Solomon

On Wed, Jun 3, 2009 at 10:01 AM, Greg Young <gregoryyoung1@...> wrote:
> Let's assume for a moment that the following supposition is true: At
> their heart many systems that we develop are behavior not data
> centric.
>
> REST is essentially a data centric interchange. We can in various ways
> build behavioral interfaces as data representations. Seb gave good
> examples using his rel links, another good example would be modelling
> a resource as a state machine. These solutions do however offer a
> rather large impedance mismatch with that of our behavioral system
> (often times creating a large language gap as an example).
>
> When does this impedance mismatch become appropriate to take on?
>
> Greg
Subbu Allamaraju wrote:
> It is nothing more than an indirection to communicate possible state
> transitions, and requires clients to *fully* understand the syntax and
> semantics of each transition. Is that a big deal? May be, or may be
> not. It just depends on the application.

That is not completely accurate; a client can understand *only* a subset of those transitions. That's what makes HATEOAS so effective in decoupling clients: your server can extend the services it provides at any point without breaking the existing clients, which will continue to work as they were, without the new functionality, of course. But new clients can be built that use those new capabilities, and they can coexist with the older clients without a problem. Actually, it's a little bit like OSGi, but in a different context, of course...
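António's decoupling argument can be illustrated with a toy client that simply ignores link relations it does not understand. The rel names and the two server responses below are hypothetical, invented purely for the example:

```python
# An "older" client that understands only a fixed set of rels. When the
# server later adds new transitions, this client keeps working unchanged;
# it just never follows the links it doesn't know about.
KNOWN_RELS = {"self", "edit"}

def actionable(links):
    """Keep only the transitions this client knows how to follow."""
    return {rel: href for rel, href in links.items() if rel in KNOWN_RELS}

# Server response before and after a (hypothetical) service extension:
old_response = {"self": "/person/101", "edit": "/person/101"}
new_response = {"self": "/person/101", "edit": "/person/101",
                "http://example.org/rel/merge": "/person/101/merge"}
```

Subbu's point still holds for each individual transition: to actually follow "edit", the client must fully understand its syntax and semantics, which arrive out of band.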
Let's not forget that Code-On-Demand can, maybe in a limited way, do that...

> The transitions may be determined (or limited by) the client's
> knowledge of media types and resource communication mechanisms, both
> of which may be improved on-the-fly (e.g., code-on-demand).

Mike Kelly wrote:
> It may not exist now, but it must be possible to produce an hypermedia
> format capable of fully describing state-transition semantics/syntax?
>
> Cheers,
> Mike
António Mota <amsmota@...> writes:
> For me, actually the more important word in the expanded HATEOAS is not
> Hipermedia, but Engine. Implying that is not sufficient for a representation
> "to have links", it's important to understand that those links are the way
> used by the server to drive the application from one state to another. The
> links per se are useless, unless they are a expression of the intentions of
> the server.
Sure. "Hypermedia", "the Hypermedia constraint", "HATEOAS", "HYDEPR"
... whatever the term, needs to be fully expanded, described,
ascertained, used, &c.
But for the purposes of having a short way to refer to the concept, I've
never found "the hypermedia constraint" or "hypermedia" to be
insufficient. And they're far preferable to "HATEOAS".
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
Please read it again. For any transition that the client wants to follow, it needs to *fully* understand the syntax and semantics. Subbu On Jun 3, 2009, at 8:58 AM, António Mota wrote: > Subbu Allamaraju wrote: >> It is nothing more than an indirection to communicate possible >> state transitions, and >> requires clients to *fully* understand the syntax and semantics of >> each transition. Is that a big deal? May be, or may be not. It just >> depends on the application. > That is not completely accurate, a client can understand *only* a > sub-set of those transitions. That's what make HATEOAS so effective > in decoupling clients, your server can extend the services it > provides at any point without breaking the existing clients, that > will continue to work as they were, without the new functionalities > of course. But new clients can be built that use those new > capabilities and they can coexist with the older clients without a > problem. Actually, it's a little bit like OSGi, but in a different > context, of course... > >
I think you're too late....

Nick Gall wrote:
> In the never ending quest to replace the acronym HATEOAS with something
> more intuitive and pronounceable, I humbly submit Yet Another Attempt To
> Replace The Acronym (YAATRTA) HATEOAS: HYDEPR (HYpermedia DEscribes
> PRotocols).
>
> -- Nick

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
What I expect to happen here, although it has not happened yet due to the feeble attempts to do the same using WS-* ideas, is for at least semi-standard application protocols to emerge using hypertext as their engine. You can see that with the recent proposals for RESTful transaction protocols, although I don't think anybody has nailed it yet. Likewise I expect standardized protocols to emerge for ordering, fulfillment, and payment, both C2B and B2B. And lots of room for creativity about how to abstract and generalize. In other words, clients will start to understand how to follow hypertext-driven state transitions, to some extent using code-on-demand. So we are still at the beginning of this conversation. Not too late for anything.
> Well said. It seems to me that, some times, this term gets stretched
> wildly to imply a broader concept or philosophy. It is nothing more
> than an indirection to communicate possible state transitions, and
> requires clients to *fully* understand the syntax and semantics of
> each transition. Is that a big deal? May be, or may be not. It just
> depends on the application.

I did. Maybe you forgot to write the "for any transition that the client wants to follow" part?

Nevertheless, I think it's important to stress the point I mentioned when talking about HATEOAS, because, imho, it's one of the big advantages of using a RESTful style.

Subbu Allamaraju wrote:
> Please read it again. For any transition that the client wants to
> follow, it needs to *fully* understand the syntax and semantics.
>
> Subbu
Josh Sled wrote:
> Sure. "Hypermedia", "the Hypermedia constraint", "HATEOAS", "HYDEPR"
> ... whatever the term, needs to be fully expanded, described,
> ascertained, used, &c.

I agree it has to be fully described. However, I think much of the confusion and misunderstanding surrounding HATEOAS is precisely because of the simplistic way of explaining it in terms of just "hypermedia", or "the hypermedia constraint", or even "connectedness".

> The RESTful Web Services <http://www.oreilly.com/catalog/9780596529260/>
> book doesn't help the situation by renaming the hypertext engine as
> /connectedness/. That does nothing but obscure its role as the driving
> force in RESTful applications.

So maybe we at least say it as Roy Fielding said it, and refer to HATEOAS in a simplified way as the *hypertext engine*, or *hypermedia engine*? As I said earlier, I think the key word here is "engine". Or maybe this is one of those concepts that has no simplification?

> But for the purposes of having a short way to refer to the concept, I've
> never found "the hypermedia constraint" or "hypermedia" to be
> insufficient. And they're far preferable to "HATEOAS".
On Jun 3, 2009, at 9:21 AM, António Mota wrote: > I did. Maybe you forgot to write the "for any transition that the > client wants to follow" part? :) > Nevertheless, I think it's important to stress the point I mentioned > when talking about HATEOAS, because, imho, it's one of the big > advantages of using a Restfull style. That is usually the characteristic of extensible formats. Subbu
Greg,

Before I attempt a response to this, I have to clarify a point. Links are the server instructing the client which state transitions are available, and the client knows how to manage those state transitions because their semantics are carried out of band. Hence rel=change-of-address and rel=replace-address carry different meanings that help the client decide which state transition to apply. So while partial state is exchanged back and forth between clients and servers, such an exchange is not context-free. While the interface is generic and the state descriptive, the trigger of the state change (the navigation) is completely dependent on the current context in which the link was given.

I'm very much unsure whether the interchange is data-centric; I would say it is an exchange of data that is driven by links that are qualified in intent and relationship. In other words, I'm unsure what the difference is between a command available contextually to a client called ImMovingCommand and an address resource that gets sent to a link for which the relationship is communicated OOB as an address change.

I'm pretty sure I'm missing a very big boat there, so any help to clarify would be very much appreciated.

Seb

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Greg Young
Sent: 03 June 2009 15:02
To: Rest List
Subject: [rest-discuss] Impedance Mismatch
Are you saying that when I click on a link, my browser has some understanding of the semantics of the link? Are these semantics just "this is the link I follow when I get a click event on a certain area of the screen"? Or are you saying that there is something more than this? Just trying to understand your statement in the context of an HTML browser. Thanks, Andrew --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote: > > Please read it again. For any transition that the client wants to > follow, it needs to *fully* understand the syntax and semantics. > > Subbu > > On Jun 3, 2009, at 8:58 AM, António Mota wrote: > > > Subbu Allamaraju wrote: > >> It is nothing more than an indirection to communicate possible > >> state transitions, and > >> requires clients to *fully* understand the syntax and semantics of > >> each transition. Is that a big deal? May be, or may be not. It just > >> depends on the application. > > That is not completely accurate, a client can understand *only* a > > sub-set of those transitions. That's what make HATEOAS so effective > > in decoupling clients, your server can extend the services it > > provides at any point without breaking the existing clients, that > > will continue to work as they were, without the new functionalities > > of course. But new clients can be built that use those new > > capabilities and they can coexist with the older clients without a > > problem. Actually, it's a little bit like OSGi, but in a different > > context, of course... > > > > >
--- In rest-discuss@yahoogroups.com, Nick Gall <nick.gall@...> wrote: > > In the never ending quest to replace the acronym HATEOAS with something more > intuitive and pronounceable, Just find better things to do. If you are seriously pronouncing "HATEOAS" often, then you are "doing it wrong". The idea that you need to say "Hypermedia as the engine of application state" more than once in a conversation is the flaw in your thinking. In other words, what you are effectively doing, without consciously realizing it, is saying, "Can we get away with this, if it sounds cool and everyone agrees, forgetting about the agenda of, you know, teaching people to think?" Sorry, putting my foot down: Just find better things to do.
--- In rest-discuss@yahoogroups.com, Peter Keane <pkeane@...> wrote: > > Perhaps LAST -- "links as state transitions." I can image all sorts > of "Who's on first" style routines as people talk about the "last REST > constraint...." > > --peter If you are trying to dumb things down for people, then the best route is to provide reference implementations they can simply copy. (I'm being serious. People tend to learn best constructively.) We shouldn't be surprised as engineers if we start getting paid like air conditioner repairmen, because we can't handle stringing together a few words in a sentence. I don't know why, but programmers are universally afraid of providing examples and prefer instead to talk in terms of theory. Execute. Exemplify. Don't theorize. Otherwise you are just wasting bandwidth painting bike sheds.
--- In rest-discuss@yahoogroups.com, Greg Young <gregoryyoung1@...> wrote: > > Let's assume for a moment that the following supposition is true: At > their heart many systems that we develop are behavior not data > centric. > > REST is essentially a data centric interchange. We can in various ways > build behavioral interfaces as data representations. Seb gave good > examples using his rel links, another good example would be modelling > a resource as a state machine. These solutions do however offer a > rather large impedance mismatch with that of our behavioral system > (often times creating a large language gap as an example). > > When does this impedance mismatch become appropriate to take on? > > Greg So, the first step is to recognize this is not at all a REST question. Put it to you this way, at what point does it make more sense to write things declaratively than imperatively? With declarative, you are saying "be"; with imperative, you are saying "do". This is a pretty huge consequence. In order to be, you need to answer what. In order to do, you need to answer how. If you can make the leap from do to be, then <s>do</s> be it. For one, I think you can reason about reliability of your systems much easier if everything is declarative. Most software problems are related to poorly configured software, in part because we often don't know what our settings do, or we can't easily understand how our production environments differ from development environments. For me, the value of a declarative system is that my CEO can query and drill-down into any aspect of my system's design. There are no blackboxes; just mathematical reasoning. I also think "modeling a resource as a state machine" is an oversimplification. There are many ways to implement a state machine, not all of them correct from a decoupling standpoint. 
Moreover, modeling programs as Finite State Automata is not easy; once done correctly, however, the end result is highly robust software that correctly reuses itself. You can very easily stick guards everywhere in your state machine and then say, "Look at this state machine I built", and "mommy might stick it on the fridge", but you've just littered your architecture with termites eager to decay every arch. A state machine without guards is effectively a declarative solution, by the way. Provided, of course, that the state machines it cooperates with are designed the same way. Pat Helland actually has a CIDR paper that sort of addresses such a theme: Can we really continue building reliable components out of unreliable ones? He uses a quicksand metaphor, and doesn't deal directly with object state machines, but the lesson applies regardless.
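A minimal Python sketch of the guard idea (the states, events, and guard conditions here are hypothetical, not taken from any post in this thread): the transition table alone is purely declarative data, while the guards are the imperative hooks that couple it to outside conditions.

```python
# A tiny resource-as-state-machine sketch. States and guards are
# hypothetical; a real service would derive these from its domain.
TRANSITIONS = {
    ("draft", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "draft",
}

# Guards couple the machine to external conditions; the post above
# argues these are the "termites" that erode decoupling.
GUARDS = {
    ("submitted", "approve"): lambda ctx: ctx.get("reviewer") is not None,
}

def fire(state, event, ctx=None):
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"no transition for {key}")
    guard = GUARDS.get(key)
    if guard and not guard(ctx or {}):
        raise PermissionError(f"guard rejected {key}")
    return TRANSITIONS[key]

assert fire("draft", "submit") == "submitted"
assert fire("submitted", "approve", {"reviewer": "ann"}) == "approved"
```

Delete the GUARDS table and what remains is exactly the guard-free, declarative table the post describes.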
On Thu, Jun 4, 2009 at 1:13 AM, johnzabroski <johnzabroski@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Nick Gall <nick.gall@...> wrote: > > > > In the never ending quest to replace the acronym HATEOAS with something more > > intuitive and pronounceable, > > Just find better things to do. I'm ALWAYS looking for better things to do. > If you are seriously pronouncing "HATEOAS" often, then you are "doing it wrong". The idea that you need to say "Hypermedia as the engine of application state" more than once in a conversation is the flaw in your thinking. I'm NOT saying HATEOAS more than once in a conversation, on average. The problem is that I have a hundred or more conversations about REST in the course of a year. Plus, I just don't like words with the word HATE in them. :-) -- Nick
On Mon, Jun 1, 2009 at 3:22 PM, Devdatta <dev.akhawe@...> wrote: >> >> Even later... well, you get the point. So, how do folks generally >> deal with resources that happen to have many potential next states? >> At a certain point the list of links can get ridiculous, leading to >> the temptation to templatize the URL - but then that parameterization >> feels wrong. Thoughts? >> > > Forms ? select/option etc. > The mime type of html will tell you how to understand that form and > how to make requests using that form. So, one thing I left out was that the results are already in a defined format - namely, a custom extension of OpenSearch, which is an extension to Atom. I suppose I could plug the form constructs in, but I'm not sure of the best way to convey, through the content type alone, that these are the forms they know and love. The "link" element was already supported in the root content type (Atom). I suppose there's no magic here, I just have to create a schema for it and clients need to learn it. --tim
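For reference, a client-side sketch of the "link" element approach in Python (the entry, the rel URIs, and the hrefs are made up for illustration): the client extracts the advertised next states by their rel values rather than by guessing URI structure.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

# A hypothetical Atom entry advertising next states as rel-typed links.
entry_xml = f"""
<entry xmlns="{ATOM}">
  <title>order 42</title>
  <link rel="self" href="/orders/42"/>
  <link rel="http://example.org/rels/cancel" href="/orders/42/cancel"/>
  <link rel="http://example.org/rels/pay" href="/orders/42/payment"/>
</entry>
"""

def links_by_rel(xml):
    """Map each link's rel value to its href."""
    root = ET.fromstring(xml)
    return {l.get("rel"): l.get("href")
            for l in root.findall(f"{{{ATOM}}}link")}

rels = links_by_rel(entry_xml)
assert rels["http://example.org/rels/pay"] == "/orders/42/payment"
```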
> For me, the value of a declarative system is that my CEO can query and > drill-down into any aspect of my system's design. There are no blackboxes; > just mathematical reasoning. I am not sure I want to touch this one with a 30-foot pole, but ... Do CEOs need access to *any* aspect of the system, or only those with business value? More often than not what your CEO is interested in is not your transactional objects but roll-ups etc. and analysis performed upon them. I understand the argument for OLAP, but using this as an argument for REST seems to me to be like using the fact that they wear skates as an argument for why I should watch tennis. Beyond that, for a transactional system there is still a "black box" in terms of how data affects other data. Consider an example of a resource that tells me sales for the day and a resource that accepts sales. There is a direct link between these two that may or may not be exposed. BTW the solutions I use are fairly far from imperative. Currently I am using what would be categorized as MEST (yes, there can be argument over whether MEST is just the new buzzword for messaging). I have lately been using resources on my read side and messaging on my transactional side. I have tried to use REST more completely but am running into the fact that it just doesn't seem to make any sense whatsoever in a complex transactional situation. On Thu, Jun 4, 2009 at 2:50 AM, johnzabroski <johnzabroski@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Greg Young <gregoryyoung1@...> wrote: >> >> Let's assume for a moment that the following supposition is true: At >> their heart many systems that we develop are behavior not data >> centric. >> >> REST is essentially a data centric interchange. We can in various ways >> build behavioral interfaces as data representations. Seb gave good >> examples using his rel links, another good example would be modelling >> a resource as a state machine. 
These solutions do however offer a >> rather large impedance mismatch with that of our behavioral system >> (often times creating a large language gap as an example). >> >> When does this impedance mismatch become appropriate to take on? >> >> Greg > > So, the first step is to recognize this is not at all a REST question. Put > it to you this way, at what point does it make more sense to write things > declaratively than imperatively? > > With declarative, you are saying "be"; with imperative, you are saying "do". > > This is a pretty huge consequence. In order to be, you need to answer what. > In order to do, you need to answer how. If you can make the leap from do to > be, then <s>do</s> be it. For one, I think you can reason about reliability > of your systems much easier if everything is declarative. Most software > problems are related to poorly configured software, in part because we often > don't know what our settings do, or we can't easily understand how our > production environments differ from development environments. > > For me, the value of a declarative system is that my CEO can query and > drill-down into any aspect of my system's design. There are no blackboxes; > just mathematical reasoning. > > I also think "modeling a resource as a state machine" is an > oversimplification. There are many ways to implement a state machine, not > all of them correct from a decoupling standpoint. Moreover, modeling > programs as Finite State Automata is not easy, however once done correctly > the end result is highly robust software that correctly reuses itself. You > can very easily stick guards everywhere in your state machine and then say, > "Look at this state machine I built", and "mommy might stick it on the > fridge", but you've just littered your architecture with termites eager to > decay every arch. > > A state machine without guards is effectively a declarative solution, by the > way. 
Provided, of course, that the state machines it cooperates with are > designed the same way. Pat Helland actually has a CIDR paper that sort of > addresses such a theme: Can we really continue building reliable components > out of unreliable ones? He uses a quick sand metaphor, and doesn't deal > directly with object state machines, but the lesson applies regardless. > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
Greg Young wrote: > > Beyond that for a transactional system there is still a "black box" in > terms of how data affects other data. Consider an example of a > resource that tells me sales for the day and a resource that accepts > sales. There is a direct link between these two that may or may not be > exposed. > Appropriate resource design and use of hyperlinks can expose those relationships. If the resource that accepts sales also lists all sales, then a list for today should be treated as a subset of that resource POST /sales GET /sales GET /sales;today The effect of a POST can easily be understood by proxy caches and intermediaries, so that doesn't seem like a black box. Cheers, Mike
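A back-of-the-envelope Python sketch of the /sales design above (the storage and handler names are hypothetical): the point is that /sales;today is only a filtered view of the same collection that POST /sales appends to, not a second source of truth.

```python
# Hypothetical in-memory store backing both /sales and /sales;today.
sales = []

def post_sale(amount, day):          # POST /sales
    sales.append({"amount": amount, "day": day})

def get_sales():                     # GET /sales
    return list(sales)

def get_sales_today(today):          # GET /sales;today
    # The subset is derived from the parent collection, so the
    # relationship between the two resources is explicit.
    return [s for s in sales if s["day"] == today]

post_sale(100, "2009-06-03")
post_sale(250, "2009-06-04")
assert len(get_sales()) == 2
assert get_sales_today("2009-06-04") == [{"amount": 250, "day": "2009-06-04"}]
```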
Sure, for a proxy cache or for intermediaries. But it's still a black box for a CEO, which was the context under discussion. On Thu, Jun 4, 2009 at 8:48 AM, Mike Kelly <mike@...> wrote: > Greg Young wrote: >> >> Beyond that for a transactional system there is still a "black box" in >> terms of how data affects other data. Consider an example of a >> resource that tells me sales for the day and a resource that accepts >> sales. There is a direct link between these two that may or may not be >> exposed. >> > > Appropriate resource design and use of hyperlinks can expose those > relationships. > > If the resource that accepts sales also lists all sales, then a list for > today should be treated as a subset of that resource > > POST /sales > GET /sales > > GET /sales;today > > Affect of POST can easily be understood by proxy caches and intermediaries > so that doesn't seem black box > > Cheers, > Mike > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
.. are you implying most CEOs are less intelligent than a proxy?! Maybe we just need to use better/more appropriate grammar in our URIs? Greg Young wrote: > Sure for a proxy cache or for intermediaries > > > But its still a black box for a CEO!!! which was the context under discussion. > > > > > On Thu, Jun 4, 2009 at 8:48 AM, Mike Kelly <mike@...> wrote: > >> Greg Young wrote: >> >>> Beyond that for a transactional system there is still a "black box" in >>> terms of how data affects other data. Consider an example of a >>> resource that tells me sales for the day and a resource that accepts >>> sales. There is a direct link between these two that may or may not be >>> exposed. >>> >>> >> Appropriate resource design and use of hyperlinks can expose those >> relationships. >> >> If the resource that accepts sales also lists all sales, then a list for >> today should be treated as a subset of that resource >> >> POST /sales >> GET /sales >> >> GET /sales;today >> >> Affect of POST can easily be understood by proxy caches and intermediaries >> so that doesn't seem black box >> >> Cheers, >> Mike >> >> > > > >
What I am saying is that a proxy only needs to know that there is *a* relationship between two things a CEO needs to understand what that relationship is. On Thu, Jun 4, 2009 at 8:58 AM, Mike Kelly <mike@...> wrote: > .. are you implying most CEOs are less intelligent than a proxy?! > > Maybe we just need to use better/more appropriate grammar in our URIs? > > > > Greg Young wrote: >> >> Sure for a proxy cache or for intermediaries >> >> >> But its still a black box for a CEO!!! which was the context under >> discussion. >> >> >> >> >> On Thu, Jun 4, 2009 at 8:48 AM, Mike Kelly <mike@...> wrote: >> >>> >>> Greg Young wrote: >>> >>>> >>>> Beyond that for a transactional system there is still a "black box" in >>>> terms of how data affects other data. Consider an example of a >>>> resource that tells me sales for the day and a resource that accepts >>>> sales. There is a direct link between these two that may or may not be >>>> exposed. >>>> >>>> >>> >>> Appropriate resource design and use of hyperlinks can expose those >>> relationships. >>> >>> If the resource that accepts sales also lists all sales, then a list for >>> today should be treated as a subset of that resource >>> >>> POST /sales >>> GET /sales >>> >>> GET /sales;today >>> >>> Affect of POST can easily be understood by proxy caches and >>> intermediaries >>> so that doesn't seem black box >>> >>> Cheers, >>> Mike >>> >>> >> >> >> >> > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
I was pretty sure I replied but can't find the message anywhere, so trying again: > Consider a different use case ... consider setting a default address > (out of a series of existing addresses) onto a customer. Now imagine > that there are 5 different ways this can happen. This rather quickly > seems to spiral out of control. You seem to be implying that those 5 different ways, and different flows, would result in the same resource being modified in the same way without context. Stop me if I'm wrong, but if you were to change an address in 5 different ways using commands, wouldn't you have 5 different commands? In ReST, the state that is shared between client and server isn't without workflows. You modify a resource (the concept of address) by sending one or many representations to one or many URIs. Nothing prevents you from operating various workflows through the use of various URIs, or, if modelled on a state machine, from keeping track of where in the workflow a client application is and instructing on what to do next, based on the current state being sent back and forth by the client. I'm still missing the point as to why changing an address in 5 different ways through 5 different flows would be impossible or messy in ReST. The only way I could understand such a statement would be in the case where you believe that they all end up triggering the same operation on the same resource, which is not a ReST constraint per se. Seb
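Seb's point can be sketched in a few lines of Python (the URIs and handler names are invented for illustration): several flows expose distinct URIs, each carrying its own context, yet converge on the same underlying resource.

```python
# Hypothetical sketch: two of the "5 different ways" to set a default
# address, exposed as distinct URIs over the same resource state.
customer = {"addresses": {"a1": "Home St", "a2": "Work Ave"}, "default": None}

def set_default_via_profile(addr_id):    # PUT /customers/1/default-address
    customer["default"] = addr_id

def set_default_via_checkout(addr_id):   # POST /checkout/1/shipping
    customer["default"] = addr_id
    # This flow carries extra context the profile flow does not.
    customer.setdefault("log", []).append("checkout")

handlers = {
    ("PUT", "/customers/1/default-address"): set_default_via_profile,
    ("POST", "/checkout/1/shipping"): set_default_via_checkout,
}

def dispatch(method, path, body):
    return handlers[(method, path)](body)

dispatch("POST", "/checkout/1/shipping", "a2")
assert customer["default"] == "a2"
assert customer["log"] == ["checkout"]
```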
> Maybe we just need to use better/more appropriate grammar in our URIs? What does the URI or putting a "grammar" on it bring to the party? It seems to me to be an orthogonal concern. Seb
Let's try a non-CRUD example. On Thu, Jun 4, 2009 at 9:06 AM, Sebastien Lambla <seb@...> wrote: > I was pretty sure I replied but can't find the message anywhere, so trying > again: > >> Consider a different use case ... consider setting a default address >> (out of a series of existing addresses) onto a customer. Now imagine >> that there are 5 different ways this can happen. This rather quickly >> seems to sprial out of control. > > You seem to be implying that those 5 different ways, and different flows, > would result in the same resource being modified in the same way without > context. > > Stop me if i'm wrong, but if you were to change an address in 5 different > ways using commands, you would have 5 different commands? > > In ReST, the state that is shared between client and server isn't without > workflows. You modify a resource (the concept of address) by sending one or > many representations to one or many URIs. > > Nothing prevents you from operating various workflows through the use of > various URIs, or if modelled on a state machine, to keep track of where in > the workflow a client application is, and instruct on what to do next, based > on the current state being sent back and forth by the client. > > I think i'm still missing the point as to why changing an address in 5 > different ways through 5 different flows would be impossible or messy in > ReST? The only way I could understand such a statement would be in the case > where you believe that they all end up triggering the same operation on the > same resource, which is not a ReST constraint per se. > > Seb > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
I do like Ian Robinson's categorization of "hypermedia controls", aka links and forms. Although I'm not sure the term control is understood in the same way in all development communities, it does carry the meaning well enough. Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Nick Gall Sent: 04 June 2009 11:00 To: johnzabroski Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: YAATRTA HATEOAS On Thu, Jun 4, 2009 at 1:13 AM, johnzabroski <johnzabroski@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Nick Gall <nick.gall@...> wrote: > > > > In the never ending quest to replace the acronym HATEOAS with something more > > intuitive and pronounceable, > > Just find better things to do. I'm ALWAYS looking for better things to do. > If you are seriously pronouncing "HATEOAS" often, then you are "doing it wrong". The idea that you need to say "Hypermedia as the engine of application state" more than once in a conversation is the flaw in your thinking. I'm NOT saying HATEOAS more than once in a conversation, on average. The problem is that I have a hundred or more conversations about REST in the course of a year. Plus, I just don't like words with the word HATE in them. :-) -- Nick ------------------------------------ Yahoo! Groups Links
Browsers know the semantics of certain things for sure. For instance, they know how to "present" markup, and know about certain kinds of links. But they do not know the meaning of most state transitions. The user is guessing the semantics based on the UI presented, and is driving state transitions. Subbu On Jun 3, 2009, at 7:54 PM, wahbedahbe wrote: > Are you saying that when I click on a link, my browser has some > understanding of the semantics of the link? > > Are these semantics just "this is the link I follow when I get a > click event on a certain area of the screen"? Or are you saying that > there is something more than this? > > Just trying to understand your statement in the context of an HTML > browser. > Thanks, > > Andrew > > --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> > wrote: >> >> Please read it again. For any transition that the client wants to >> follow, it needs to *fully* understand the syntax and semantics. >> >> Subbu >> >> On Jun 3, 2009, at 8:58 AM, António Mota wrote: >> >>> Subbu Allamaraju wrote: >>>> It is nothing more than an indirection to communicate possible >>>> state transitions, and >>>> requires clients to *fully* understand the syntax and semantics of >>>> each transition. Is that a big deal? May be, or may be not. It just >>>> depends on the application. >>> That is not completely accurate, a client can understand *only* a >>> sub-set of those transitions. That's what make HATEOAS so effective >>> in decoupling clients, your server can extend the services it >>> provides at any point without breaking the existing clients, that >>> will continue to work as they were, without the new functionalities >>> of course. But new clients can be built that use those new >>> capabilities and they can coexist with the older clients without a >>> problem. Actually, it's a little bit like OSGi, but in a different >>> context, of course... >>> >>> >> > >
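António's decoupling claim in the quoted exchange above can be sketched in Python (the rel names are hypothetical): a client that recognizes only a subset of the advertised transitions simply ignores new ones, so the server can add capabilities without breaking existing clients.

```python
# The rels this client was built to understand; everything else is
# ignored rather than treated as an error.
KNOWN_RELS = {"self", "next", "cancel"}

def usable_transitions(links):
    """Keep only the links whose rel the client understands."""
    return [l for l in links if l["rel"] in KNOWN_RELS]

links = [
    {"rel": "self", "href": "/orders/42"},
    {"rel": "cancel", "href": "/orders/42/cancel"},
    # A transition added by the server after this client shipped:
    {"rel": "gift-wrap", "href": "/orders/42/wrap"},
]

assert [l["rel"] for l in usable_transitions(links)] == ["self", "cancel"]
```

The old client keeps working unmodified; only a newer client that adds "gift-wrap" to its known rels gains the new capability.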
A URI pattern/grammar for subsets which clearly indicate their relationship to the parent is necessary to make efficient use of things like cache-invalidation mechanisms. The 'grammar' of that pattern should make sense in natural human language as well, so CEOs can observe those relationships the same way intermediaries do. The example used for 'data affecting other data' was a sales collection and a subset of today's sales; if resources are identified appropriately then there are no 'side effects', just one effect. Cheers, Mike Sebastien Lambla wrote: >> Maybe we just need to use better/more appropriate grammar in our URIs? >> > > What does the URI or putting a "grammar" on it bring to the party? > > It seems to me to be an orthogonal concern. > > Seb > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
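One plausible reading of the URI-grammar argument, as a Python sketch (the cache layout and the `;` subset convention are assumptions for illustration, not a documented intermediary behavior): an unsafe request to a parent URI invalidates every cached subset that shares its prefix.

```python
# Hypothetical cache keyed by URI; /sales;today is a subset of /sales
# by the assumed ';' naming convention.
cache = {"/sales": "...", "/sales;today": "...", "/products": "..."}

def invalidate(unsafe_path):
    """Drop the parent entry and any subset sharing its prefix."""
    for key in [k for k in cache
                if k == unsafe_path or k.startswith(unsafe_path + ";")]:
        del cache[key]

invalidate("/sales")  # e.g. after POST /sales
assert set(cache) == {"/products"}
```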
A 'collection' of today's sales... sure, there is no real black box there, as it's so obvious. I was talking about summary information, where the information on today's sales might be roll-ups based on, say, product types and geographical locations (note that sales don't even HAVE geographical location information on them when they are placed). What about when a given sale will become available in the report? (We have to assume the possibility that the report is eventually consistent.) REST will not magically make this transparent, as was originally claimed. On Thu, Jun 4, 2009 at 9:42 AM, Mike Kelly <mike@...> wrote: > A URI pattern/grammar for subsets which clearly indicate their relationship > to the parent is necessary to make efficient use of things like > cache-invalidation mechanisms. > > The 'grammar' of that pattern should make sense in natural human language as > well, so CEOs can observe those relationships the same way intermediaries > do. > > The example used for 'data affecting other data' was a sales collection and > a subset of today's sales; if resources are identified appropriately then > there are no 'side effects', just one effect. > > Cheers, > Mike > > Sebastien Lambla wrote: >>> >>> Maybe we just need to use better/more appropriate grammar in our URIs? >>> >> >> What does the URI or putting a "grammar" on it bring to the party? >> >> It seems to me to be an orthogonal concern. >> >> Seb >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
The idea of registering "rel" values is an interesting step toward establishing additional semantic information that can be used in state engine applications. Ideally, machine-to-machine interactions should only use a single state transition link - the "next" state transition that is "valid right now." However, this level of interaction model is insufficient in some situations in which I find myself. In my own case, I've been using "rel" attributes on links in order to reduce the amount of tight coupling needed to build flexible RESTful clients for machine-to-machine interactions. Using a pre-defined collection of "rel" values allows me to include additional semantic information in representations that servers can use to 'decorate' links. This information can help clients determine which state transitions to use at a particular time. I also decorate forms (and their input collections) with additional information such as an input name, whether an input is required and, in many cases, the quality requirements for that input (usually in the form of a regular expression filter). With this information, clients that understand this metadata can use it either to a) populate and animate a UI that helps a human build a valid representation to present to the server, or b) scan the metadata in the response and build the next state transition itself. mca http://amundsen.com/blog/ On Thu, Jun 4, 2009 at 09:18, Subbu Allamaraju <subbu@...> wrote: > Browsers know the semantics of certain things for sure. For instance, > they know how to "present" markup, and know about certain kinds of > links. But they do not know the meaning of most state transitions. The > user is guessing the semantics based on the UI presented, and is > driving state transitions. > > Subbu > > On Jun 3, 2009, at 7:54 PM, wahbedahbe wrote: > > > Are you saying that when I click on a link, my browser has some > > understanding of the semantics of the link? 
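A client-side Python sketch in the spirit of the form decoration described above (the metadata shape, field names, and regex filters are assumptions, not mca's actual format): each input carries a name, a required flag, and a regular-expression quality filter, and the client validates a candidate representation against that metadata before submitting.

```python
import re

# Hypothetical form metadata: name, required flag, and a regex filter
# expressing the quality requirements for each input.
form = {
    "inputs": [
        {"name": "email", "required": True,
         "filter": r"^[^@\s]+@[^@\s]+$"},
        {"name": "nickname", "required": False,
         "filter": r"^\w{0,20}$"},
    ]
}

def validate(form, values):
    """Check a candidate representation against the form metadata."""
    errors = []
    for inp in form["inputs"]:
        v = values.get(inp["name"])
        if v is None:
            if inp["required"]:
                errors.append(f"missing {inp['name']}")
            continue
        if not re.match(inp["filter"], v):
            errors.append(f"bad {inp['name']}")
    return errors

assert validate(form, {"email": "a@b.org"}) == []
assert validate(form, {"nickname": "bob"}) == ["missing email"]
```

The same metadata could just as easily drive a UI, prompting a human for the required fields, which is exactly the a)/b) split in the post.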
> > > > Are these semantics just "this is the link I follow when I get a > > click event on a certain area of the screen"? Or are you saying that > > there is something more than this? > > > > Just trying to understand your statement in the context of an HTML > > browser. > > Thanks, > > > > Andrew > > > > --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> > > wrote: > >> > >> Please read it again. For any transition that the client wants to > >> follow, it needs to *fully* understand the syntax and semantics. > >> > >> Subbu > >> > >> On Jun 3, 2009, at 8:58 AM, António Mota wrote: > >> > >>> Subbu Allamaraju wrote: > >>>> It is nothing more than an indirection to communicate possible > >>>> state transitions, and > >>>> requires clients to *fully* understand the syntax and semantics of > >>>> each transition. Is that a big deal? May be, or may be not. It just > >>>> depends on the application. > >>> That is not completely accurate, a client can understand *only* a > >>> sub-set of those transitions. That's what make HATEOAS so effective > >>> in decoupling clients, your server can extend the services it > >>> provides at any point without breaking the existing clients, that > >>> will continue to work as they were, without the new functionalities > >>> of course. But new clients can be built that use those new > >>> capabilities and they can coexist with the older clients without a > >>> problem. Actually, it's a little bit like OSGi, but in a different > >>> context, of course... > >>> > >>> > >> > > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Sorry, I was going on the original example you gave. Appreciate the subsequent example you've just given is more complex but it doesn't sound like it would contain any relationships that couldn't be modeled transparently using more granular resources and hyperlinks. Greg Young wrote: > A 'collection' of today's sales... sure there is no real black box > there as its so obvious. > > I was talking about summary information. Where the information on > today's sales might be roll ups based on say product types and > geographical locations (note that sales don't even HAVE geographical > location information on them when they are placed). What about when a > given sale will become available in the report (we have to assume the > possibility that the report is eventually consistent). REST will not > magically make this transparent as was originally claimed. > > On Thu, Jun 4, 2009 at 9:42 AM, Mike Kelly <mike@...> wrote: > >> A URI pattern/grammar for subsets which clearly indicate their relationship >> to the parent is necessary to make efficient use of things like >> cache-invalidation mechanisms. >> >> The 'grammar' of that pattern should make sense in natural human language as >> well, so CEOs can observe those relationships the same way intermediaries >> do. >> >> The example used for 'data affecting other data' was a sales collection and >> a subset of today's sales; if resources are identified appropriately then >> there are no 'side effects', just one effect. >> >> Cheers, >> Mike >> >> Sebastien Lambla wrote: >> >>>> Maybe we just need to use better/more appropriate grammar in our URIs? >>>> >>>> >>> What does the URI or putting a "grammar" on it bring to the party? >>> >>> It seems to me to be an orthogonal concern. >>> >>> Seb >>> >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >>> >>> >> > > > >
> A URI pattern/grammar for subsets which clearly indicate their > relationship to the parent is necessary to make efficient use of things > like cache-invalidation mechanisms. Are you saying that proxies don't respect the opaqueness of URIs and rely on URL segments to do cache invalidation? I was aware of the historical query-string non-caching, but not of this. This is a whole new can of worms being opened. Can you point to reference documentation showing this cache-invalidation behaviour in intermediaries? A quick Google search didn't seem to turn up any results. Seb
Please model for me the placing of an order into the system and the geographical report in a transparent fashion. The geocoding is an external service. Keep in mind that the report is eventually consistent and denormalized. Let's also throw in some categorization of what is in the order, e.g. let's imagine that we have some sort of feature detection running on our orders to apply what can be arbitrary categorizations. If I remember correctly, this can be modeled so that a 'CEO can easily understand it'. Cheers, Greg On Thu, Jun 4, 2009 at 10:04 AM, Mike Kelly <mike@...> wrote: > Sorry, I was going on the original example you gave. > > Appreciate the subsequent example you've just given is more complex but it > doesn't sound like it would contain any relationships that couldn't be > modeled transparently using more granular resources and hyperlinks. > > > > Greg Young wrote: >> >> A 'collection' of today's sales... sure there is no real black box >> there as its so obvious. >> >> I was talking about summary information. Where the information on >> today's sales might be roll ups based on say product types and >> geographical locations (note that sales don't even HAVE geographical >> location information on them when they are placed). What about when a >> given sale will become available in the report (we have to assume the >> possibility that the report is eventually consistent). REST will not >> magically make this transparent as was originally claimed. >> >> On Thu, Jun 4, 2009 at 9:42 AM, Mike Kelly <mike@...> wrote: >> >>> >>> A URI pattern/grammar for subsets which clearly indicate their >>> relationship >>> to the parent is necessary to make efficient use of things like >>> cache-invalidation mechanisms. >>> >>> The 'grammar' of that pattern should make sense in natural human language >>> as >>> well, so CEOs can observe those relationships the same way intermediaries >>> do. 
>>> >>> The example used for 'data affecting other data' was a sales collection >>> and >>> a subset of today's sales; if resources are identified appropriately then >>> there are no 'side effects', just one effect. >>> >>> Cheers, >>> Mike >>> >>> Sebastien Lambla wrote: >>> >>>>> >>>>> Maybe we just need to use better/more appropriate grammar in our URIs? >>>>> >>>>> >>>> >>>> What does the URI or putting a "grammar" on it bring to the party? >>>> >>>> It seems to me to be an orthogonal concern. >>>> >>>> Seb >>>> >>>> >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>>> >>>> >>> >>> >> >> >> >> > > -- It is the mark of an educated mind to be able to entertain a thought without accepting it.
I think the supremely accurate term that's cropped up in the literature is "affordances", but I chose "controls" to help visualize the levers and buttons an app might offer up to a client to help it make this thing (the application protocol) move forward. On the topic of shared knowledge: Of course there's some out-of-band knowledge re. the semantic context of those controls: the value of the rel tag must be understood by the client before it can decide whether it's worth operating a control in pursuit of its goal. Best if that out-of-band or prior knowledge is captured in or referenced from a well-known media type. If it's a really rich media type, if the processing model described by the type tells you what kinds of verbs, headers, status codes, and representation formats help you manipulate resources by way of that media type, an attributed link may be all you need to offer up to the client. Alternatively, the service can offer up a form that helps guide the client to supplying the necessary representation to progress its goals. That form would be attributed with something like a rel tag, so the client again understands what part submitting this form has to play in the application protocol. In addition, the form fields might also be attributed to further aid in the semantic comprehension of the form. All those annotations must of necessity be understood by the client - there's no magic. --- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote: > > I do like Ian Robinson's categorization of "hypermedia controls", aka links > and forms. Although I'm not sure the term control is understood in the same > way in all development communities, it does carry the meaning well enough. 
> > Seb > > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On > Behalf Of Nick Gall > Sent: 04 June 2009 11:00 > To: johnzabroski > Cc: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Re: YAATRTA HATEOAS > > On Thu, Jun 4, 2009 at 1:13 AM, johnzabroski <johnzabroski@...> wrote: > > > > > > --- In rest-discuss@yahoogroups.com, Nick Gall <nick.gall@> wrote: > > > > > > In the never ending quest to replace the acronym HATEOAS with something > more > > > intuitive and pronounceable, > > > > Just find better things to do. > > I'm ALWAYS looking for better things to do. > > > If you are seriously pronouncing "HATEOAS" often, then you are "doing it > wrong". The idea that you need to say "Hypermedia as the engine of > application state" more than once in a conversation is the flaw in your > thinking. > > I'm NOT saying HATEOAS more than once in a conversation, on average. > The problem is that I have a hundred or more conversations about REST > in the course of a year. > > Plus, I just don't like words with the word HATE in them. :-) > > -- Nick > > > ------------------------------------ > > Yahoo! Groups Links >
--- In rest-discuss@yahoogroups.com, Nick Gall <nick.gall@...> wrote: > > On Thu, Jun 4, 2009 at 1:13 AM, johnzabroski <johnzabroski@...> wrote: > > > > > > --- In rest-discuss@yahoogroups.com, Nick Gall <nick.gall@> wrote: > > > > > > In the never ending quest to replace the acronym HATEOAS with something more > > > intuitive and pronounceable, > > > > Just find better things to do. > > I'm ALWAYS looking for better things to do. > > > If you are seriously pronouncing "HATEOAS" often, then you are "doing it wrong". The idea that you need to say "Hypermedia as the engine of application state" more than once in a conversation is the flaw in your thinking. > > I'm NOT saying HATEOAS more than once in a conversation, on average. > The problem is that I have a hundred or more conversations about REST > in the course of a year. > > Plus, I just don't like words with the word HATE in them. :-) > > -- Nick > I think you are falling into a trap that kills most Software-as-a-Service companies: the idea that what seems like a problem to you should actually be acted upon. More likely, just because 1% of your customer base requests a feature doesn't mean you have to do it. You have to consider the silent 99% majority who probably don't want to see that change. You have more than 100 conversations about REST in the course of a year... Okay... how does coining this acronym make you a more effective communicator? For one, the acronym HATEOAS is already searchable everywhere on the Internet. So by ditching it, you are wasting time trying to get bees to re-pollinate the flowers. Again... I am not criticizing you for wanting to be the best you are for what you do... I would *never* do such a thing. I am simply saying that, as an outsider looking at this, you are misapplying your gifted talents. Just find something _better_ to do. Also, when I said to "find", I guess I made the faulty assumption you are like me and already have 3453 items on your todo list. 
I didn't mean "*think* of something better to do". There should be better opportunities around us than painting bike sheds.
Sebastien Lambla wrote: >> A URI pattern/grammar for subsets which clearly indicate their >> relationship to the parent is necessary to make efficient use of things >> like cache-invalidation mechanisms. >> > > Are you saying that proxies don't respect the opaqueness of URIs and rely on > URL segments to do cache invalidation? I was aware of the historical query > string non-caching, but not of this. This is a whole new can of worms being > opened. > I can't see any problem using this on the server side provided the app is implemented properly. I agree that the client side would get nasty unless:
- the caching rules are disabled by default and the server is required to indicate compatibility to clients (in a header or something like that)
- or the rules are implemented as code on demand
I would prefer the latter approach. > Can you point to reference documentation showing the behaviour for > intermediaries doing cache-invalidation? A quick search on Google didn't > seem to turn up any results. I can't find any at the moment. An implementation for use in a RESTful web app would be relatively simple though, I think.
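For what such a "relatively simple" server-side implementation might look like, here is a toy sketch (purely illustrative -- no proxy or RFC behaviour is being described): a cache that exploits the parent/child URI grammar mentioned above to drop a resource and its subsets together:

```python
# A toy sketch of hierarchy-aware invalidation: when a resource changes,
# drop the cached entry for that URI and everything "below" it in the
# path hierarchy -- relying on URI structure rather than treating URIs
# as opaque. All URIs and representations here are made up.

class HierarchicalCache:
    def __init__(self):
        self._entries = {}

    def put(self, uri, representation):
        self._entries[uri] = representation

    def get(self, uri):
        return self._entries.get(uri)

    def invalidate_subtree(self, uri):
        """Drop the entry for uri and for any URI nested under it."""
        prefix = uri.rstrip("/") + "/"
        stale = [u for u in self._entries if u == uri or u.startswith(prefix)]
        for u in stale:
            del self._entries[u]

cache = HierarchicalCache()
cache.put("/orders", "<orders>...</orders>")
cache.put("/orders/7", "<order id='7'/>")
cache.put("/orders/7/items", "<items/>")
cache.put("/customers/3", "<customer/>")

cache.invalidate_subtree("/orders/7")   # e.g. a PUT to /orders/7 happened
print(cache.get("/orders/7"))           # None
print(cache.get("/orders/7/items"))     # None -- child dropped with parent
print(cache.get("/customers/3"))        # unrelated entry survives
```

A real implementation would presumably also invalidate the parent collection (`/orders` here), which is exactly the parent/subset relationship the URI grammar is meant to make cheap to compute.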
On Thu, Jun 4, 2009 at 11:12 AM, johnzabroski <johnzabroski@...> wrote: > You have more than a 100 conversations about REST in the course of a year... > Okay... how does coining this acronym make you a more effective > communicator? For one, the acronym HATEOAS is already searchable everywhere > on the Internet. So by ditching it, you are wasting time trying to get bees > to re-pollinate the flowers. Forget the new acronym if you don't like it. I was proposing it half in jest. But as we all know all too well, humor doesn't come across well in email. My main point is that the concept that "Hypermedia Describes Protocols" is an illuminating variation on HATEOAS -- at least for two of us (Jim Webber and me). Overall, trying to understand a deep concept like HATEOAS from different perspectives is pretty high on my personal list of important things to do. But that's just me. -- Nick
--- In rest-discuss@yahoogroups.com, Greg Young <gregoryyoung1@...> wrote: > > > For me, the value of a declarative system is that my CEO can query and > > drill-down into any aspect of my system's design. There are no blackboxes; > > just mathematical reasoning. > > I am not sure I want to touch this one with a 30-foot pole but ... At this point, I am just sort of following the discussion. I am a little in awe. I can't even comprehend where it's going or where it's coming from. I guess nobody bothers to wonder that. Don't get me wrong, it's interesting and all, but I'm specifically trying to ask you, "Are you asking the right questions?" And it looks like the replies you are getting don't care whether you are asking the right questions. Instead, they want to discuss REST, which is only fair, considering your audience is here to discuss REST. So your audience is here for REST, but you are starting this thread in essence to ask a more general question (IMHO). > Do CEO's need access to *any* aspect of the system or those with > business value? More often than not what your CEO is interested in is > not your transactional objects but roll-ups etc. and analysis performed > upon them. I understand the argument for OLAP but using this as an > argument for REST seems to me to be like using the fact that they > wear skates as an argument for why I should watch tennis. First off, I _wish_ tennis players had to wear skates, but Rafael Nadal would protest if his edge on clay was taken away. :) Second, the only argument I use for REST is continuous deployment. Hypermedia naturally models this. All the other stuff about REST is simply a checklist of "Are we programming with our pants down?" things Roy Fielding came up with to ensure we buckle our pants around our waist. About my CEO: he used to be a nuclear reactor engineer in the Navy. This might surprise you, but our non-technical people have influenced the architecture where I work just as much as the technical people. 
In fact, some of our best design decisions were pushed by non-programmers. The technical people still built the infrastructure, but the non-technical people pointed out some silly inefficiencies in the way we were doing certain things. The way we do multi-tenancy, for instance, was pushed by a former marketing guy. Sometimes people who don't do tech for a living can have a better understanding of a deep, technical subject area than the tech guy. Ever heard of Con Kolivas? He is the guy who has basically revolutionized the design of operating system schedulers, with his Staircase Deadline scheduler. Know what he does for a living? Anesthesiologist. He actually wrote better C code than a professional, highly trained C programmer, _without_ _knowing_ _C_. I am serious. My major point here is that your pushback that "this is what a CEO should be doing" is fuzzy thinking. Being wrong is acceptable, so long as your thinking is concrete. You could be right, even, but not with the line of reasoning you provided. > Beyond that for a transactional system there is still a "black box" in > terms of how data affects other data. Consider an example of a > resource that tells me sales for the day and a resource that accepts > sales. There is a direct link between these two that may or may not be > exposed. Well, you can use a complex event processor to define a virtual event. However, I don't think that is the real money sink affecting most Fortune 500 companies today. I don't have nearly enough experience to say for sure, but I read a piece today off Michael Feathers' twitter (thanks to Colin Jack retweeting it) about American Airlines. http://dustincurtis.com/dear_dustin_curtis.html That letter pretty much exemplifies the sort of process problems that seem to swarm large institutions. Whenever my consultant friends describe their latest "work of art" they're brought in to "restore", it sounds a lot like AA-style problems. And you know what? I feel for that guy at AA. 
I would never suggest software as the solution, since it's a technical solution for a non-technical problem. However, there is a reason why companies like iRise make ENORMOUS profits helping people improve their technical workflow in the sort of 200-person division with multiple sub-departments you find at AA. > BTW the solutions I use are fairly far from imperative. Currently I am > using what would be categorized as MEST (yes there can be argument if > MEST is just the new buzzword for messaging). I have lately been using > resources on my read side and messaging on my transactional side. I > have attempted at using REST more completely but am running into the > fact that it just doesn't seem to make any sense whatsoever in a > complex transactional situation. Ok, you are "far from imperative", but are you completely self-describing and referentially transparent? It sounds to me like you want this in your system, but are struggling with how. In the example you give above, you are talking solely in terms of physical implementation characteristics. Even an OLAP cube is an implementation detail! When we're talking at this level, be clear that _today_ what most companies do is have _human technical people_ do the work of _compilers_. Choosing to analyze things with a cube is an implementation detail that really ought to be solved by a compiler. You also sort of sidestepped my comment on using guards. A guard is in its essence an imperative construct, because it effectively defines WHEN something should occur... rather than simply THAT something should occur. Again, hard distinction between "do" and "be". Once you have something that simply "is", you have basically decoupled syntax from semantics as much as possible.
Hi Nick, > My main point is that the concept that "Hypermedia Describes > Protocols" is > an illuminating variation on HATEOAS -- at least for two of us (Jim > Webber > and me). I should say at this point (as Seb has) that my "HATEOAS Describes Protocols" mantra was deeply influenced by Ian Robinson's "Good Web formats expose hypermedia controls" theme. Fortunately Ian and I (and Savas) are writing this kind of stuff up in our book, so we hope something of a consistent theme will emerge. Whether or not it passes muster with the eminent subscribers to this list remains to be seen however. Jim
On Fri, Jun 5, 2009 at 5:56 AM, Jim Webber <jim@...> wrote: > > > Hi Nick, > > > My main point is that the concept that "Hypermedia Describes > > Protocols" is > > an illuminating variation on HATEOAS -- at least for two of us (Jim > > Webber > > and me). > > I should say at this point (as Seb has) that my "HATEOAS Describes > Protocols" mantra was deeply influenced by Ian Robinson's "Good Web > formats expose hypermedia controls" theme. "HATEOAS Describes Protocols" not "Hypermedia Describes Protocols". Why the switch? Freudian slip? The former seems redundant and doubly obscure. Looking forward to the book. -- Nick
Sincerely, I don't understand the need to name things that are already named in a comprehensive way, especially taking into account that it was Roy Fielding who used the expression, and I have to assume that he knows what he is talking about. > The RESTful Web Services book doesn't help the situation by renaming > the hypertext engine as /connectedness/. That does nothing but obscure > its role as the driving force in RESTful applications. *hypertext engine*, or if we want to generalize it a little more, *hypermedia engine*. As I said earlier, it's a mistake (in my point of view, of course) to talk about *hypermedia* here dissociating it from the word *engine*. We should refer to it not as "hypermedia engine" but as "hypermedia *engine*" :) To all those alternatives the same thing that Fielding said about "connectedness" can be applied. Nick Gall wrote: > > > On Fri, Jun 5, 2009 at 5:56 AM, Jim Webber <jim@... > <mailto:jim%40webber.name>> wrote: > > > > > > Hi Nick, > > > > > My main point is that the concept that "Hypermedia Describes > > > Protocols" is > > > an illuminating variation on HATEOAS -- at least for two of us (Jim > > > Webber > > > and me). > > > > I should say at this point (as Seb has) that my "HATEOAS Describes > > Protocols" mantra was deeply influenced by Ian Robinson's "Good Web > > formats expose hypermedia controls" theme. > > "HATEOAS Describes Protocols" not "Hypermedia Describes Protocols". Why > the switch? Freudian slip? The former seems redundant and doubly > obscure. > > Looking forward to the book. > > -- Nick > >
Hey Nick, > "HATEOAS Describes Protocols" not "Hypermedia Describes Protocols". Why > the switch? Freudian slip? The former seems redundant and doubly > obscure. Eek! That's a pre-coffee typing error. HATEOAS is, of course, the metamodel. Hypermedia describes the actual protocol. > Looking forward to the book. Thanks. I think there's going to be a slew of good books from the folks on this list (Subbu + Mike, Stefan). Jim
Hi Antonio, > Sincerely, I don't understand why the need to name things that are > already named in a comprehensive way, specially taking into account > that > it was Roy Fielding to use the expression, and I have to assume that > he > knows what is talking about. I think Nick was just trying to humorously capture what he considered to be a useful way of thinking about HATEOAS. No renaming needed, it's already got the friendly name "hypermedia constraint" anyway. Nonetheless given that HATEOAS is the confusing bit from REST, any help we can offer in assisting other folks to understand it is a good thing. After all most people are still at the point in the REST learning curve where they think REST == nice URLs. (In fact not even nice URIs!). Jim
Well, yes, but my point was that using expressions like "Hypermedia Describes Protocols" does nothing but obscure the role of the hypermedia engine as the driving force in RESTful applications (hmmm, I think I read this somewhere... :) And again, my point was also that we should stress the concept of *engine* when talking about the "hypermedia constraint". If you use only this expression it will make people think that "having links" is enough, when actually it's "having links as a way to change application states". Maybe it sounds like a small difference, but "having links" suggests a static thing, while "engine" implies a dynamic thing. "Having links" stresses the role of the client, while "engine" stresses the fact that it is the server, not the client, that is responsible for the application and the states available at any point. The client is only responsible for choosing a path between the ones the server determines. And those can be different any time the client accesses the same resource, if the server (application) logic determines it. So, basically, I think that talking about "hypermedia" without "engine" is very reductive and diminishing when explaining HATEOAS. But then again, it's just my opinion... :) Jim Webber wrote: > > > Hi Antonio, > > > Sincerely, I don't understand the need to name things that are > > already named in a comprehensive way, especially taking into account > > that it was Roy Fielding who used the expression, and I have to assume > > that he knows what he is talking about. > > I think Nick was just trying to humorously capture what he considered > to be a useful way of thinking about HATEOAS. No renaming needed, it's > already got the friendly name "hypermedia constraint" anyway. > > Nonetheless given that HATEOAS is the confusing bit from REST, any > help we can offer in assisting other folks to understand it is a good > thing. After all most people are still at the point in the REST > learning curve where they think REST == nice URLs. 
> (In fact not even nice URIs!) > > Jim > >
For what it's worth, I think "Hypermedia constrains protocols" would be better. If you think about it, a browser speaks a general protocol defined by HTTP + media formats + the full URI space. Hypermedia puts constraints on the URI space and the HTTP methods that can be used on the URIs. It also puts constraints on the media formats for data that can be sent in a request using forms e.g. instead of just any old form-url-encoded data, it has to be something with the format: foo=<foo_val>&bar=<bar_val> because foo and bar are inputs in the form. Hypermedia is constraining the general protocol at each state in the client's execution rather than defining some entirely new protocol. Just my 2 cents... Andrew Wahbe --- In rest-discuss@yahoogroups.com, Jim Webber <jim@...> wrote: > > Hi Nick, > > > My main point is that the concept that "Hypermedia Describes > > Protocols" is > > an illuminating variation on HATEOAS -- at least for two of us (Jim > > Webber > > and me). > > I should say at this point (as Seb has) that my "HATEOAS Describes > Protocols" mantra was deeply influenced by Ian Robinson's "Good Web > formats expose hypermedia controls" theme. Fortunately Ian and I (and > Savas) are writing this kind of stuff up in our book, so we hope > something of a consistent theme will emerge. > > Whether or not it passes muster with the eminent subscribers to this > list remains to be seen however. > > Jim >
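A small sketch of the constraint Andrew describes, using his foo/bar example (everything here is hypothetical, just restating his point in code): the HTTP layer would accept any form-url-encoded body, but the form narrows legal requests to exactly the fields it declares:

```python
# The hypermedia form constrains the general protocol: of all the
# syntactically valid form-url-encoded bodies HTTP would carry, only
# bodies with exactly the declared inputs are legal at this state.
from urllib.parse import urlencode

form_fields = ["foo", "bar"]  # the inputs declared by the form

def build_body(values):
    """Accept only the fields the form declared, in the form's order."""
    unknown = set(values) - set(form_fields)
    missing = set(form_fields) - set(values)
    if unknown or missing:
        raise ValueError("form admits exactly the fields %s" % form_fields)
    return urlencode([(f, values[f]) for f in form_fields])

print(build_body({"foo": "1", "bar": "2"}))   # foo=1&bar=2
```

So rather than defining a new protocol, the form selects a small subset of the existing one at each state of the client's execution, which is Andrew's point.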
"Hypermedia describes protocols", "Hypermedia constrains protocols" -- are these really attempts to explain the "hypermedia constraint" in REST, or attempts to create buzzwords with marketing value? I mean, I think buzzwords can be a good thing in spreading technologies (as was the case with AJAX), but I don't think there's any value in this kind of expression for explaining what's already explained by the author himself; on the contrary, it only adds to the confusion. wahbedahbe wrote: > > > For what it's worth, I think "Hypermedia constrains protocols" would > be better. If you think about it, a browser speaks a general protocol > defined by HTTP + media formats + the full URI space. Hypermedia puts > constraints on the URI space and the HTTP methods that can be used on > the URIs. It also puts constraints on the media formats for data that > can be sent in a request using forms e.g. instead of just any old > form-url-encoded data, it has to be something with the format: > foo=<foo_val>&bar=<bar_val> because foo and bar are inputs in the form. > > Hypermedia is constraining the general protocol at each state in the > client's execution rather than defining some entirely new protocol. > > Just my 2 cents... > > Andrew Wahbe > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Jim Webber <jim@...> wrote: > > > > Hi Nick, > > > > > My main point is that the concept that "Hypermedia Describes > > > Protocols" is > > > an illuminating variation on HATEOAS -- at least for two of us (Jim > > > Webber > > > and me). > > > > I should say at this point (as Seb has) that my "HATEOAS Describes > > Protocols" mantra was deeply influenced by Ian Robinson's "Good Web > > formats expose hypermedia controls" theme. Fortunately Ian and I (and > > Savas) are writing this kind of stuff up in our book, so we hope > > something of a consistent theme will emerge. 
> > > > Whether or not it passes muster with the eminent subscribers to this > > list remains to be seen however. > > > > Jim > > > >
I can't speak for anyone else, but for me, it's an attempt to reach a common understanding in (and outside of if possible) the REST community on what HATEOAS is, how it can be achieved and what its benefits are. I don't think there is consensus at all, even on this list of REST proponents. I, for one, disagree very strongly with many of the things said about HATEOAS on this list. I don't think it's about buzzwords at all. I'd love to be able to say "Let's take a RESTful approach here." in a room of random developers that I might be working with and have them all understand what I meant. We are not there now -- REST means different things to different people and HATEOAS is the main area where opinions and understanding differ. Roy has said many times that his thesis didn't go into enough detail on HATEOAS and RESTful hypermedia design. There is a void in REST literature that needs filling IMHO. 2009/6/5 António Mota <amsmota@...> > "Hypermedia describes protocols", "Hypermedia constrains protocols", are > these really attempts to explain the "hipermedia constraint" in Rest, or are > attempts to create buzzwords with marketing value? > > I mean, I think buzzwords can be a good thing in spreading technologies > (like it was the case with AJAX), but I don't think there's a value in these > kind of expressions to explain what's explained by the author itself, on the > contrary it only adds to the confusion. > > > wahbedahbe wrote: > >> >> >> For what it's worth, I think "Hypermedia constrains protocols" would be >> better. If you think about it, a browser speaks a general protocol defined >> by HTTP + media formats + the full URI space. Hypermedia puts constraints on >> the URI space and the HTTP methods that can be used on the URIs. It also >> puts constraints on the media formats for data that can be sent in a request >> using forms e.g. 
instead of just any old form-url-encoded data, it has to be >> something with the format: foo=<foo_val>&bar=<bar_val> because foo and bar >> are inputs in the form. >> >> Hypermedia is constraining the general protocol at each state in the >> client's execution rather than defining some entirely new protocol. >> >> Just my 2 cents... >> >> Andrew Wahbe >> >> --- In rest-discuss@yahoogroups.com <mailto: >> rest-discuss%40yahoogroups.com <rest-discuss%2540yahoogroups.com>>, Jim >> Webber <jim@...> wrote: >> > >> > Hi Nick, >> > >> > > My main point is that the concept that "Hypermedia Describes >> > > Protocols" is >> > > an illuminating variation on HATEOAS -- at least for two of us (Jim >> > > Webber >> > > and me). >> > >> > I should say at this point (as Seb has) that my "HATEOAS Describes >> > Protocols" mantra was deeply influenced by Ian Robinson's "Good Web >> > formats expose hypermedia controls" theme. Fortunately Ian and I (and >> > Savas) are writing this kind of stuff up in our book, so we hope >> > something of a consistent theme will emerge. >> > >> > Whether or not it passes muster with the eminent subscribers to this >> > list remains to be seen however. >> > >> > Jim >> > >> >> >> > > -- Andrew Wahbe
If you are in a room of random developers who know what HTTP and hypertext, or links, are, what in "all application state transitions must be driven by client selection of server-provided choices that are present in the received representations" do you think they won't understand? And why do you think that talking about "Hypermedia constrains protocols" will be clearer than the aforementioned citation? I think, of course, that a common understanding is important, but I think it should be achieved by simplification and clarification and not by adding more layers of sophistication and complexity. But then again, it's just my opinion, nothing else... Andrew Wahbe wrote: > I can't speak for anyone else, but for me, it's an attempt to reach a > common understanding in (and outside of if possible) the REST > community on what HATEOAS is, how it can be achieved and what its > benefits are. I don't think there is consensus at all, even on this > list of REST proponents. I, for one, disagree very strongly with many > of the things said about HATEOAS on this list. > > I don't think it's about buzzwords at all. I'd love to be able to say > "Let's take a RESTful approach here." in a room of random developers > that I might be working with and have them all understand what I > meant. We are not there now -- REST means different things to > different people and HATEOAS is the main area where opinions and > understanding differ. > > Roy has said many times that his thesis didn't go into enough detail > on HATEOAS and RESTful hypermedia design. There is a void in REST > literature that needs filling IMHO. > > 2009/6/5 António Mota <amsmota@... <mailto:amsmota@...>> > > "Hypermedia describes protocols", "Hypermedia constrains > protocols", are these really attempts to explain the "hypermedia > constraint" in REST, or attempts to create buzzwords with > marketing value? 
> > I mean, I think buzzwords can be a good thing in spreading > technologies (like it was the case with AJAX), but I don't think > there's a value in these kind of expressions to explain what's > explained by the author itself, on the contrary it only adds to > the confusion. > > > wahbedahbe wrote: > > > > For what it's worth, I think "Hypermedia constrains protocols" > would be better. If you think about it, a browser speaks a > general protocol defined by HTTP + media formats + the full > URI space. Hypermedia puts constraints on the URI space and > the HTTP methods that can be used on the URIs. It also puts > constraints on the media formats for data that can be sent in > a request using forms e.g. instead of just any old > form-url-encoded data, it has to be something with the format: > foo=<foo_val>&bar=<bar_val> because foo and bar are inputs in > the form. > > Hypermedia is constraining the general protocol at each state > in the client's execution rather than defining some entirely > new protocol. > > Just my 2 cents... > > Andrew Wahbe >
2009/6/5 António Mota <amsmota@...>: > Well, yes, but my point was that using expressions like "Hypermedia > Describes Protocols" does nothing but obscure the role of the hypermedia > engine as the driving force in RESTful applications (hmmm, I think I > read this somewhere... :) > > And again, my point was also that we should stress the concept of > *engine* when talking about the "hypermedia constraint". If you use only > this expression it will make people think that "having links" is enough, > when actually it's "having links as a way to change application states". > Maybe it sounds like a small difference, but "having links" suggests a > static thing, while "engine" implies a dynamic thing. "Having links" > stresses the role of the client, while "engine" stresses the fact that > it is the server, not the client, that is responsible for the application > and the states available at any point. The client is only responsible > for choosing a path between the ones the server determines. And those > can be different any time the client accesses the same resource if the > server (application) logic determines it. > > So, basically, I think that talking about "hypermedia" without "engine" > is very reductive and diminishing when explaining HATEOAS. I know I'm going to regret wading into this, but here goes... I think HATEOAS is misleading. It suggests that hypermedia, e.g., the set of HTML documents exchanged among a user agent and a set of servers, IS the engine. Hypermedia is NOT the engine; the user agent is the engine. As the basis of this claim, I offer the following close reading of Roy Fielding's thesis. Roy introduces the term HATEOAS in Section 5.1.5 (Uniform Interface) as follows: "REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state." 
"These constraints will be discussed in Section 5.2" <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2>. Oddly, most of the discussion of HATEOAS actually takes place in Section 5.3.3 (Data View). I quote it in full at the end of the email. The basic question is, "What is the engine that drives the application through its various states?" This final paragraph is the key: "The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations. Not surprisingly, this exactly matches the user interface of a hypermedia browser. However, the style does not assume that all applications are browsers. In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content [39]." I read this as saying that the engine is the browser or, more generally, any user agent. Further support for this reading is provided by this paragraph: "The application state is controlled and stored by the user agent and can be composed of representations from multiple servers. In addition to freeing the server from the scalability problems of storing state, this allows the user to directly manipulate the state (e.g., a Web browser's history), anticipate changes to that state (e.g., link maps and prefetching of representations), and jump from one application to another (e.g., bookmarks and URI-entry dialogs)." Again, it is the user agent that is *driving* the transitions between states, hence the user agent is acting as the engine -- not hypermedia. 
To be precise, in the case of a web browser user agent, it is the combination of the browser software (user agent) and the human using the browser (user) that is the "engine of application state", i.e. that is driving the transitions between states. It is the human user who "examin[es] and choos[es] from among the alternative state transitions." It is the human user who decides to *jump from one state to another* or to *manipulate the browser state*, i.e., use the back button. Here is one more critical paragraph: "REST concentrates all of the control state into the representations received in response to interactions. The goal is to improve server scalability by eliminating any need for the server to maintain an awareness of the client state beyond the current request. An application's state is therefore defined by its pending requests, the topology of connected components (some of which may be filtering buffered data), the active requests on those connectors, the data flow of representations in response to those requests, and the processing of those representations as they are received by the user agent." Let's list the constituents of an application's state for clarity:
1. pending requests
2. the topology of connected components
3. the active requests on those connectors
4. the data flow of representations in response to those requests
5. the processing of those representations as they are received by the user agent
Of these five constituents, (2)-(5) are *reactive*. Only (1) is *proactive*. Again, it is pending requests that *drive* the application state. Where do such pending requests come from? In the case of a browser, they ultimately come from the human user. By clicking on links and filling in forms, the human user *drives* the application. Accordingly, the combination of the human user and the user agent browser acts as the engine of state, i.e., the engine driving application state transitions. 
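This reading -- the user agent, guided by the user's goals, is the engine, while the hypermedia merely enumerates the alternative transitions at each state -- can be caricatured in a few lines of code (all resources and rels here are invented for illustration):

```python
# A minimal sketch: the server-side "hypermedia" only lists, for each
# state, the outbound transitions on offer. The user_agent function is
# the thing that examines those alternatives and chooses among them,
# i.e. it (plus the user's goals) is what actually drives the state.

PAGES = {
    "/":              {"links": {"catalog": "/items", "about": "/about"}},
    "/items":         {"links": {"item": "/items/1", "home": "/"}},
    "/items/1":       {"links": {"buy": "/items/1/order", "home": "/"}},
    "/items/1/order": {"links": {}},
}

def user_agent(start, goal_rels):
    """Drive the application: at each state, examine the choices the
    server offered and pick the one matching the user's next goal."""
    uri, path = start, [start]
    for rel in goal_rels:
        links = PAGES[uri]["links"]   # alternatives in the representation
        if rel not in links:
            break                     # server no longer offers that choice
        uri = links[rel]              # the agent, not the media, moves
        path.append(uri)
    return path

print(user_agent("/", ["catalog", "item", "buy"]))
# ['/', '/items', '/items/1', '/items/1/order']
```

Note that `PAGES` is inert data; nothing happens until `user_agent` runs, which is exactly the structure-versus-engine distinction being argued.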
Note that REST has a somewhat unusual concept of application: "Since REST is specifically targeted at distributed information systems, it views an application as a cohesive structure of information and control alternatives through which a user can perform a desired task. For example, looking-up a word in an on-line dictionary is one application, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam. Each application defines goals for the underlying system, against which the system's performance can be measured." So hypermedia is the *cohesive structure of information and control alternatives*. Hypermedia is a *structure*, not an *engine*. Hypermedia is merely a representation of application state (as a network structure of control information), not what drives it. The *engine* that drives application state is the user performing the task, and the user agent assisting in that performance. In conclusion, since hypermedia is NOT the engine of application state, merely its representation, Hypermedia *As* The Engine Of Application State is a misleading and inaccurate term. More accurate terms would be:
1. Hypermedia as the *Representation* of Application State
2. Hypermedia Directs/Guides the Engine of Application State
3. Hypermedia Describes (Application) Protocols (State)
To misquote Alfred Korzybski <http://en.wikipedia.org/wiki/Map-territory_relation>, "The map (hypermedia) is not the engine." Fire away! (Roy, I'm really hoping you'll chime in.) -- Nick [Quote of full section] 5.3.3 Data View A data view of an architecture reveals the application state as information flows through the components. Since REST is specifically targeted at distributed information systems, it views an application as a cohesive structure of information and control alternatives through which a user can perform a desired task. 
For example, looking-up a word in an on-line dictionary is one application, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam. Each application defines goals for the underlying system, against which the system's performance can be measured. Component interactions occur in the form of dynamically sized messages. Small or medium-grain messages are used for control semantics, but the bulk of application work is accomplished via large-grain messages containing a complete resource representation. The most frequent form of request semantics is that of retrieving a representation of a resource (e.g., the "GET" method in HTTP), which can often be cached for later reuse. REST concentrates all of the control state into the representations received in response to interactions. The goal is to improve server scalability by eliminating any need for the server to maintain an awareness of the client state beyond the current request. An application's state is therefore defined by its pending requests, the topology of connected components (some of which may be filtering buffered data), the active requests on those connectors, the data flow of representations in response to those requests, and the processing of those representations as they are received by the user agent. An application reaches a steady-state whenever it has no outstanding requests; i.e., it has no pending requests and all of the responses to its current set of requests have been completely received or received to the point where they can be treated as a representation data stream. For a browser application, this state corresponds to a "web page," including the primary representation and ancillary representations, such as in-line images, embedded applets, and style sheets. The significance of application steady-states is seen in their impact on both user-perceived performance and the burstiness of network request traffic. 
The user-perceived performance of a browser application is determined by the latency between steady-states: the period of time between the selection of a hypermedia link on one web page and the point when usable information has been rendered for the next web page. The optimization of browser performance is therefore centered around reducing this communication latency. Since REST-based architectures communicate primarily through the transfer of representations of resources, latency can be impacted by both the design of the communication protocols and the design of the representation data formats. The ability to incrementally render the response data as it is received is determined by the design of the media type and the availability of layout information (visual dimensions of in-line objects) within each representation. An interesting observation is that the most efficient network request is one that doesn't use the network. In other words, the ability to reuse a cached response results in a considerable improvement in application performance. Although use of a cache adds some latency to each individual request due to lookup overhead, the average request latency is significantly reduced when even a small percentage of requests result in usable cache hits. The next control state of an application resides in the representation of the first requested resource, so obtaining that first representation is a priority. REST interaction is therefore improved by protocols that "respond first and think later." In other words, a protocol that requires multiple interactions per user action, in order to do things like negotiate feature capabilities prior to sending a content response, will be perceptively slower than a protocol that sends whatever is most likely to be optimal first and then provides a list of alternatives for the client to retrieve if the first response is unsatisfactory. 
The application state is controlled and stored by the user agent and can be composed of representations from multiple servers. In addition to freeing the server from the scalability problems of storing state, this allows the user to directly manipulate the state (e.g., a Web browser's history), anticipate changes to that state (e.g., link maps and prefetching of representations), and jump from one application to another (e.g., bookmarks and URI-entry dialogs). The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations. Not surprisingly, this exactly matches the user interface of a hypermedia browser. However, the style does not assume that all applications are browsers. In fact, the application details are hidden from the server by the generic connector interface, and thus a user agent could equally be an automated robot performing information retrieval for an indexing service, a personal agent looking for data that matches certain criteria, or a maintenance spider busy patrolling the information for broken references or modified content [39<http://www.ics.uci.edu/~fielding/pubs/dissertation/references.htm#ref_39> ].
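The "model application" in the paragraph above, an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations, can be sketched in a few lines. This is a hypothetical illustration, not code from the thread; the link format, the `fetch`/`choose` callables, and the toy pages are all assumptions.

```python
# A minimal "hypermedia engine" sketch: the client starts from a single
# entry URI and thereafter only follows links -- the alternative state
# transitions -- found in each received representation, never constructing
# URIs on its own.

def run(fetch, choose, entry_uri):
    """Drive an application from state to state.

    fetch(uri)  -> (data, links): retrieve a representation; `links` maps
                  link relations to URIs found inside the representation.
    choose(data, links) -> a relation to follow, or None to stop: the user
                  (or agent logic) selecting among server-provided choices.
    """
    data = None
    uri = entry_uri
    while uri is not None:
        data, links = fetch(uri)                 # transfer a representation
        rel = choose(data, links)                # examine the alternatives
        uri = links.get(rel) if rel is not None else None  # next transition
    return data                                  # steady-state reached

# A toy "server": representations with embedded links.
pages = {
    "/":       ("home",   {"next": "/orders"}),
    "/orders": ("orders", {"next": "/done"}),
    "/done":   ("done",   {}),
}
final = run(pages.get, lambda d, l: "next" if "next" in l else None, "/")
```

Note that the client above knows only the entry URI; every other URI comes from a representation, which is the point Stefan made earlier in the thread about not relying on URI structure.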
Hello Antonio, > "Hypermedia describes protocols", "Hypermedia constrains protocols", > are > these really attempts to explain the "hypermedia constraint" in > REST, or > are attempts to create buzzwords with marketing value? They are legitimate attempts to better describe what, for most people, is the least tangible part of REST. So the "engine" phrase suits your mind, fine. Others may prefer something protocol-centric (I do). I don't see anyone trying to squeeze marketing value out of this, I see people trying to understand and help other people to do the same. > I mean, I think buzzwords can be a good thing in spreading > technologies > (like it was the case with AJAX), but I don't think there's a value in > these kind of expressions to explain what's explained by the author > itself, on the contrary it only adds to the confusion. You are, of course, entitled to that opinion. But I rather like "Hypermedia constrains/describes protocols." It is a useful vehicle for engaging people in this difficult subject area. While the formal HATEOAS might be useful as a tool for communicating at a PhD level (its original intent), perhaps there are better terms that can be used in a broader software engineering context. Until I hear compelling reasons to abandon these thoughts, I'll keep on using them. So far, nothing compelling on this list. Jim
2009/6/5 Nick Gall <nick.gall@...>:
>
> I know I'm going to regret wading into this, but here goes...
> I think HATEOAS is misleading. It suggests that hypermedia, e.g., the set of
> HTML documents exchanged among a user agent and a set of servers, IS the
> engine.
> Hypermedia is NOT the engine, the user agent is the engine.
Funny enough, that is very similar to my interpretation (or my
intuition, perhaps). I never thought of the "hypermedia" as *being* the
engine; for me, a "hypermedia engine" is what I'd call an engine that
uses hypermedia for its purposes, in this case driving an application
from state to state. Let's say an "electrical motor" is a motor
that uses electricity to perform its function, moving a car for
example. It's not the electricity that *is* the motor.
The only difference between your last phrase and what I think is that
for me "the server is the engine". I say this because, although it is
the client that really "drives" the state transitions, by choosing one
of the alternatives presented to it, it is really the server that
allows or disallows the possible transitions.
I would say that the client drives the state transitions, but the
server drives the client (into which transitions it can follow). So in
the end it is the server that takes care of all the state transitions
available in an application.
Nevertheless, I will not argue about this point, who drives whom,
because I think it's almost just a question of "flavour" and doesn't
have an impact on the overall question.
I think that, seen through the analogy between "hypermedia engine" and
"electrical motor", the phrase I quoted from Roy Fielding is clear
enough to explain HATEOAS to anyone with any kind of development
background.
"all application state transitions must be driven by client selection
of server-provided choices that are present in the received
representations"
The other attempts we've been discussing, "Hypermedia describes
protocols"/"Hypermedia constrains protocols", are, from my point of
view (and nothing more than that), mistaken in putting the emphasis on
"hypermedia" and "protocols", which are simply the means used by the
server and the client to drive the application state transitions.
Even more if we take into account observations like this:
"A REST API should not be dependent on any single communication
protocol, though its successful mapping to a given protocol may be
dependent on the availability of metadata, choice of methods, etc."
A REST API *should not be dependent on any single communication
protocol*, and yet we put the emphasis on how "hypermedia" describes or
constrains the "protocol"? When the API should not depend on it? Each
protocol is different, so why would what is true for HTTP necessarily
be true for other protocols?
I would again cite Roy Fielding, with my own addenda:
"it doesn’t help the situation by renaming the hypertext engine as
{"Hypermedia describes protocols", "Hypermedia constrains protocols",
"connectedness"}. That does nothing but obscure its role as the driving
force in RESTful applications."
I think, for the sake of simplification and clarification, that the
way Roy Fielding describes it, *hypertext engine*, is simple and clear
enough, taking "hypertext engine" as the engine that uses hypertext to
drive application state transitions, as in "electrical motor", the
engine that uses electricity to move my car and drive me to work every
morning, once I trade my gas car for an electric one, of course (btw,
is it the car that drives me, or I that drive the car?).
Now, again, this is just the way I see it, nothing more than that, and
I don't want to piss anyone off with my positions, but sometimes I get
the feeling that people are only looking for support for their own
positions and tend to dismiss contrary opinions.
But then again opinions tend to be like that, each one has their own...
No harm done here, I think...
Nick Gall wrote: > > Plus, I just don't like words with the word HATE in them. :-) > +1. I swear, the only place I know of where folks insist that conjunctions rate an acronym character, is this list! Which is why I stubbornly insist on using "HEAS", which stands for Hypermedia as the Engine of Application State. Like in a book title, only the important words rate capitalization, let alone elevation to acronym status. -Eric
On Friday 05 June 2009, Jim Webber wrote: > You are, of course, entitled to that opinion. But I rather like > "Hypermedia constrains/describes protocols." It is a useful vehicle > for engaging people in this difficult subject area. While the formal > HATEOAS might be useful as a tool for communicating at a PhD level > (its original intent), perhaps there are better terms that can be > used in a broader software engineering context. Follow links to advance the conversation. It seems to me that part of the problem of making sense of "the engine of application state" is that in this context the term state is used in an ambiguous way. In Representational State Transfer, the state that is transferred is a snapshot of a resource, represented in a concrete format. Resource state, in most cases, is persistent and sharable. The "application" state advanced by following links is entirely different state. It is private to each client; the server only constrains which transitions are legal, but otherwise doesn't know about this state. Therefore, this kind of state is ephemeral. Because of these properties, and because I think "application" state is not distinctive enough, I prefer the term conversational state. Still, HATEOAS by any name is a rather weak constraint, as I've pointed out in other replies and elsewhere[*]. I'd like to repeat, in particular, that focusing on this aspect gives quite a lot of prominence to a rather trivial aspect of RESTful interaction. The much larger, almost completely overshadowed problem is how to specify the intended meaning of the offered links. Michael [*] e.g. http://www.schuerig.de/michael/blog/index.php/2009/06/04/humble-hateoas/ -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
Jim Webber wrote: > Antonio wrote: > > I mean, I think buzzwords can be a good thing in spreading > > technologies > > (like it was the case with AJAX), but I don't think there's a value in > > these kind of expressions to explain what's explained by the author > > itself, on the contrary it only adds to the confusion. > > You are, of course, entitled to that opinion. But I rather like > "Hypermedia constrains/describes protocols." It is a useful vehicle > for engaging people in this difficult subject area. While the formal > HATEOAS might be useful as a tool for communicating at a PhD level > (its original intent), perhaps there are better terms that can be used > in a broader software engineering context. > > Until I hear compelling reasons to abandon these thoughts, I'll keep > on using them. So far, nothing compelling on this list. I don't have a PhD, but the terms/neologisms presented here haven't helped my understanding on this principle, or provided a grasp I could use to help others. So far, nothing compelling on this list. Bill
On Jun 5, 2009, at 5:58 PM, Bill de hOra wrote: > I don't have a PhD, but the terms/neologisms presented here haven't > helped my understanding on this principle, or provided a grasp I could > use to help others. So far, nothing compelling on this list. +1 Subbu
On Jun 5, 2009, at 8:19 PM, Nick Gall wrote: > I think HATEOAS is misleading. It suggests that hypermedia, e.g., > the set of HTML documents exchanged among a user agent and a set of > servers, IS the engine. Umm, that's like complaining that one cannot achieve zen through meditation because they are only on sale at BestBuy. Consider other definitions of the term hypermedia (hint: as an interaction style). ....Roy
On Sat, Jun 6, 2009 at 3:42 AM, Roy T. Fielding <fielding@...> wrote:
>
> On Jun 5, 2009, at 8:19 PM, Nick Gall wrote:
>
>> I think HATEOAS is misleading. It suggests that hypermedia, e.g., the set
of HTML documents exchanged among a user agent and a set of servers, IS the
engine.
>
> Umm, that's like complaining that one cannot achieve zen
> through meditation because they are only on sale at BestBuy.
>
> Consider other definitions of the term hypermedia (hint: as an
> interaction style).
>
> ....Roy
Roy,
I knew I'd regret wading into this! :-)
I thought REST was the STYLE! Now we have the style of a style? I.e.,
REST is a substyle of the hypermedia style?
So I went back to the thesis to see how you defined *hypermedia*, only to
discover that you don't. AFAICT there is only one definition constraining
*hypermedia* in your thesis: *Hypermedia is defined by the presence of
application control information embedded within, or as a layer above, the
presentation of information.* (I notice that assertion is not footnoted.)
This sentence really doesn't help, since it uses the phrase *defined by*,
not *defined as*. Thus the sentence doesn't say what hypermedia is; it
merely imposes a constraint on the concept of hypermedia -- whatever it
may be. Nowhere in the thesis is hypermedia (the style) ever cited or
defined. Or is hypermedia not a style but the single architectural
constraint provided by the quoted sentence? All this kind of makes it a
moving target.
[Current debate aside, I'd be really interested to hear your definition of
hypermedia, or even just see some pointers to others' definitions that
define it as an interaction style. All the definitions I've ever seen define
hypermedia <http://en.wikipedia.org/wiki/Hypermedia> as a kind of media.
Even Ted Nelson defined hypermedia as simply the
medium <http://www.scribd.com/doc/454074/A-File-Structure-for-the-Complex-The-Changing-And-the-Indeterminate>,
not the style.]
All that being said, my argument still holds. Even if hypermedia is defined
as an interaction style (or something like a style), as opposed to a
particular kind of media (data), a style is no more an engine than a set of
HTML documents is an engine.
To support this claim, let me quote further from the paragraph in which the
above quoted sentence appears (4.1.3):
*Hypermedia is defined by the presence of application control information
embedded within, or as a layer above, the presentation of information.
Distributed hypermedia allows the presentation and control information to be
stored at remote locations. By its nature, user actions within a
distributed hypermedia system require the transfer of large amounts of data
from where the data is stored to where it is used.*
Yet again, we see that it is user actions (powered by the browser software)
that are singled out as driving the application ("user actions ... require
the transfer of ... data") -- the system does not drive itself.
Saying that hypermedia (the entire system or style) is the engine is like
saying the entire automobile (or the architectural style called
"automobile") is the engine. It may be true in a Zen-like way (and believe
me, I LOVE mystical philosophies), but it is utterly confusing to 99% of
humanity.
It is far clearer to the rest of humanity to say that the browser (or more
generally user agent) is the engine of state, the user is the driver of
state, and hypermedia is the representation of state.
-- Nick
On Jun 6, 2009, at 12:40 PM, Nick Gall wrote:
> I thought REST was the STYLE! Now we have the style of a style?
> I.e, REST is substyle of the hypermedia style?
REST is a composition of constraints that come from many styles.
> So I went back to the thesis to see how you defined hypermedia only
> to discover that you don't. AFAICT there is only one definition
> constraining hypermedia in your thesis: Hypermedia is defined by
> the presence of application control information embedded within, or
> as a layer above, the presentation of information. (I notice that
> assertion is not footnoted.)
>
> This sentence really doesn't help since it uses the phrase defined
> by, not defined as. Thus, the sentence doesn't say what hypermedia
> is, it merely imposes a constraint on the concept of hypermedia --
> whatever it may be. Nowhere in the thesis is hypermedia (style)
> ever cited or defined. Or is hypermedia not a style but the single
> architectural constraint provided by the quoted sentence? All this
> kind of makes it a moving target.
*shrug* I didn't think it needed to be "defined as" (at the time).
Too many of my friends are experts in hypertext research and they
probably would have poked mercilessly at my final defense.
> [Current debate aside, I'd be really interested to hear your
> definition of hypermedia, or even just see some pointers to others'
> definitions that define it as an interaction style. All the
> definitions I've ever seen define hypermedia as a kind of media.
> Even Ted Nelson defined hypermedia as simply the medium, not the
> style.]
See slide 35 (pp. 50-53) of
http://roy.gbiv.com/talks/200804_REST_ApacheCon.pdf
> All that being said, my argument still holds. Even if hypermedia is
> defined as an interaction style (or something like a style) as
> opposed to a particular kind of media (data), a style is no more an
> engine than a set of HTML documents is an engine.
Oh, really? I wonder what you think engine means.
http://en.wikipedia.org/wiki/The_Engine
An engine is a system for transforming input into some form
of output. The engine in a car is a system for transforming
gasoline into torque that can be applied to a drive axle.
My little bullet of a constraint
"hypermedia as the engine of application state"
does not say that the engine is a hypertext document. It describes
the engine as being a hypermedia system, much like a car's engine
would be described as an internal combustion system.
I did not cite any specific reference for that because (AFAIK)
there doesn't exist any specific reference. I was doing synthesis.
Nelson's definition is tied to what he cared about -- non-linear
writing as a form of poetry. Conklin was entirely focused on
graphical user interfaces, so his definition is tied directly to
GUI affordances. My observation is something that I considered
to be inherent in the design largely because the Web was based
on Engelbart's view of hypertext, but AFAIK Engelbart never actually
defined the term other than by how it was used in Augment/NLS.
> To support this claim, let me quote further from the paragraph in
> which the above quoted sentence appears (4.1.3):
>
> Hypermedia is defined by the presence of application control
> information embedded within, or as a layer above, the presentation
> of information. Distributed hypermedia allows the presentation and
> control information to be stored at remote locations. By its
> nature, user actions within a distributed hypermedia system require
> the transfer of large amounts of data from where the data is stored
> to where it is used.
>
> Yet again, we see that it is user actions (powered by the browser
> software) that are singled out as driving the application ("user
> actions ... require the transfer of ... data") -- the system does
> not drive itself.
User actions are part of the system being designed. That paragraph
is talking about a design constraint imposed by the requirement that
the information be distributed all around the world. Aside from using
the same term in two different ways, I don't see how that has anything
to do with your point. Whether or not the system drives itself is
irrelevant.
> Saying that hypermedia (the entire system or style) is the engine
> is like saying the entire automobile (or the architectural style
> called "automobile") is the engine. It may be true in a Zen-like
> way (and believe me, I LOVE mystical philosophies), but it is
> utterly confusing to 99% of humanity.
99% of humanity was not my audience, and I'll hasten to bet that less
than 1% of humanity knows what an engine means even for something as
mundane as an automobile.
> It is far clearer to the rest of humanity to say that the browser
> (or more generally user agent) is the engine of state, the user is
> the driver of state, and hypermedia is the representation of state.
The only reason that is clearer to the rest of humanity is
because it is wrong. It's like saying the Web is defined by
what a user sees in MSIE. I don't care how easy it may be for
a non-educated user to understand that definition: it is wrong
and I have no interest in peddling simplified forms.
....Roy
On Jun 6, 2009, at 8:39 AM, Roy T. Fielding wrote: > "hypermedia as the engine of application state" Maybe it's just me, but I think the phrase is great. The concept takes time (at least for me) to get into one's brain[1], and a 'simpler' wording would not help that. It would only help in making you think you got it before you are actually there. IOW, once you reach the point where you think "hypermedia as the engine of application state" is lucidly clear, it pretty much proves that you understood the damn thing :-) Aside: Roy, did you actually craft the phrase, and are you satisfied with it? Jan [1] Which is surprising, actually, because it is so clear :-)
Roy, in a number of postings you write that you did not cover the topic of media type design in depth in your dissertation. No problem, I thought, I'll just fire a search on fielding+"media type design" and should be taken to a bunch of email snippets that I could harvest. Turns out that most hits are on the phrase "the entire topic of media type design which I left out because I ran out of time" :-) Could you suggest a more suitable query, or a certain forum, or even references I could take as a starting point for an investigation? Thanks, Jan
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > Roy, > > in a number of postings you write, that you did not in depth cover the > topic of media type design in your dissertation. No problem I thought, > I'll just fire a search on fielding+"media type design" and should be > taken to a bunch email snippets that I could harvest. > > Turns out that most hits are on the phrase "the entire topic of media > type design which I left out because I ran out of time" :-) > > Could you suggest a more suitable query or a certain forum or even > references I could take as a starting point for an investigation? Jan, it is so funny that you post this question. I was thinking of e-mailing Roy to say, "Why don't you add stuff on media types to your thesis, and then resubmit it to UC-Irvine?" After all, his REST thesis _has_ to be the most widely read Computer Science thesis in history. If he feels he left something important out of it, why not go back and "complete" it? This would obviously be unprecedented, but my question is _why not_.
On Sun, Jun 7, 2009 at 4:44 AM, Mark Little <nmcl2001@...> wrote: > Hi Bob! It's been a while :-) Hasn't it, though? And isn't this about where we left off lo those many years ago? (By the way, I'm not actually pushing BTP here, only the provisional-final model for RESTful transactions, which I think could be a lot simpler than BTP. And I do think it is not only possible but sometimes necessary to do something that would look a lot like transactions in a RESTful environment.) > The compensating transaction model in the reference Bill sent round is what > you're looking for I think (cf atoms in BTP). I don't see much detail about compensation in the reference Bill sent, unless I missed something. It says: <excerpt> The two proposals are: 1. classic transactions obeying ACID properties; 2. compensation based transactions avoiding the need to lock resources for extended periods of time. Approach 1 is discussed in depth whilst the second will be covered in a subsequent wiki. </excerpt> My understanding of compensation is that it is a do-undo model, where the participants actually do the work in Phase 1, and then undo it in Phase 2 if the transaction aborts. Provisional-Final is a do-provisionally, then do-finally model, where the participants do the work provisionally in Phase 1, and then finalize it (or cancel it) in Phase 2. In a protocol sense, I think the decision of which of those approaches to use could be left up to the participants, but if we're talking Java implementation details, we probably need to make the approaches explicit. I'm familiar with problems in compensation from working in fast-paced manufacturing environments. If an order is accepted, work begins, or goods are shipped. I've seen it happen in EDI environments with no transactional controls, and it's expensive to undo. Much better to mark it provisional and only start work once the transaction commits.
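Bob's distinction between the two models can be sketched as two participant styles in a two-phase protocol. The class and field names below are hypothetical illustrations, not from BTP or any cited spec; the point is only where the irreversible work happens.

```python
class CompensatingParticipant:
    """Do-undo: Phase 1 performs the real work; Phase 2 undoes it on abort.
    Undoing can be expensive (e.g. recalling goods already shipped)."""
    def __init__(self):
        self.shipped = False

    def phase1(self):
        self.shipped = True               # real work happens immediately

    def phase2(self, commit):
        if not commit:
            self.shipped = False          # compensate: undo the work


class ProvisionalParticipant:
    """Do-provisionally, then do-finally: nothing irreversible in Phase 1."""
    def __init__(self):
        self.order = None

    def phase1(self):
        self.order = "provisional"        # record intent only; no work yet

    def phase2(self, commit):
        self.order = "final" if commit else None  # start work, or drop it


# On abort, the compensating participant has done (and undone) real work,
# while the provisional participant never started it.
c, p = CompensatingParticipant(), ProvisionalParticipant()
c.phase1(); p.phase1()
c.phase2(commit=False); p.phase2(commit=False)
```

The observable end state after an abort is the same, but the compensating participant paid for the work twice, which is Bob's manufacturing argument for the provisional-final model.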
FWIW, these are the two references I use when discussing sagas and compensating transactions: About Sagas: http://www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf Details on Compensating Transactions http://www-ctp.di.fct.unl.pt/~cf/Papers/ibm2002.pdf I've found that, when the operation may be long-running, the number of resources more than a few, and/or the resources are kept within multiple namespaces, the "do/undo" model is preferable. mca http://amundsen.com/blog/ On Sun, Jun 7, 2009 at 08:29, Bob Haugen <bob.haugen@...> wrote: > [...]
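The saga model from the paper Mike cites amounts to: run each sub-transaction in order, and if one fails, run the compensations of the already-completed steps in reverse. A minimal sketch (hypothetical code, not from the paper; the booking steps are made up):

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs of no-argument callables.
    If any action raises, compensate the completed steps in reverse order."""
    done = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            for _, comp in reversed(done):
                comp()                     # undo in reverse order
            return False
        done.append((action, compensation))
    return True

# Demo: the third step fails, so the first two are compensated, last first.
log = []

def fail():
    raise RuntimeError("no cars left")

steps = [
    (lambda: log.append("book hotel"),  lambda: log.append("cancel hotel")),
    (lambda: log.append("book flight"), lambda: log.append("cancel flight")),
    (fail,                              lambda: None),
]
ok = run_saga(steps)
```

Note that each compensation only needs to semantically undo its own step, which is why sagas avoid holding locks across the whole long-running operation.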
Hi, can you provide a specific scenario you have in mind that would require transactions? (would help me think) Jan On Jun 7, 2009, at 9:30 AM, mike amundsen wrote: > [...]
On Sun, Jun 7, 2009 at 8:30 AM, mike amundsen <mamund@...> wrote: > I've found that, when the operation may be long-running, the number of > resources more than a few, and/or the resources are kept within multiple > namespaces, the "do/undo" model is preferable. What if you can't undo, or if undoing is expensive? Then the provisional-final model is better.
On Sun, Jun 7, 2009 at 8:54 AM, Jan Algermissen <algermissen1971@...> wrote: > can you provide a specific scenario you have in mind that would > require transactions? (would help me think) B2B order-fulfillment scenarios. For example, a quote-to-order sequence is a form of provisional-final 2-phase-commit transaction (with no locking). Does not match all of the ACID properties, but I don't think you can do that over the open Web.
Bob: If you have a model where undo is not an option, Saga won't be of much use. I've run into this very often, but it happens. In those cases, I usually end up enlisting standard 2PC transactions over TCP/IP directly w/o HTTP. That may mean blocked HTTP interactions or - more often - creating a "pending resource" that clients can check over time (usually seconds) for a final result. mca http://amundsen.com/blog/ On Sun, Jun 7, 2009 at 10:20, Bob Haugen <bob.haugen@...> wrote: > On Sun, Jun 7, 2009 at 8:30 AM, mike amundsen <mamund@...> wrote: > > I've found that, when the operation may be long-running, the number of > > resources more than a few, and/of the resources are kept within multiple > > namespaces, the "do/undo" model is preferable. > > What if you can't undo, or if undoing is expensive? Then the > provisional-final model is better. >
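The "pending resource" Mike describes can be sketched as a small state machine the client polls; everything below (class, method, and status names) is invented for illustration, not from any real API:

```python
import itertools

class PendingStore:
    """Toy sketch of the 'pending resource' pattern: the server answers
    immediately with a resource the client can poll, and the status flips
    to a final result once the back-end transaction settles."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._jobs = {}

    def submit(self, work):
        """Accept the request and hand back a pending-resource id.
        Over HTTP this would be a 202 Accepted plus a Location header."""
        job_id = next(self._ids)
        self._jobs[job_id] = {"status": "pending", "work": work}
        return job_id

    def settle(self, job_id, outcome):
        """Called when the back-end 2PC transaction completes."""
        self._jobs[job_id]["status"] = outcome

    def poll(self, job_id):
        """What a client GET on the pending resource would return."""
        return self._jobs[job_id]["status"]

store = PendingStore()
job = store.submit({"transfer": 100})
first = store.poll(job)          # still pending
store.settle(job, "committed")   # back end finishes seconds later
second = store.poll(job)         # client sees the final result
```

The point of the sketch is that the HTTP interaction never blocks on the back-end transaction; the client trades blocking for polling.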
ha! made a slight slip: "I've run into this very often, but it happens. " should have been: "I've NOT run into this very often, but it happens. " mca http://amundsen.com/blog/ On Sun, Jun 7, 2009 at 10:25, mike amundsen <mamund@...> wrote: > Bob: > > If you have a model where undo is not an option, Saga won't be of much use. > I've run into this very often, but it happens. In those cases, I usually end > up enlisting standard 2PC transactions over TCP/IP directly w/o HTTP. That > may mean blocked HTTP interactions or - more often - creating a "pending > resource" that clients can check over time (usually seconds) for a final > result. > > > mca > http://amundsen.com/blog/ > > > > On Sun, Jun 7, 2009 at 10:20, Bob Haugen <bob.haugen@...> wrote: > >> On Sun, Jun 7, 2009 at 8:30 AM, mike amundsen <mamund@...> wrote: >> > I've found that, when the operation may be long-running, the number of >> > resources more than a few, and/of the resources are kept within multiple >> > namespaces, the "do/undo" model is preferable. >> >> What if you can't undo, or if undoing is expensive? Then the >> provisional-final model is better. >> > >
I use Sagas to model long-running operations that "enlist" multiple completions in a single unit. In other words, a client may send a representation to the server and the server, in turn, engages in a number of resource interactions (usually creating resources along the way) in order to complete the work. If one of the interactions cannot be completed it may mean previous interactions need to be 'rolled back' or canceled. The classic case I use is modeling order placement and fulfillment. For example a client may assemble a representation of an online order and send it to a server. That server may then need to create an "order" resource, a "stock" resource to debit stock, a "shipping" resource to schedule shipping, and a "payment" resource to cover the costs of the work. These steps might happen in parallel and might even involve other servers. I prefer using the Saga model since it is an "optimistic" pattern and I find that easier to model over HTTP. On the more pragmatic side, I can model the initial interaction set w/o employing the details of the saga (implementing either 'forward compensation' or 'backward compensation' steps). I can then add the compensation work later in the implementation process (sometimes weeks or months!) without much disruption to clients or proxies, etc. mca http://amundsen.com/blog/
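A minimal sketch of the backward-compensation saga Mike describes, with invented step names: each completed step registers an "undo", and if a later step fails the undos run in reverse order.

```python
class SagaAborted(Exception):
    pass

def run_saga(steps):
    """steps: list of (action, compensate) callables.
    Runs each action in order; if one fails, runs the compensations
    for the already-completed steps in reverse (backward recovery)."""
    done = []
    for action, compensate in steps:
        try:
            action()
        except Exception as exc:
            for comp in reversed(done):
                comp()  # undo previously completed work
            raise SagaAborted(f"rolled back after: {exc}") from exc
        done.append(compensate)

# Example: order placement where the payment step fails.
log = []

def fail_payment():
    raise RuntimeError("payment declined")

steps = [
    (lambda: log.append("order created"), lambda: log.append("order cancelled")),
    (lambda: log.append("stock debited"), lambda: log.append("stock credited")),
    (fail_payment, lambda: None),
]
try:
    run_saga(steps)
except SagaAborted:
    pass  # log now shows the forward steps, then the undos in reverse
```

Over HTTP each action/compensate pair would typically be a resource creation and a corresponding cancel/DELETE, but nothing here is tied to a real protocol.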
On Sun, Jun 7, 2009 at 9:25 AM, mike amundsen <mamund@...> wrote: > If you have a model where undo is not an option, Saga won't be of much use. > I've run into this very often, but it happens. In those cases, I usually end > up enlisting standard 2PC transactions over TCP/IP directly w/o HTTP. That > may mean blocked HTTP interactions or - more often - creating a "pending > resource" that clients can check over time (usually seconds) for a final > result. What's wrong with a provisional-final scenario in your opinion? The context is long-running transactions. How long do you want to block?
Bob: Well, most often my experience on transactions has been driven by back-end resources that are already committed to 2PC (databases, etc.) so I rarely had options to change that model - it was the easy path<g>. Usually these types of transactions happened within the same namespace against local resources. There were a few cases where the transactions involved remote servers and that was handled by modeling the action as a local transaction anyway. All these items were in the range of a few seconds or less. In cases where I was in the position to control the details, I found Sagas (including the option of forward and backward recovery) more appealing and, for me, easier to implement over HTTP than provisional-final. At the time I learned this I was working with transactions that could take minutes even more than an hour to sort out all parties. Often it meant working out details of alternate resolutions (hence the appeal of forward recovery modeling) and/or could mean offering clients the opportunity to resubmit work using alternate data. Sagas made that relatively easy to do w/o the need for locking resources along the way. Can you point me to some refs on provisional-final implementations? I'd be happy to look at them again. mca http://amundsen.com/blog/ On Sun, Jun 7, 2009 at 10:30, Bob Haugen <bob.haugen@...> wrote: > On Sun, Jun 7, 2009 at 9:25 AM, mike amundsen <mamund@...> wrote: > > If you have a model where undo is not an option, Saga won't be of much > use. > > I've run into this very often, but it happens. In those cases, I usually > end > > up enlisting standard 2PC transactions over TCP/IP directly w/o HTTP. > That > > may mean blocked HTTP interactions or - more often - creating a "pending > > resource" that clients can check over time (usually seconds) for a final > > result. > > What's wrong with a provisional-final scenario in your opinion? > > The context is long-running transactions. How long do you want to block? >
On Sun, Jun 7, 2009 at 9:41 AM, mike amundsen<mamund@...> wrote: > Can you point me to some refs on provisional-final implementations? I'd be > happy to look at them again. Just google for: provisional-final transaction or "Tentative Business Operations" or "escrow transactional method" Or, from this group: http://tech.groups.yahoo.com/group/rest-discuss/message/8755 There's another term or two for it, but I can't remember it right now, and can't find the reference. Or, if you have ever done a quote-to-order sequence, you have done what amounts to a provisional-final transaction. Or, options are provisional-final transactions.
Thanks, Bob. mca http://amundsen.com/blog/ On Sun, Jun 7, 2009 at 10:53, Bob Haugen <bob.haugen@...> wrote: > On Sun, Jun 7, 2009 at 9:41 AM, mike amundsen<mamund@...> wrote: > > Can you point me to some refs on provisional-final implementations? I'd > be > > happy to look at them again. > > Just google for: > provisional-final transaction > or > "Tentative Business Operations" > or > "escrow transactional method" > > Or, from this group: > http://tech.groups.yahoo.com/group/rest-discuss/message/8755 > > There's another term or two for it, but I can't remember it right now, > and can't find the reference. > > Or, if you have ever done a quote-to-order sequence, you have done > what amounts to a provisional-final transaction. > > Or, options are provisional-final transactions. >
What I cannot get my head around is that a media type (when you look at existing media types) apparently combines expectations about how the sent message affects the state of the recipient and about which parser the message should be dispatched to. In a reply (October 3rd, 2006) Roy provided the following example: "Think of it this way: your browser receives two messages, one says it is application/quicken and the other says it is application/logfile. Both have identical content consisting of an invoice. Should your browser assume that both should be processed as an invoice just because they have the same content? Why should the browser behave any differently if the data format happens to be an instance of XML?" This quote implies an orthogonality between media type and XML schema that is just not applied in existing media type specifications. When viewed this way, media type specifications would need to define a processing expectation and reference one or more applicable message schemas. I like this view because it allows forms like AtomPub's <accept> element to be much more expressive: that a resource accepts application/xhtml+xml does not really reveal much in terms of choosing an appropriate state transition, whereas <accept>application/order</accept> does. (Assuming that the defined processing expectation is 'look at that order and let me know if you will fulfill it or not'.) It would also prevent media type explosion when dealing with more diverse (enterprise) domain types[1] than found in the HTML-Web world. OTOH this view contradicts the common (AFAIU) understanding that media types represent a series of compatible schemas. Clues? Jan [1] Thinking of Account, Party, Product, Contract,... - people often seem to want to make up a media type for each one of them, which is IMHO not the right way to apply media types to the enterprise domain. 
On Jun 6, 2009, at 8:07 PM, johnzabroski wrote: > --- In rest-discuss@yahoogroups.com, Jan Algermissen > <algermissen1971@...> wrote: >> >> Roy, >> >> in a number of postings you write, that you did not in depth cover >> the >> topic of media type design in your dissertation. No problem I >> thought, >> I'll just fire a search on fielding+"media type design" and should be >> taken to a bunch email snippets that I could harvest. >> >> Turns out that most hits are on the phrase "the entire topic of media >> type design which I left out because I ran out of time" :-) >> >> Could you suggest a more suitable query or a certain forum or even >> references I could take as a starting point for an investigation? > > > Jan, it is so funny that you post this question. > > I was thinking of e-mailing Roy to say, "Why don't you add stuff on > media types to your thesis, and then resubmit it to UC-Irvine?" > After all, his REST thesis _has_ to be the most widely read Computer > Science thesis in history. If he feels he left something important > out of it, why not go back and "complete" it? This would obviously > be unprecedented, but my question is _why not_. > > > > ------------------------------------ > > Yahoo! Groups Links > > >
Hi Bob! It's been a while :-) The compensating transaction model in the reference Bill sent round is what you're looking for I think (cf atoms in BTP). This work began life in HP way back before we started working on BTP and there was definitely some cross-pollination. Mark. On 2 Jun 2009, at 17:07, Bob Haugen wrote: > I have not studied the proposal in depth yet, so I may comment more > after I do so. > > But my immediate response is that I think another less-well-known > transaction pattern is more appropriate for the Web in general and > ReST in particular. > > That is variously called "provisional-final" or "options" (among > other names). > > It does not require locking, nor does it require compensating actions > (which are either troublesome or impossible). > > The basics are: > 1. In the first phase, all participants update their resources > provisionally (whether by state or by separate provisional resource), > 2. Upon commit, all participants update their resources in their final > state (or create final resources). > 3. Upon abort or cancel, all participants delete their provisional > resources, or mark them cancelled, or create new cancelled resources. > > The pattern also allows selective commits or cancels, for example for > a bidding process. > > It was implemented in OASIS BTP, which could also be made RESTful > without a lot of work. > http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=business-transaction > > > ------------------------------------ > > Yahoo! Groups Links > > >
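The three provisional-final steps quoted above might be sketched roughly like this; the class, method, and state names are made up for illustration and are not from BTP or any real implementation:

```python
class Participant:
    """Toy participant in a provisional-final transaction: phase 1
    records work provisionally, phase 2 either finalizes or cancels.
    No locking and no undo of real work is needed."""

    def __init__(self):
        self.resources = {}  # resource id -> (state, data)
        self._next = 0

    def prepare(self, data):
        """Phase 1: update (here, create) the resource provisionally."""
        self._next += 1
        rid = self._next
        self.resources[rid] = ("provisional", data)
        return rid

    def commit(self, rid):
        """Phase 2, commit: the provisional resource becomes final."""
        state, data = self.resources[rid]
        assert state == "provisional"
        self.resources[rid] = ("final", data)

    def cancel(self, rid):
        """Phase 2, abort: mark the provisional resource cancelled."""
        state, data = self.resources[rid]
        if state == "provisional":
            self.resources[rid] = ("cancelled", data)

# Quote-to-order: a provisional quote is later committed into an order.
p = Participant()
quote = p.prepare({"item": "widget", "qty": 10})
p.commit(quote)
```

A coordinator doing selective commits (as in a bidding process) would simply call `commit` on the winning participants and `cancel` on the rest.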
On Sunday 07 June 2009, Jan Algermissen wrote: > It would also prevent media type explosion when dealing with more > diverse (enterprise) domain types[1] than found in the HTML-Web > world. [...] > [1] Thinking of Account, Party, Product, Contract,... - people often > seem to want to make up a media type for each one of them, which is > IMHO not the right way to apply media types to the enterprise domain. Why isn't it? And what would be the right way? Michael -- Michael Schuerig mailto:michael@... http://www.schuerig.de/michael/
On Fri, Jun 5, 2009 at 5:58 PM, Bill de hOra <bill@...> wrote: > I don't have a PhD, but the terms/neologisms presented here haven't > helped my understanding on this principle, or provided a grasp I could > use to help others. So far, nothing compelling on this list. I'm going to chime in as to what I think this all means, relying on intuition and observation based upon reading the O'Reilly book, rather than formal vocabularies and PhD's. It seems to me that, whatever acronym you want to use, the "Hypermedia Engine" is basically the concept that resources should include not just their actual data, but representations of applicable state changes for that resource. Simply, resources include links for other operations. There are several reasons for this. One is robustness. Since the commands are included within the resource, the actual mechanism for executing that command (the specific text of the link reference) can change over time without the client having to know or care about that detail. This maintains the "URIs are opaque" concept. Thus, while systems move and evolve, the clients can maintain stability. Two, extensibility. If the resource is itself naturally extensible, i.e. XML, JSON, or some other format, then the command set for a resource can change over time but older clients can remain stable. Three is discovery. When you have an extensible interface, bundling the available state changes can allow a developer of a client to be instantly aware of new functionality. This can happen before they're documented, before they're announced, etc. We have all been to a website and seen new links and actions appear over time with perhaps hardly a 2-line announcement from the site owner. Now, given this, there are some conditions. Some formats simply aren't extensible. You can't augment a JPEG, for example. But opaque formats tend to get wrappers for that very reason. 
There is no expectation for clients (and by clients, I mean programs) to necessarily "intuit" what new commands do, or when they should be used. If the command set changes, it's implied that someone, somewhere, will need to update their client code to make use of those new services. Also, obviously, the extra command set adds data to the overall payload, making it less efficient. We tolerate this on a human-driven web page because the discoverability is related to ease of use (no expectation that a user would want to type in the address bar "http://www.example.com/item/12345/reviews" or whatever). No, they'll just click a link. But, arguably, if you want to maintain the extensibility and robustness of the system, and allow it to change beneath the client's feet, you as a producer are obligated to provide this information every time, in every packet. Given that, tho, there is no expectation that a resource format remain static at all. If one day the service sends back format A, and the next it sends out incompatible format B, then that's just the truth of it and clients will have to abruptly deal with that situation. Clearly, if you're using XML, it would be kind to change the schema declaration so that a client can "fail fast", rather than slogging through unfamiliar XML. It would be nice if the service provider sent out some notice about the pending change as well, so as not to disrupt clients. But, be that as it may, since the state changes are bundled within the payload, on the off chance someone is blindsided by a format change, the new packet maintains its "discoverability", ideally allowing a developer to adapt quickly to the format as is, without necessarily having the formal documentation from the provider. So, through these mechanisms, the network can remain robust, and even have a bit of "self-repairability". Certainly not in an automated sense, humans will be involved, but the quality is still there and it can be effective. 
Imagine a format change from the perspective of a consumer on the opposite side of the world. That consumer may well be able to be back up and running with no input from the provider, rather than going through a lengthy email exchange 12 hrs apart. Fixing the problem in 1hr vs several days of playing email tag because neither party is conscious at the same time as the other. Anyway, that's the way I read this. This is my understanding of what is trying to be accomplished. Regards, Will Hartung
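The "links included in the representation" idea Will describes can be illustrated with a toy client helper; the representation shape and the rel names below are invented, not taken from any real format:

```python
# A client that never builds URIs itself: it picks the transition it
# wants by link relation and follows whatever href the server sent.

item = {
    "name": "widget",
    "links": [
        {"rel": "self",    "href": "/item/12345"},
        {"rel": "reviews", "href": "/item/12345/reviews"},
    ],
}

def follow(representation, rel):
    """Return the href for a given relation, or None if the server
    does not (or no longer) offers that transition."""
    for link in representation.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

reviews_uri = follow(item, "reviews")
```

If the server later moves reviews to a different URI, only the `href` in the representation changes; the client code above is untouched, which is the robustness point Will makes.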
On 07.06.2009, at 21:17, Will Hartung wrote: > It seems to me that, whatever acronym you want to use, the "Hypermedia > Engine" is basically the concept that resources should include not > just their actual data, but representations of State applicable state > changes for that resource. I disagree. What's changing is not (or at least doesn't have to be) the state of the resource, but the state of the application. When my browser displays an HTML page to me, the application state is "user views page X". There might be links and forms included that enable me and the browser to change this application state to "user views page Y" or "user edits resource Z". Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Will, On Jun 7, 2009, at 3:17 PM, Will Hartung wrote: > > I'm going to chime in as to what I think this all means, relying on > intuition and observation based upon reading the O'Reilly book, rather > than formal vocabularies and PhD's. Roy's dissertation is the definitive source and I really suggest you work through it. It is very readable and besides defining REST it is very good material on software architecture and principled design in general. Changed my mind entirely. > > It seems to me that, whatever acronym you want to use, the "Hypermedia > Engine" is basically the concept that resources should include not > just their actual data, but representations of applicable state > changes for that resource. This is not what "hypermedia as the engine of application state" is about; Stefan is correct. It refers to the notion that the (Web) application is driven by the client traversing links and that the current state of the application resides on the client. If the client does not act, the application does not proceed. Think application==one particular book-buying process someone makes on Amazon. You can interrupt the execution of the buying application at any time. When you return to your browser you can keep going. The application will have waited for you. Your use of the term 'command' seems like you are thinking RPC - do you? Jan > > > Simply, resources include links for other operations. > > There are several reasons for this. > > One, is robustness. > > Since the commands are included within the resource, the actual > mechanism for executing that command (the specific text of the link > reference), can change over time without the client having to know or > care about that detail. This maintains the "URI are opaque" concept. > > Thus, while systems move, and evolve, the clients can maintain > stability. > > Two, extensibility. > > If the resource is itself naturally extensible, i.e. 
XML, JSON, or > some other format, then the command set for a resource can change over > time but older clients can remain stable. > > Three, is discovery. > > When you have an extensible interface, bundling the available state > changes can allow a developer of a client to be instantly aware of new > functionality. This can happen before they're documented, before > they're announced, etc. > > We have all been to a website and seen new links and actions appear > over time with perhaps hardly 2 line announcement from the site owner. > > Now, given this, there are some conditions. > > Some formats simply aren't extensible. You can't augment a JPEG, for > example. But opaque formats tend to get wrappers for that very reason. > > There is no expectation for clients (and by clients, I mean programs) > to necessarily "intuit" what new commands do, or when they should be > used. If the command set changes, it's implied that someone, > somewhere, will need to update their client code to make use of those > new services. > > Also, obviously, the extra command set adds data to the overall > payload, making it less efficient. > > We tolerate this on a human driven web page because the > discoverability is related to ease of use (no expectation that a user > would want to type in the address bar > "http://www.example.com/item/12345/reviews" or whatever. No, they'll > just click a link. > > But, arguably, if you want to maintain the extensibility and > robustness of the system, and allow it to change beneath the clients > feet, you as a producer are obligated to provide this information > every time, in every packet. > > Given that, tho, there is no expectation that a resource format remain > static at all. If one day the service sends back format A, and the > next it sends out incompatible format B, then that's just the truth of > it and clients will have to abruptly deal with that situation. 
> Clearly, if you're using XML, it would be kind to change the schema > declaration so that a client can "fail fast", rather than slogging > through unfamiliar XML. > > It would be nice if the service provider sent out some notice about > the pending change as well, so as not to disrupt clients. But, be that > as it may, since the state changes are bundled within the payload, on > the off chance someone is blind sided by a format change, the new > packet maintains its "discoverability" ideally allowing a developer to > adapt quickly to the format as is, without necessarily having the > formal documentation from the provider. > > So, through these mechanisms, the network can remain robust, and even > have a bit of "self-repairability". Certainly not in an automated > sense, humans will be involved, but the quality is still there and it > can be effective. > > Imagine the format changes from a consumer on the opposite side of the > world. That consumer may well be able to be back up and running with > no input from the provider, rather than going through a length email > exchange 12 hrs apart. Fixing the problem in 1hr vs several days of > playing email tag because neither party is conscious the same time as > the other. > > Anyway, that's the way I read this. This is my understanding of what > is trying to be accomplished. > > Regards, > > Will Hartung > > > ------------------------------------ > > Yahoo! Groups Links > > >
> I like this view because it allows forms like AtomPub's <accept> > element to be much more expressive: that a resource accepts > application/xhtml+xml does not really reveal much in terms of choosing > an appropriate state transition, whereas <accept>application/order</ > accept> does. (Assuming that the defined processing expectation is > 'look at that order and let me know if you will fulfill it or not. application/order is an illegal media type, b/c it is not registered by IANA. you want something like: application/vnd.rest-discuss.order Media Type: application Media Subtype: Vendor Tree - vnd.rest-discuss.order or application/prs.jan-algermissen.order Media Type: application Media Subtype: Personal Tree - prs.jan-algermissen.order Note the use of faceted names. See: RFC 4288 Sec 3.2 Vendor Tree http://tools.ietf.org/html/rfc4288#section-3.2 RFC 4288 Sec 3.3 Personal or Vanity Tree http://tools.ietf.org/html/rfc4288#section-3.3 Although I can see use cases for an order media type and such stuff, this is not how I do things. Where have you seen such examples?
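The faceted naming above can be illustrated with a toy parser that pulls the registration tree out of a media type name; this is a sketch, not an RFC 4288-compliant implementation:

```python
def media_type_tree(media_type):
    """Classify a media type by its registration tree based on the
    facet prefix of the subtype: 'vnd.' (vendor), 'prs.' (personal),
    or none (standards tree). Returns (tree, remainder-of-subtype)."""
    type_, _, subtype = media_type.partition("/")
    facet, _, rest = subtype.partition(".")
    if facet == "vnd" and rest:
        return "vendor", rest
    if facet == "prs" and rest:
        return "personal", rest
    return "standards", subtype

tree, name = media_type_tree("application/vnd.rest-discuss.order")
```

Under these rules "application/order" parses as a standards-tree subtype, which is exactly why it would need an IANA registration to be legal.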
On Jun 7, 2009, at 8:22 PM, johnzabroski wrote: > >> I like this view because it allows forms like AtomPub's <accept> >> element to be much more expressive: that a resource accepts >> application/xhtml+xml does not really reveal much in terms of >> choosing >> an appropriate state transition, whereas <accept>application/order</ >> accept> does. (Assuming that the defined processing expectation is >> 'look at that order and let me know if you will fulfill it or not. > > application/order is an illegal media type, b/c it is not registered > by IANA. > > you want something like: > > application/vnd.rest-discuss.order Sure - application/order was simply an example. > Although I can see use cases for an order media type and such stuff, > this is not how I do things. Well, I am focussing on applying REST to machine-to-machine scenarios and you cannot really get very far with the media types from the human Web. When there are no humans involved to solve the last semantic layer your media types just have to be a bit more expressive. This does not mean that you need media types for every domain class you come across. Rather, we will IMHO need media types that enable some core collaboration patterns (e.g. order acceptance). > Where have you seen such examples? Not of the detail application/order. OTOH, if UBL was not intended as a pure message passing format but would go about and define its own media types we would be pretty close. UBL has the whole order-acceptance collaboration well defined and minting a set of media types from that would not be a problem, IMHO. NewsML2 (it has a media type) contains a contract (publicationStatus) between sender and receiver through which the sender can control the publishing state of a news item by the receiver. So, a resource that accepts NewsML2 implicitly agrees to this contract. It will take a while, but hopefully we are getting there. Jan
On 08.06.2009, at 03:25, Jan Algermissen wrote: > Not of the detail application/order. OTH, if the UBL was not intended > as a pure message passing format but would go about and define its own > media types we would be pretty close. UBL has the whole order- > acceptance collaboration well defined and minting a set of media types > from that would not be a problem, IMHO. I agree. I've recently started to advocate using a media type for a collection of related documents - something about halfway between application/xml and application/vnd.my-company.order+xml (e.g. in this case, it might be application/ubl-procurement+xml). Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Jun 8, 2009, at 1:40 AM, Stefan Tilkov wrote: > On 08.06.2009, at 03:25, Jan Algermissen wrote: >> Not of the detail application/order. OTH, if the UBL was not intended >> as a pure message passing format but would go about and define its own >> media types we would be pretty close. UBL has the whole order- >> acceptance collaboration well defined and minting a set of media types >> from that would not be a problem, IMHO. > I agree. I've recently started to advocate using a media type for a > collection of related documents - something about halfway between > application/xml and application/vnd.my-company.order+xml (e.g. in this > case, it might be application/ubl-procurement+xml). Yes, this aligns with the thoughts I had yesterday after sending the emails. I'd now argue that the media type identifies the application (application kind) that is supposed to handle the message. In this view, it would be perfectly fine to have a bunch of related schemas (or root elements of one schema) because the application that can handle the media type would need to know how to deal with them anyway. This shifts my original thinking from the media type sort of conveying the sender's intent to telling the recipient in what application context to process it. And this is I think why Roy used application/quicken and not something like application/order. The media type is not there to say *what* the message is, but in what application context it would be understood correctly. application/ubl-procurement+xml emphasizes that quite nicely, I think. Jan
I had posted a similar query at http://stackoverflow.com/questions/880881/rest-media-type-explosion. I did get quite a few interesting answers, appreciate your thoughts on this. Suresh On Mon, Jun 8, 2009 at 11:10 AM, Stefan Tilkov <stefan.tilkov@...>wrote: > > > On 08.06.2009, at 03:25, Jan Algermissen wrote: > > > Not of the detail application/order. OTH, if the UBL was not intended > > as a pure message passing format but would go about and define its own > > media types we would be pretty close. UBL has the whole order- > > acceptance collaboration well defined and minting a set of media types > > from that would not be a problem, IMHO. > > I agree. I've recently started to advocate using a media type for a > collection of related documents - something about halfway between > application/xml and application/vnd.my-company.order+xml (e.g. in this > case, it might be application/ubl-procurement+xml). > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > -- When the facts change, I change my mind. What do you do, sir?
> And this is I think why Roy used application/quicken and not >something like application/order. The media type is not there to say >*what* the message is, but in what application context it would be >understood correctly. > > application/ubl-procurement+xml emphasizes that quite nicely, I think. > > Jan > (Mistakenly sent this to Jan directly) But if that application context is large and consumption is not genericized a la the way browsers simply display anything application/xml, then beyond just knowing what a media type represents, a consumer also needs to peek into it to determine what it contains. If an order fulfillment service only deals with orders this may not be a problem, but if it also deals with invoices, a client would have to inspect the representation to determine whether an "order" or an "invoice" was returned. I would think it needs to represent both application context and data.
In http://osdir.com/ml/web.services.rest/2005-07/msg00012.html I picked up this: application/vnd.somebody.purchaseorder.v13+xml Would that actually be valid? I cannot determine from the registration RFC if the third dot is valid. What about: application/vnd.somebody.procurement.purchaseorder.v13+xml (I am heading for a media type that bundles some common syntactical features but might use a load of different possible XML root elements. In addition, these are expected to evolve independently, hence the version on the actual type). An alternative solution would be to stick the version into a version parameter but I am not able to see the consequences, yet. Thanks for any thoughts. Jan Regarding the poking: the application would IMHO have a defined behavior for each document type anyhow and be known to do the correct thing for an invoice or order, or? On Monday, June 08, 2009, at 07:42PM, "Ebenezer Ikonne" <amaeze@...> wrote: >> And this is I think why Roy used application/quicken and not >something like application/order. The media type is not there to say >*what* the message is, but in what application context it would be >understood correctly. >> >> application/ubl-procurement+xml emphasizes that quite nicely, I think. >> >> Jan >> > >(Mistakenly sent this to Jan directly) > >But if that application context is large and consumption is not genericized a la the way browsers simply display anything application/xml, then beyond just knowing what a media type represents, a consumer also needs to peek into it to determine what it contains. If an order fulfillment service only deals with orders this may not be a problem, but if it also deals with invoices, a client would have to inspect the representation to determine whether an "order" or an "invoice" was returned. I would think it needs to represent both application context and data. >
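The two versioning options Jan weighs (a ".vNN" segment in the subtype versus a "version" parameter) can be illustrated with a pair of toy extractors; both are sketches, not full media-type parsers:

```python
import re

def version_from_subtype(media_type):
    """Find a '.vNN' segment just before the '+xml' suffix, if any,
    as in application/vnd.somebody.purchaseorder.v13+xml."""
    m = re.search(r"\.v(\d+)\+", media_type)
    return int(m.group(1)) if m else None

def version_from_parameter(header_value):
    """Read a 'version' parameter from a Content-Type header value,
    as in application/vnd.somebody.purchaseorder+xml; version=13."""
    for part in header_value.split(";")[1:]:
        key, _, value = part.strip().partition("=")
        if key == "version":
            return int(value)
    return None

a = version_from_subtype("application/vnd.somebody.purchaseorder.v13+xml")
b = version_from_parameter("application/vnd.somebody.purchaseorder+xml; version=13")
```

One practical difference: the parameter form keeps a single subtype name across versions, so intermediaries that dispatch on the bare media type treat all versions alike, while the subtype form makes each version a distinct type.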
I believe your first example is valid and used; not too sure about the
second. I prefer to version my media types, but I'm sure that's also a
point of debate. How are you distinguishing with unique media types?

On Mon, Jun 8, 2009 at 5:22 PM, Jan Algermissen <algermissen1971@...> wrote:

> In http://osdir.com/ml/web.services.rest/2005-07/msg00012.html I picked
> up this:
>
> application/vnd.somebody.purchaseorder.v13+xml
>
> Would that actually be valid? I cannot determine from the registration
> RFC if the third dot is valid.
>
> What about:
>
> application/vnd.somebody.procurement.purchaseorder.v13+xml
>
> (I am heading for a media type that bundles some common syntactical
> features but might use a load of different possible XML root elements.
> In addition, these are expected to evolve independently, hence the
> version on the actual type.)
>
> An alternative solution would be to stick the version into a version
> parameter, but I am not able to see the consequences yet.
>
> Thanks for any thoughts.
>
> Jan
>
> Regarding the poking: the application would IMHO have a defined behavior
> for each document type anyhow and be known to do the correct thing for
> an invoice or order, or?
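To make Jan's question concrete, here is a small sketch (Python; the function name and the version convention are invented for illustration) of splitting his example apart the way a client might. If I read the registration RFC's subtype grammar correctly, extra dots in the vnd. tree are syntactically fine, so the split is mechanical:

```python
import re

def parse_vendor_media_type(media_type):
    """Split a vendor media type such as
    application/vnd.somebody.purchaseorder.v13+xml into its parts.
    Dots inside the subtype appear to be allowed by the registration
    rules, so the 'third dot' in Jan's example parses fine."""
    m = re.match(
        r"(?P<type>[\w-]+)/(?P<subtype>[\w.\-]+?)(?:\+(?P<suffix>[\w-]+))?$",
        media_type,
    )
    if m is None:
        raise ValueError("not a media type: %r" % media_type)
    parts = m.groupdict()
    # Convention only: treat a trailing '.vNN' facet as a version marker.
    vm = re.search(r"\.v(\d+)$", parts["subtype"])
    parts["version"] = int(vm.group(1)) if vm else None
    return parts

print(parse_vendor_media_type("application/vnd.somebody.purchaseorder.v13+xml"))
```

The alternative Jan mentions, a `version` media type parameter, would leave the subtype stable and move the `13` into parameter parsing instead.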
On Sat, Jun 6, 2009 at 8:39 AM, Roy T. Fielding <fielding@...> wrote:
>
> On Jun 6, 2009, at 12:40 PM, Nick Gall wrote:
>>
>> I thought REST was the STYLE! Now we have the style of a style? I.e, REST
is substyle of the hypermedia style?
>
> REST is a composition of constraints that come from many styles.
>
>> So I went back to the thesis to see how you defined hypermedia only to
discover that you don't. AFAICT there is only one definition constraining
hypermedia in your thesis: Hypermedia is defined by the presence of
application control information embedded within, or as a layer above, the
presentation of information. (I notice that assertion is not footnoted.)
>>
>> This sentence really doesn't help, since it uses the phrase "defined
by", not "defined as". Thus, the sentence doesn't say what hypermedia is;
it merely imposes a constraint on the concept of hypermedia -- whatever it
may be.
Nowhere in the thesis is hypermedia (style) ever cited or defined. Or is
hypermedia not a style but the single architectural constraint provided by
the quoted sentence? All this kind of makes it a moving target.
>
> *shrug* I didn't think it needed to be "defined as" (at the time).
> Too many of my friends are experts in hypertext research and they
> probably would have poked mercilessly at my final defense.
>
>> [Current debate aside, I'd be really interested to hear your definition
of hypermedia, or even just see some pointers to others' definitions that
define it as an interaction style. All the definitions I've ever seen define
hypermedia as a kind of media. Even Ted Nelson defined hypermedia as simply
the medium, not the style.]
>
> See slide 35 (pp. 50-53) of
>
> http://roy.gbiv.com/talks/200804_REST_ApacheCon.pdf
Thanks for the pointer. It was immensely useful. I recommend EVERYONE
interested in REST read it. Is there audio of this talk anywhere? I'd love
to hear the details about these slides.
One question I was about to ask you is apparently answered in this pitch.
HATEOAS can be replaced with HITEOAS:
"Hypermedia IS the Engine of Application State." (see slide 75)
I find IS far clearer than AS. I think others will as well.
>> All that being said, my argument still holds. Even if hypermedia is
defined as an interaction style (or something like a style) as opposed to a
particular kind of media (data), a style is no more an engine than a set of
HTML documents is an engine.
>
> Oh, really? I wonder what you think engine means.
>
> http://en.wikipedia.org/wiki/The_Engine
>
> An engine is a system for transforming input into some form
> of output. The engine in a car is a system for transforming
> gasoline into torque that can be applied to a drive axle.
>
> My little bullet of a constraint
>
> "hypermedia as the engine of application state"
>
> does not say that the engine is a hypertext document. It describes
> the engine as being a hypermedia system, much like a car's engine
> would be described as an internal combustion system.
I think it's fair to say that your thesis is vague regarding what
hypermedia refers to. When "hypermedia" is used as a noun and not as an
adjective (e.g. hypermedia document, hypermedia link) it is not clear what
it refers to. The
same thing is true in the presentation you link to above. On pages 50-53,
you highlight three different definitions of "hypertext", none of which
specifically refer to a system. The first two (Nelson's and Conklin's)
specifically refer to hypermedia being a "text" and a "medium" -- not the
complete system surrounding such text/media. So even if YOU mean
hypertext/hypermedia=complete system (not just the media of the system),
most people won't know that -- hence the confusion I've been alluding to
from the beginning of this thread.
Even your definition can be read as referring to just the
documents/text/media:
When I say Hypertext, I mean ...
- The simultaneous presentation of information and controls such that the
information becomes the affordance through which the user obtains choices
and selects actions.
- Hypertext does not need to be HTML on a browser
- machines can follow links when they understand the data format and
relationship types
There is NO mention of the entire system. The focus is on blending controls
into information, aka a medium. "Hypertext does not need to be HTML"
suggests that hypertext can be another kind of document (eg XML document).
Roy, you have to admit that most people (even most developers) think of
hypermedia as a kind of document or media -- not the entire system
surrounding such media. Even wikipedia defines
hypertext <http://en.wikipedia.org/wiki/Hypertext> as the text, not the
system. Thus, when they first hear "hypermedia is/as
the engine of application state", they're going to think that a document or
set of documents is the engine. Yes, this initial misunderstanding can be
corrected by further explanation, but why cause the confusion in the first
place by using a term "hypermedia" that most people interpret as referring
to the documents/media. Why not at least change the term to
*Hypermedia System IS The Engine of Application State*?
You use the term "hypermedia system" extensively in your thesis so there
shouldn't be a problem with adding "system" to the term to clarify your
intent. (BTW, one would think that if "hypermedia" alone referred to the
system, that "hypermedia system" would be redundant.)
>
> I did not cite any specific reference for that because (AFAIK)
> there doesn't exist any specific reference. I was doing synthesis.
> Nelson's definition is tied to what he cared about -- non-linear
> writing as a form of poetry. Conklin was entirely focused on
> graphical user interfaces, so his definition is tied directly to
> GUI affordances. My observation is something that I considered
> to be inherent in the design largely because the Web was based
> on Engelbart's view of hypertext, but AFAIK Engelbart never actually
> defined the term other than by how it was used in Augment/NLS
>
>> To support this claim, let me quote further from the paragraph in which
the above quoted sentence appears (4.1.3):
>>
>> Hypermedia is defined by the presence of application control information
embedded within, or as a layer above, the presentation of information.
Distributed hypermedia allows the presentation and control information to be
stored at remote locations. By its nature, user actions within a distributed
hypermedia system require the transfer of large amounts of data from where
the data is stored to where it is used.
>>
>> Yet again, we see that it is user actions (powered by the browser
software) that are singled out as driving the application ("user actions ...
require the transfer of ... data") -- the system does not drive itself.
>
> User actions are part of the system being designed. That paragraph
> is talking about a design constraint imposed by the requirement that
> the information be distributed all around the world. Aside from using
> the same term in two different ways, I don't see how that has anything
> to do with your point. Whether or not the system drives itself is
> irrelevant.
>
>> Saying that hypermedia (the entire system or style) is the engine is like
saying the entire automobile (or the architectural style called
"automobile") is the engine. It may be true in a Zen-like way (and believe
me, I LOVE mystical philosophies), but it is utterly confusing to 99% of
humanity.
>
> 99% of humanity was not my audience, and I'll hasten to bet that less
> than 1% of humanity knows what an engine means even for something as
> mundane as an automobile.
>
>> It is far clearer to the rest of humanity to say that the browser (or
more generally user agent) is the engine of state, the user is the driver of
state, and hypermedia is the representation of state.
>
> The only reason that is clearer to the rest of humanity is
> because it is wrong. It's like saying the Web is defined by
> what a user sees in MSIE. I don't care how easy it may be for
> a non-educated user to understand that definition: it is wrong
> and I have no interest in peddling simplified forms.
I shouldn't have referred to 99% of humanity. I should have said 99% of
developers are confused by HATEOAS and would be utterly confused by the
statement that the entire hypermedia system, including the user, the
browser, the server, etc. is the engine. If the entire set of components,
connectors, and data that constitute a hypermedia system are collectively
the engine, that doesn't really help us deal with where state should be
stored or processed. State could be diffused throughout the entire system.
Since the entire system is the engine, the application state it is
transforming could exist anywhere in the system. But we know that REST does not
allow that, since all application state must be on the client.
Let's look at how you describe HITEOAS on slide 49 to see if that helps the
current discussion:
REST Uniform Interface
Hypertext as the engine of application state
- A successful response indicates (or contains) a current *representation*
of the state of the identified resource; the *resource remains hidden*
behind the interface.
- Some *representations contain links* to potential next application
states, including *direction on how to transition* to those states when a
transition is selected.
- Each steady-state (Web page) embodies the current *application state*
- simple, visible, scalable, reliable, reusable, and cacheable
- All application state (not resource state) is kept on client
- All shared state (not session state) is kept on origin server
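A toy sketch of the second bullet above: a representation that carries links to potential next application states along with the direction (here, an HTTP method) for making the transition. The account data echoes the example at the top of the thread; the link structure and "rel" names are invented for illustration.

```python
# A representation with application control information embedded alongside
# the data, as the slide describes. All link names/URIs are hypothetical.
representation = {
    "account": "010123101",
    "balance": 250.00,
    "links": [
        {"rel": "deposit",  "method": "POST",
         "href": "/accounts/010123101/deposits"},
        {"rel": "withdraw", "method": "POST",
         "href": "/accounts/010123101/withdrawals"},
    ],
}

def next_transitions(rep):
    """A client discovers its possible next application states only by
    reading the links in the current representation -- not from
    out-of-band URI conventions."""
    return {l["rel"]: (l["method"], l["href"]) for l in rep.get("links", [])}

print(next_transitions(representation))
```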
Allow me to tease out some possible implications of these assertions (do you
agree with these?):
1. Not all representations must contain links to next possible
application states -- contrary to some descriptions of HATEOAS by others
2. Besides links to next states, the representation may (must?) contain
metadata on how to transition to those states
3. The current application state is completely embodied in the set of
representations that constitute a web page (eg including inline
representations)
4. No resource state is kept on the client
5. Only representations of the resource state are kept on the client
6. No application state is kept on the server
7. Potential application states are generated on the server and returned
in resource representations
8. Since all application state is kept on the client, only the client can
initiate a state change
9. The server generates state transition options and the client selects
them
10. The server can never initiate a change in application state
11. The server can provide potential application state changes to the
client
12. The server can only initiate a change to a resource's state
13. The client can only initiate a change to a resource by requesting the
server initiate the change
14. Application state is not shared state
15. Application state is private to the client
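Implications (8)-(11) can be sketched in a few lines: the server function below is stateless between calls and merely returns, for a given resource, the transitions it is willing to honour; the only application state is the URI the client currently holds. All names and URIs are invented for illustration.

```python
def server(resource):
    # shared (resource) state lives on the server; no per-client session
    catalog = {
        "/orders/1": {"status": "open",
                      "links": {"cancel": "/orders/1/cancel",
                                "pay": "/orders/1/payment"}},
    }
    return catalog[resource]

client_state = "/orders/1"          # application state, kept on the client
rep = server(client_state)          # server proposes transition options...
client_state = rep["links"]["pay"]  # ...and the client disposes (selects)
print(client_state)
```

Only the last line changes application state, and it happens on the client; the server never initiated anything.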
One thing I love about implication (9) is that it calls to mind the old
adage "Man proposes but God disposes." In the case of REST, "the server
proposes (application state transition options), but the client disposes (by
selecting among the options)." Ted Nelson even refers to this adage in an
interview <http://news.bbc.co.uk/1/hi/sci/tech/1581891.stm>: "I think of it
as a form of writing - and writing is essentially what I would call a
two-God system, because God the author proposes and God the reader disposes.
The author is completely free to do anything on the page that he likes."
Thus the server guides but does not control the client.
Since it is always the client that initiates application state changes by
sending a new request to the server, it seems completely intuitive to me to
view the client as the engine. Just as the engine in a car initiates the
state change of the axle. Thus, for those who prefer an acronym with
"engine" in it, I think the following convey more clearly what is going on:
1. Client is the Engine of Application State Transitions Represented as
Hypermedia Links (CEASTRAHL)
2. Client Traversal of Hypermedia Links Is the Engine of Application
State (CTOHLIEAS)
3. Client is an Engine Driving a Hypermedia Representation Of a Protocol
(CEDHROP)
4. The Engine of Application State: Server Generating Representations
With Data-Guided Controls & Client Selecting Them
This discussion clarified HATEOAS a lot for me. Thanks Roy and everyone
else.
-- Nick
--
Nick Gall
Phone: +1.781.608.5871
AOL IM: Nicholas Gall
Yahoo IM: nick_gall_1117
MSN IM: (same as email)
Google Talk: (same as email)
Email: nick.gall AT-SIGN gmail DOT com
Weblog: http://ironick.typepad.com/ironick/
On 7 Jun 2009, at 13:29, Bob Haugen wrote:

> On Sun, Jun 7, 2009 at 4:44 AM, Mark Little <nmcl2001@...> wrote:
>> Hi Bob! It's been a while :-)
>
> Hasn't it, though? And isn't this about where we left off lo those
> many years ago?

I don't think so.

> (By the way, I'm not actually pushing BTP here, only the
> provisional-final model for RESTful transactions, which I think could
> be a lot simpler than BTP.

No comment ;-)

> And I do think it is not only possible but
> sometimes necessary to do something that would look a lot like
> transactions in a RESTful environment.)

+1

>> The compensating transaction model in the reference Bill sent round
>> is what you're looking for I think (cf atoms in BTP).
>
> I don't see much detail about compensation in the reference Bill sent,
> unless I missed something.

My bad for not checking. We gave a presentation on this last week at
JavaOne, but the wiki page doesn't seem to have been updated. I'll check
what's up and get back to you.

> It says:
> <excerpt>
> The two proposals are:
> 1. classic transactions obeying ACID properties;
> 2. compensation based transactions avoiding the need to lock
> resources for extended periods of time.
>
> Approach 1 is discussed in depth whilst the second will be covered in
> a subsequent wiki.
> </excerpt>
>
> My understanding of compensation is that it is a do-undo model, where
> the participants actually do the work in Phase 1, and then undo it in
> Phase 2 if the transaction aborts.
>
> Provisional-Final is a do-provisionally, then do-finally model, where
> the participants do the work provisionally in Phase 1, and then
> finalize it (or cancel it) in Phase 2.
>
> In a protocol sense, I think the decision of which of those approaches
> to use could be left up to the participants, but if we're talking Java
> implementation details, we probably need to make the approaches
> explicit.

It does have an impact on the implementation for sure, especially in the
case of recovery.

> I'm familiar with problems in compensation from working in fast-paced
> manufacturing environments. If an order is accepted, work begins, or
> goods are shipped. I've seen it happen in EDI environments with no
> transactional controls, and it's expensive to undo. Much better to
> mark it provisional and only start work once the transaction commits.

Ah, here we go again ;-) Just remember: one size does not fit all ;-)

Mark.
Yes, but (and this is where we made mistakes in our BTP effort) ONE SIZE
DOESN'T FIT ALL USE CASES. Let's make sure we can support multiple
protocols if that's what is required. It'll save us all a lot of smoke and
fire discussions.

Mark.

On 7 Jun 2009, at 15:20, Bob Haugen wrote:

> On Sun, Jun 7, 2009 at 8:30 AM, mike amundsen <mamund@...> wrote:
>> I've found that, when the operation may be long-running, the number of
>> resources more than a few, and/or the resources are kept within
>> multiple namespaces, the "do/undo" model is preferable.
>
> What if you can't undo, or if undoing is expensive? Then the
> provisional-final model is better.
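The do/undo versus provisional-final distinction in the exchange above can be sketched as code. In this toy (Python; all class and method names are invented), a coordinator speaks one two-phase interface while one participant realises it as do/undo (compensation) and another as provisional/final; this is roughly Bob's point that the coordinator-participant protocol can stay uniform while implementations differ.

```python
class CompensatingParticipant:
    """Does the real work up front; undoes it if the outcome is abort."""
    def __init__(self):
        self.shipped = False

    def phase_1(self):
        self.shipped = True        # goods go out the door immediately

    def phase_2(self, commit):
        if not commit:
            self.shipped = False   # the expensive undo Bob worries about

class ProvisionalParticipant:
    """Marks the work provisional; only finalises on commit."""
    def __init__(self):
        self.state = "none"

    def phase_1(self):
        self.state = "provisional"  # reserve, don't ship yet

    def phase_2(self, commit):
        self.state = "final" if commit else "cancelled"

def coordinator(participants, commit):
    # the uniform protocol: the same two signals to every participant
    for p in participants:
        p.phase_1()
    for p in participants:
        p.phase_2(commit)

a, b = CompensatingParticipant(), ProvisionalParticipant()
coordinator([a, b], commit=True)
print(a.shipped, b.state)   # -> True final
```

Mark's counterpoint (that recovery and presumed-abort/commit assumptions leak through this interface) is not modelled here.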
A lot of the standardising of this work started life in the OMG (cf.
Additional Structuring Mechanisms for the OTS). We found pretty much the
same, i.e., that in some cases you really need protocol X, whereas in
others it's Y. There's a very good reason that there is a plethora of
Extended Transaction models, and it's not just because it helps people
publish papers ;-) (OK, that is sometimes the reason!)

I don't want to needlessly waste time/effort here on the history of
transactions (ACID and extended) or their evolution through standards
(e.g., OASIS BTP, WS-CAF and WS-TX), but nothing that's happened over the
last 40 years in this space indicates that there's some Uber Protocol out
there that we're all missing, and that if only we could find it it would
allow us to have a single implementation.

Mark.

On 7 Jun 2009, at 15:28, mike amundsen wrote:

> I use Sagas to model long-running operations that "enlist" multiple
> completions in a single unit. In other words, a client may send a
> representation to the server and the server, in turn, engages in a
> number of resource interactions (usually creating resources along the
> way) in order to complete the work. If one of the interactions cannot
> be completed it may mean previous interactions need to be 'rolled back'
> or canceled.
>
> The classic case I use is modeling order placement and fulfillment. For
> example, a client may assemble a representation of an online order and
> send it to a server. That server may then need to create an "order"
> resource, a "stock" resource to debit stock, a "shipping" resource to
> schedule shipping, and a "payment" resource to cover the costs of the
> work. These steps might happen in parallel and might even involve other
> servers.
>
> I prefer using the Saga model since it is an "optimistic" pattern and I
> find that easier to model over HTTP. On the more pragmatic side, I can
> model the initial interaction set w/o employing the details of the saga
> (implementing either 'forward compensation' or 'backward compensation'
> steps). I can then add the compensation work later in the
> implementation process (sometimes weeks or months!) without much
> disruption to clients or proxies, etc.
>
> mca.
> http://amundsen.com/blog/
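A sketch of the saga pattern as Mike describes it: each step does some work (standing in for a resource interaction) and carries a paired compensation to run, newest first, if a later step fails. The resource names echo his order/stock/payment example; the code itself is invented for illustration.

```python
def run_saga(steps):
    """steps: list of (do, undo) callables. On failure, run the undo
    ('backward compensation') of every step already completed, in
    reverse order, and report that the saga was compensated."""
    done = []
    for do, undo in steps:
        try:
            do()
        except Exception:
            for undo_prev in reversed(done):
                undo_prev()
            return "compensated"
        done.append(undo)
    return "completed"

log = []

def refuse_payment():          # the step that fails part-way through
    raise RuntimeError("payment refused")

steps = [
    (lambda: log.append("POST /orders"), lambda: log.append("DELETE /orders/1")),
    (lambda: log.append("POST /stock"),  lambda: log.append("DELETE /stock/1")),
    (refuse_payment,                     lambda: log.append("DELETE /payments/1")),
]
print(run_saga(steps), log)
```

Mike's point about pragmatism shows up here too: the `undo` column can start out empty and be filled in weeks later without changing the shape of the interactions.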
On Tue, Jun 9, 2009 at 8:13 AM, Mark Little <nmcl2001@...> wrote:

> On 7 Jun 2009, at 13:29, Bob Haugen wrote:
>> ...isn't this about where we left off lo those many years ago?
>
> I don't think so.

See next exchange of same views as before...but maybe we can make progress
this time?

>> I'm familiar with problems in compensation from working in fast-paced
>> manufacturing environments. If an order is accepted, work begins, or
>> goods are shipped. I've seen it happen in EDI environments with no
>> transactional controls, and it's expensive to undo. Much better to
>> mark it provisional and only start work once the transaction commits.
>
> Ah, here we go again ;-) Just remember: one size does not fit all ;-)

My opinion remains the same, but maybe I can explain it better now (or
not).

I think at the protocol level (the interactions between coordinator and
participants) one size can and will fit at least most. At the
implementation level, e.g. the internal workings of the participants, many
variations will happen.

For example, I think the same basic coordinator-participant interactions
can happen for compensation or provisional-final patterns. The participant
can handle the two phases differently. Different participants in the same
transaction can handle them differently.

But the value of keeping the interaction protocol uniform is the same as
the uniform interface constraint in ReST. I'm not saying it has to be that
way, nor that exceptions won't exist, just that a single simple RESTful
transaction protocol will get a lot more takeup than a group of battling
protocols.

Another place we disagree: I think that is partly why the earlier
transaction standardization efforts went into a hole and never found their
way out. Too many variations, too many arcane theoretical arguments (some
of which I participated in, and would like to get those months of my life
back...)
On 9 Jun 2009, at 14:27, Bob Haugen wrote:

> On Tue, Jun 9, 2009 at 8:13 AM, Mark Little <nmcl2001@...> wrote:
>> On 7 Jun 2009, at 13:29, Bob Haugen wrote:
>>> ...isn't this about where we left off lo those many years ago?
>>
>> I don't think so.
>
> See next exchange of same views as before...but maybe we can make
> progress this time?

Oh I hope so ;-)

>>> I'm familiar with problems in compensation from working in fast-paced
>>> manufacturing environments. If an order is accepted, work begins, or
>>> goods are shipped. I've seen it happen in EDI environments with no
>>> transactional controls, and it's expensive to undo. Much better to
>>> mark it provisional and only start work once the transaction commits.
>>
>> Ah, here we go again ;-) Just remember: one size does not fit all ;-)
>
> My opinion remains the same, but maybe I can explain it better now (or
> not).
>
> I think at the protocol level (the interactions between coordinator and
> participants) one size can and will fit at least most.

I think it's the definition of "most" that we will end up debating until
the cows come home. If we can agree that there will be a need for multiple
protocols then that's a good start.

> At the implementation level, e.g. the internal workings of the
> participants, many variations will happen.

Agreed. But mixing and matching implementations that obey different
protocols can cause problems, so the basic assumptions under which the
participant works, e.g., do/undo, provisional/final, would still need to
be available in meta-data, for instance.

> For example, I think the same basic coordinator-participant
> interactions can happen for compensation or provisional-final patterns.
> The participant can handle the two phases differently.

Only if the participant knows the "type" of transaction in which it has
been enlisted. For instance, enlisting a strictly ACID-based participant
in a compensating transaction is not necessarily the most efficient thing
to do.

> Different participants in the same transaction can handle them
> differently.

Did you ever take a look at the WS-CAF BP protocol :-) ? The genesis of
that is very close to what you are outlining, though of course it may not
have been as clearly articulated back then.

> But the value of keeping the interaction protocol uniform is the same
> as the uniform interface constraint in ReST.
>
> I'm not saying it has to be that way, nor that exceptions won't exist,
> just that a single simple RESTful transaction protocol will get a lot
> more takeup than a group of battling protocols.

They don't need to be "battling". I've always preferred the style of using
the right tool for the right job. If that tool is a Swiss army-knife then
great (as long as it doesn't become too bloated), as that also doesn't
preclude the addition of other tools in your armory.

We're going to try to develop this tx+REST approach entirely in the open,
so it'd be good to have you participate. I think this also allows us to
remove one of the other areas where we failed in BTP (and elsewhere):
getting real users involved! The barrier to entry for OASIS, W3C and OMG
is/was far too high.

> Another place we disagree: I think that is partly why the earlier
> transaction standardization efforts went into a hole and never found
> their way out. Too many variations, too many arcane theoretical
> arguments (some of which I participated in, and would like to get those
> months of my life back...)

"months"? Did you come in towards the end ;-) ? I measure it in years!

Mark.
On Tue, Jun 9, 2009 at 9:00 AM, Mark Little <nmcl2001@...> wrote:

> On 9 Jun 2009, at 14:27, Bob Haugen wrote:
>
> I think it's the definition of "most" that we will end up debating
> until the cows come home. If we can agree that there will be a need for
> multiple protocols then that's a good start.

For the most part, I don't. I think one generic protocol will be most
useful, although I can believe that outliers will happen.

>> At the implementation level, e.g. the internal workings of the
>> participants, many variations will happen.
>
> Agreed. But mixing and matching implementations that obey different
> protocols can cause problems, so the basic assumptions under which the
> participant works, e.g., do/undo, provisional/final, would still need
> to be available in meta-data, for instance.

Why does the coordinator need to care whether a given participant is using
do/undo or provisional/final? And it certainly is not any business of any
other participant's.

>> For example, I think the same basic coordinator-participant
>> interactions can happen for compensation or provisional-final
>> patterns. The participant can handle the two phases differently.
>
> Only if the participant knows the "type" of transaction in which it has
> been enlisted. For instance, enlisting a strictly ACID-based
> participant in a compensating transaction is not necessarily the most
> efficient thing to do.

I don't think ACID rules work on the open Web, so I would leave those
participants out of a RESTful transaction protocol by design.

> Did you ever take a look at the WS-CAF BP protocol :-) ? The genesis of
> that is very close to what you are outlining, though of course it may
> not have been as clearly articulated back then.

I did, fairly deeply, back in the day, but remember it as more complicated
than it should be for REST, I think. Could look again if it comes back to
life in RESTful guise.

BTP is too complicated for REST, too. I liked Peter Furniss's dirt-simple
abstract protocol sketch; will try to find it again and post it.

> "months"? Did you come in towards the end ;-) ? I measure it in years!

2001-2002, but I did not spend all of those two years in arguments.
If you find yourself in need of a distributed transaction protocol, then
how can you possibly say that your architecture is based on REST? I simply
cannot see how you can get from one situation (of using RESTful
application state on the client and hypermedia to determine all state
transitions) to the next situation of needing distributed agreement of
transaction semantics wherein the client has to tell the server how to
manage its own resources.

Most likely, the system you are thinking of is just doing CRUD operations
on multiple servers. Each of those actions might be based on a RESTful
architecture. When all of them are done and the client makes a final
request to approve or cancel the changes, it might be interacting with a
TM-style manager resource that tells all of the other services to commit
the associated changes to a more persistent or public set of resources,
just like a staging server might be used to prepare content prior to
publication. The sum of all those actions might be equivalent to an ACID
transaction. None of that matters to the REST client.

As far as the client is concerned, it is only interacting with one
resource at a time even when those interactions overlap asynchronously.
There is no "transaction protocol" aside from whatever agreement mechanism
is implemented in the back-end in accordance with the resource semantics
(in a separate architecture that we don't care about here). There is no
commit protocol other than the presentation of various options to the
client at any given point in the application. There is no need for
client-side agreement with the transaction protocol because the client is
only capable of choosing from the choices provided by the server.

If I am missing something, please let me know, but for now I consider
"rest transaction" to be an oxymoron.

....Roy
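Roy's staging-server picture can be rendered as a toy: the client performs ordinary requests against staged resources, then one final request to a "manager" resource makes the staged changes public. The agreement mechanism is entirely back-end; the client only knows URIs. All paths and function names here are invented.

```python
staging = {}     # back-end staging area (an implementation detail)
published = {}   # the public resources

def put(uri, body):
    """Ordinary REST interaction: store a staged representation."""
    staging[uri] = body
    return 200

def post_commit(manager_uri):
    """The 'approve' request: the manager resource tells the back-end to
    publish everything staged. The client sees only this URI, never the
    commit mechanism behind it."""
    for uri, body in staging.items():
        published[uri.replace("/staging", "", 1)] = body
    staging.clear()
    return 200

put("/staging/orders/1", {"item": "book"})
put("/staging/orders/2", {"item": "pen"})
post_commit("/manager/commit")
print(sorted(published))   # -> ['/orders/1', '/orders/2']
```

From the client's side this is three unrelated resource interactions; the "transaction" is invisible, which is Roy's point.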
On 9 Jun 2009, at 15:21, Bob Haugen wrote: > On Tue, Jun 9, 2009 at 9:00 AM, Mark Little<nmcl2001@...> > wrote: >> >> On 9 Jun 2009, at 14:27, Bob Haugen wrote: > >> I think it's the definition of "most" that we will end up debating >> until the >> cows come home. If we can agree that there will be a need for >> multiple >> protocols then that's a good start. > > For the most part, I don't. I think one generic protocol will be most > useful, although I can believe that outliers will happen. > >>> At the implementation level, e.g. the internal workings of the >>> participants, many variations will happen. >> >> Agreed. But mixing and matching implementations that obey different >> protocols can cause problems, so the basic assumptions under which >> the >> participant works, e.g., do/undo, provisional/final, would still >> need to be >> available in meta-data for instance. > > Why does the coordinator need to care whether a given participant is > using do/undo or provisional/final? If you believe that different signals mean different things even if they have the same name then it does matter. Failure of prepare is different to failure of "do", especially during recovery if you have a presumed-abort protocol. Flick that over to presumed-commit or presumed-nothing, then it really does make a difference. If I implement a participant under one set of assumptions and mix that with others, then the coordinator really does need to know (so does the participant to be honest.) > > > And it certainly is not any business of any other participant's. Different participants don't need to know about each other and that's not what I meant. But the contract between the coordinator and participant should be clear. > > >>> For example, I think the same basic coordinator-participant >>> interactions can happen for compensation or provisional-final >>> patterns. The participant can handle the two phases differently. 
>> >> Only if the participant knows the "type" of transaction in which it >> has been >> enlisted. For instance, enlisting a strictly ACID-based participant >> in a >> compensating transaction is not necessarily the most efficient >> thing to do. > > I don't think ACID rules work on the open Web, so I would leave those > participants out of a RESTful transaction protocol by design. Last time I looked HTTP was being used quite a bit in the corporate firewall. Now it can be argued that maybe we should leave those interactions to WS-*, but I know that a few people would really like to see that work done over REST too. > > >> Did you ever take a look at the WS-CAF BP protocol :-) ? The >> genesis of that >> is very close to what you are outlining, though of course it may >> not have >> been as clearly articulated back then. > > I did, fairly deeply, back in the day, but remember it as more > complicated than it should be for REST, I think. Could look again if > it comes back to life in RESTful guise. I blame SOAP for that ;-) > > > BTP is too complicated for REST, too. Definitely. As are a host of other protocols. > > > I liked Peter Furniss's dirt simple abstract protocol sketch, will try > to find it again and post it. I have it too somewhere. > > >> "months"? Did you come in towards the end ;-) ? I measure it in >> years! > > 2001-2002, but I did not spend all of those two years in arguments. Well I'm sure they won't be repeated. Anyway, got to run to an 8 hour meeting :-( Mark.
On Tue, Jun 9, 2009 at 10:11 AM, Roy T. Fielding<fielding@...> wrote: > If you find yourself in need of a distributed transaction > protocol, then how can you possibly say that your architecture > is based on REST? I simply cannot see how you can get from one > situation (of using RESTful application state on the client and > hypermedia to determine all state transitions) to the next > situation of needing distributed agreement of transaction semantics > wherein the client has to tell the server how to manage its own > resources. What I have in mind is a coordinator (a client) keeping track of the state of a transaction, and using normal REST interactions with participating server-managed resources to coordinate an agreement. I have not worked out all the details, but does that seem feasible to you? If so, I will try to work out more details and post them here. If not, I will still need to do something like that, but won't call it REST, and won't bother this list with the topic again. > Most likely, the system you are thinking of is just doing > CRUD operations on multiple servers. Each of those actions > might be based on a RESTful architecture. When all of them > are done and the client makes a final request to approve or > cancel the changes, it might be interacting with a TM-style > manager resource that tells all of the other services to commit > the associated changes to a more persistent or public set of > resources, just like a staging server might be used to prepare > content prior to publication. The sum of all those actions > might be equivalent to an ACID transaction. Won't follow all the ACID rules. In particular, won't be Isolated, and may or may not be Atomic.
On Tue, Jun 9, 2009 at 10:11 AM, Roy T. Fielding<fielding@...> wrote: > If you find yourself in need of a distributed transaction > protocol, then how can you possibly say that your architecture > is based on REST? I P.S. In this previous message to this group: http://tech.groups.yahoo.com/group/rest-discuss/message/4150 You appeared to be saying something different: "This topic has come up a few times on webdav and http-wg lists. The transaction is a resource, but the relationship between it and the requested resource can be accomplished via a header field that defines each request as a sequenced resource within a hierarchical transaction. In other words, ask the server for a transaction begin, send the URI it gives the client in each request as a header field with a request number appended to it, and finally abort or commit the transaction as a final request to the transaction's URI. That's basically how I did it for the still-vapor waka protocol." Do I misunderstand? By the way, that's not the design I have in mind, but I'd like to understand it better. The use cases I have in mind all involve B2B automated order-fulfillment and related scenarios.
Roy, Apologies for picking just one fragment from your reply, but it seems to me that the question is whether the way this On 09.06.2009, at 17:11, Roy T. Fielding wrote: > a TM-style > manager resource that tells all of the other services to commit > the associated changes to a more persistent or public set of > resources is done is worth being standardized. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Jun 9, 2009, at 6:18 PM, Bob Haugen wrote: > On Tue, Jun 9, 2009 at 10:11 AM, Roy T. Fielding<fielding@...> > wrote: > > If you find yourself in need of a distributed transaction > > protocol, then how can you possibly say that your architecture > > is based on REST? I > > P.S. In this previous message to this group: > http://tech.groups.yahoo.com/group/rest-discuss/message/4150 > You appeared to be saying something different: > > "This topic has come up a few times on webdav and http-wg lists. > The transaction is a resource, but the relationship between it and > the requested resource can be accomplished via a header field that > defines each request as a sequenced resource within a hierarchical > transaction. In other words, ask the server for a transaction begin, > send the URI it gives the client in each request as a header field > with a request number appended to it, and finally abort or commit > the transaction as a final request to the transaction's URI. That's > basically how I did it for the still-vapor waka protocol." > > Do I misunderstand? > No, I just found it to be useless for REST. It might still be needed for non-RESTful use of the same protocols. I tried out the above and then simplified it out of the protocol. My thinking is that the above exchange is equivalent to the server providing an independent set of resources to the client (i.e., any state-changing actions by the client are automatically isolated by being in a client-specific workspace) and then the commit is just another button on a web page (or the equivalent typed element/relation in your favorite media type). The problem is therefore teaching the client which action to select, not what transaction protocol to be aware of. > By the way, that's not the design I have in mind, but I'd like to > understand it better. > So would I. > The use cases I have in mind all involve B2B automated > order-fulfillment and related scenarios. > Yep, multiparty contract agreement? 
I know of many scenarios in which the resources depend on some sort of transaction-like semantics on the back-end because they are dealing with multiple parties. I don't know of any where the client needs to be aware of it. The same is generally true of ACID database transactions -- all of the work is done on the servers, with the client's awareness limited to the commit/cancel decision (or fail). Where REST differs is that the client can't make arbitrary changes to the database, so there is no need for a REST client to be aware of the begin-transaction semantic. At least that's my theory -- it could be wrong, ....Roy
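[Editor's sketch: Roy's "independent set of resources" idea above can be illustrated in a few lines of Python. All names here are invented for the example; a dict stands in for the public resource state, and commit is just one more action that publishes the client's workspace.]

```python
# Illustrative only: per-client workspaces make state-changing actions
# automatically isolated; "commit" is just another action the client
# can select, not a transaction protocol.

public = {"doc": "v1"}
workspaces = {}

def open_workspace(client):
    """Hand the client an isolated copy of the public state."""
    workspaces[client] = dict(public)
    return workspaces[client]

def commit(client):
    """'Just another button': publish the client's workspace."""
    public.update(workspaces.pop(client))

ws = open_workspace("alice")
ws["doc"] = "v2"          # isolated: public still holds "v1" here
commit("alice")           # now the change is visible to everyone
```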
On Jun 9, 2009, at 6:23 PM, Stefan Tilkov wrote: > Apologies for picking just one fragment from your reply, but It > seems to me that the question is whether the way this > > On 09.06.2009, at 17:11, Roy T. Fielding wrote: > >> a TM-style >> manager resource that tells all of the other services to commit >> the associated changes to a more persistent or public set of >> resources > > is done is worth being standardized. Maybe, but it is behind the resource interface. It is some other architecture which can (and should) be allowed to change over time without impacting the RESTful part of the system. [It might even be another RESTful architecture behind the interface -- the point is we don't know and cannot make any client assumptions based on it.] ....Roy
Here's a very sketchy version of what I had in mind: The scenario is an etailer getting bids, and then placing an order with the lowest bidder. The etailer POSTs a Request For Quotation (Amazon called these Request For Commitment in one meeting) to the appropriate URI of a few suppliers. The RFQ contains a URI for the suppliers to POST a Quote in response. Each Quote contains a URI (or two) for the etailer to follow up with an order or a rejection. The etailer then POSTs an order (still in a provisional state) to the successful bidder's URI. The order contains a URI for the supplier to accept the order, and maybe a URI for the supplier to decline the order if something has happened in the meantime that has made fulfillment impossible. When the etailer gets an acceptance from the supplier, the etailer might send rejection notices to the failed bidders. I'm sure Mark Little could suggest several refinements, but this is just a sketch for discussion purposes. It's a 2-phase commit agreement scenario, but not ACID: it's not Atomic, and the individual resources are not Isolated. Both the etailer and the suppliers play both client and server roles. Each message contains a URI or two for responses, where the recipient of a message will (possibly in a separate process) POST a response. My questions for Roy and this group: 1. Is this scenario RESTful? 2. If not, could it be made RESTful with some changes? 3. What else is wrong with it? Question for Mark: what's wrong with it from a transaction protocol viewpoint? Thanks, Bob Haugen
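[Editor's sketch: the RFQ/quote/order exchange above, simulated as plain Python. The class and field names are invented; in the real exchange each step would be a POST of a document to the URI carried in the previous message.]

```python
# Toy model of Bob's scenario: order from the lowest bidder, send
# rejection notices to the failed bidders. "sent" records what would
# be POSTed where.

from dataclasses import dataclass, field

@dataclass
class Quote:
    supplier: str
    price: float
    order_uri: str       # follow up with an order here...
    rejection_uri: str   # ...or a rejection here

@dataclass
class Etailer:
    sent: list = field(default_factory=list)   # (kind, uri) pairs

    def place_order(self, quotes):
        """Pick the lowest bid; notify the losers."""
        winner = min(quotes, key=lambda q: q.price)
        self.sent.append(("order", winner.order_uri))
        for q in quotes:
            if q is not winner:
                self.sent.append(("rejection", q.rejection_uri))
        return winner

quotes = [
    Quote("acme", 12.0, "http://acme.example/orders", "http://acme.example/rejections"),
    Quote("globex", 9.5, "http://globex.example/orders", "http://globex.example/rejections"),
]
etailer = Etailer()
winner = etailer.place_order(quotes)
```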
We wrote about this in the second edition of Principles of Transaction Processing, due out later this month.
We took our cue from "RESTful Web Services" and along with some help from Steve Vinoski, described modeling a transaction as a resource.
The best way to think about it is as a modern form of a pseudo-conversational transaction. After retrieving the data you store a local copy, release all locks (typically using a commit after a READ operation) and send the data to the client. This first came up for me as the way you protect yourself, during an update operation, against the chance that the user decides to go get a cup of coffee while the locks are still held on the data, waiting for the user to change something for the update.
Anyway, the user then updates the data without the server having any knowledge whatsoever of what's going on in the client. When the server receives the updated data, the application reads the data again and compares it to the local copy. If anything has changed, it's an error. If not, go ahead and perform the update.
Obviously there are some holes in this, and other techniques that would also work, such as inserting a "manual" lock on the record (which is something I used to do before the databases I was using supported transactions) and checking that before performing the update (this would tell you whether someone else had updated the record while the user was "thinking").
But the point is that the problem is the resource's responsibility, not the transaction protocol's. In other words, there's no transaction protocol here, just a responsibility for the resource serving as the transaction to ensure things are correct.
Eric
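[Editor's sketch: the compare-before-update check Eric describes, as a toy in-memory version. The dict and names are stand-ins, not a real API.]

```python
# Re-read the record, compare it with the copy taken at the original
# read, and only apply the update if nothing changed while the user
# was "thinking". If anything changed, it's an error.

class ConflictError(Exception):
    pass

db = {"order-1": {"status": "draft", "qty": 2}}

def optimistic_update(key, original_copy, new_value):
    if db[key] != original_copy:   # someone else updated it: error
        raise ConflictError(key)
    db[key] = new_value            # unchanged since the read: apply
    return new_value

snapshot = dict(db["order-1"])     # what the client read initially
optimistic_update("order-1", snapshot, {"status": "placed", "qty": 2})
```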
________________________________
From: Bob Haugen <bob.haugen@...>
To: Roy T. Fielding <fielding@...>
Cc: Stefan Tilkov <stefan.tilkov@...>; REST-Discuss Discussion Group <rest-discuss@yahoogroups.com>
Sent: Tuesday, June 9, 2009 3:10:03 PM
Subject: Re: [rest-discuss] rest transactions
Here's a very sketchy version of what I had in mind:
The scenario is an etailer getting bids, and then placing an order
with the lowest bidder.
The etailer POSTs a Request For Quotation (Amazon called these Request
For Commitment in one meeting) to the appropriate URI of a few
suppliers. The RFQ contains a URI for the suppliers to POST a Quote
in response. Each Quote contains a URI (or two) for the etailer to
follow up with an order or a rejection.
The etailer then POSTs an order (still in a provisional state) to the
successful bidder's URI. The order contains a URI for the supplier to
accept the order, and maybe a URI for the supplier to decline the
order if something has happened in the meantime that has made
fulfillment impossible.
When the etailer gets an acceptance from the supplier, the etailer
might send rejection notices to the failed bidders.
I'm sure Mark Little could suggest several refinements, but this is
just a sketch for discussion purposes.
It's a 2-phase commit agreement scenario, but not ACID: it's not
Atomic, and the individual resources are not Isolated.
Both the etailer and the suppliers play both client and server roles.
Each message contains a URI or two for responses, where the recipient
of a message will (possibly in a separate process) POST a response.
My questions for Roy and this group:
1. Is this scenario RESTful?
2. If not, could it be made RESTful with some changes?
3. What else is wrong with it?
Question for Mark: what's wrong with it from a transaction protocol viewpoint?
Thanks,
Bob Haugen
Roy T. Fielding wrote: > > > The use cases I have in mind all involve B2B automated > > order-fulfillment and related scenarios. > > > > Yep, multiparty contract agreement? I know of many scenarios > in which the resources depend on some sort of transaction-like > semantics on the back-end because they are dealing with multiple > parties. I don't know of any where the client needs to be aware > of it. > > The same is generally true of ACID database transactions -- > all of the work is done on the servers, with the client's > awareness limited to the commit/cancel decision (or fail). > Where REST differs is that the client can't make arbitrary > changes to the database, so there is no need for a REST > client to be aware of the begin-transaction semantic. > This was what I meant in my original post: that defining and standardizing a few link relationships might simplify things a lot, both at the client/resource level and in resource/TM interactions. Has anybody ever looked into applying HATEOAS to a transaction protocol? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Instead of looking for a "transaction protocol" over HTTP, consider a resource that is acting as a coordinator. The client can then ask that coordinator resource to manage changes across those other resources atomically or in whatever manner makes sense for the application. As far as the client is concerned, it is not aware of any distributed transaction protocol. Most attempts at transactions over HTTP mistakenly try to make the client get involved in a stateful protocol over HTTP. You can, of course, propose a few link relations if you think it helps clients discover those links. I can't help but ask how many applications are going to rely on distributed transactions based on pessimistic locking over the web. Is this just a solution looking for a problem? Subbu On Tue, Jun 9, 2009 at 2:03 PM, Bill Burke <bburke@...> wrote: > > > > > Roy T. Fielding wrote: > > > > > The use cases I have in mind all involve B2B automated > > > order-fulfillment and related scenarios. > > > > > > > Yep, multiparty contract agreement? I know of many scenarios > > in which the resources depend on some sort of transaction-like > > semantics on the back-end because they are dealing with multiple > > parties. I don't know of any where the client needs to be aware > > of it. > > > > The same is generally true of ACID database transactions -- > > all of the work is done on the servers, with the client's > > awareness limited to the commit/cancel decision (or fail). > > Where REST differs is that the client can't make arbitrary > > changes to the database, so there is no need for a REST > > client to be aware of the begin-transaction semantic. > > > > This was what I meant in my original post that defining and > standardizing a few link relationships might simplify things a lot, both > at the client resource level and resource/TM interactions. Has anybody > ever looked into applying HATEOAS to a transaction protocol? 
> > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > >
Subbu Allamaraju wrote: > Instead of looking for a "transaction protocol" over HTTP, consider a > resource that is acting as a coordinator. The client can then ask that > coordinator resource do manage changes across those other resources > atomically or in whatever manner makes sense for the application. As far > as the client is concerned, it is not aware of any distributed > transaction protocol. Most attempts at transactions over HTTP mistakenly > try to make the client get involved in a stateful protocol over HTTP. > > You can of course, propose a few link relations if you think it helps > clients discover those links. > > I can't help but ask how many applications are going to rely on a > distributed transactions based on pessimistic locking over the web. Is > this just a solution looking for a problem? > I think most would want to design towards a compensation model for transactions (is this what you mean by "saga", Bob?) rather than a 2pc-based one, even in non-RESTful distributed applications. Still, we do have customers that want distributed 2pc. (Whether they actually *need* it or not is another story, but you can't always argue with a customer and, more importantly, a potential customer.) You don't think a resource-oriented model is preferable even if 2pc is required? With my original post, I wasn't looking for somebody to tell me whether or not transactions belong in REST. I've already heard the arguments over and over again, and I already know what the answer is going to be from people like Roy and yourself. I'm more interested in feedback on the API Michael Musgrave wrote for JBoss's Transaction Manager. I want to see it revised and reworked for a compensation model as well. Then finally I'd like to sow the seeds to put together an RFC for it. It just seems to me there is a lot of opportunity to take a new look at these older standards through resource-oriented glasses. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Hello all, During the last year or so we have been working on a RESTful transaction model at the University of Surrey. The goal was to have ACID transactions while carefully maintaining all the constraints of REST, including HATEOAS, oxymoronic as that may sound. The model, its motivations and some examples are described at the following location: http://www.opadoi.gr/RETROv0.1.pdf We intend to look into long-running transactions in the future but felt it better to focus on an ACID-based proof-of-concept before proceeding further. I was hoping to be able to polish it a little more before publishing to this list, but given the very interesting discussion, I think it is better to contribute a draft version now and incorporate feedback as it comes in. I hope the work is interesting to the list, and all discussion is welcome. Regards, Alexandros
On Tue, Jun 9, 2009 at 5:44 PM, Subbu Allamaraju<subbu@...> wrote: > I can't help but ask how many applications are going to rely on a > distributed transactions based on pessimistic locking over the web. Is this > just a solution looking for a problem? Not locking. That does not work over the open Web. But the use case I know of is automated B2B business exchanges. Lots of 'em. Amazon does tons per day, to fulfill your orders.
On Tue, Jun 9, 2009 at 5:59 PM, Bill Burke<bburke@...> wrote: > I think most would want to design towards a compensation model for > transactions (is this what you mean by "saga" Bob?) rather than a 2pc based > one even in non-RESTful distributed applications. Still, we do have > customers that want distributed 2pc. Compensation == 2pc. The second phase is the compensation. Don't need a 2nd phase if you don't need to undo. Which is the good part of compensation. The bad part is lots of times you can't undo. And somebody else mentioned "saga".
Oh, and one more bad thing about compensation: in a long-running interaction, how do you know when it's complete? Could it still be compensated tomorrow?
Bob: The "undo" link can have an absolute expiration value as a param, encoded as a pointer, etc. If the request for "undo" is not presented to the server before the absolute exp date-time, it is ignored (or a 4xx error is returned). mca http://amundsen.com/blog/ On Tue, Jun 9, 2009 at 22:21, Bob Haugen <bob.haugen@...> wrote: > Oh, and one more bad thing about compensation: in a long-running > interaction, how do you know when it's complete? Could it still be > compensated tomorrow? > > > ------------------------------------ > > Yahoo! Groups Links > > > >
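[Editor's sketch: the expiry rule Mike describes, as a server might apply it. The ISO 8601 encoding of the expiration is an assumption for illustration; the real link could encode it however the server likes.]

```python
# Honour an "undo" request only if it is presented before the absolute
# expiration date-time carried in the link; otherwise ignore it (or
# return a 4xx error).

from datetime import datetime, timezone

def undo_allowed(expires_at_iso, now=None):
    """True if the undo arrives before the absolute expiration."""
    now = now or datetime.now(timezone.utc)
    return now < datetime.fromisoformat(expires_at_iso)
```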
On Jun 9, 2009, at 6:46 PM, Bob Haugen wrote: > On Tue, Jun 9, 2009 at 5:44 PM, Subbu Allamaraju<subbu@...> > wrote: >> I can't help but ask how many applications are going to rely on a >> distributed transactions based on pessimistic locking over the web. >> Is this >> just a solution looking for a problem? > > Not locking. That does not work over the open Web. > > But the use case I know of is automated B2B business exchanges. Lots > of 'em. Amazon does tons per day, to fulfill your orders. Of course, there are lots of such use cases, and are implemented as operations on resources. Resources define undo/compensation semantics without needing a new protocol. Subbu
One possible way to handle Saga-like transactions for HTTP w/ hypermedia
would be to employ a transaction media type that clients could request for
any resource. Clients can advertise support for transactions
(Accept: application/transaction+xml) and the server can determine if that is
acceptable.
Servers can act as "transaction leaders" by enlisting other parties in the
operation (without bothering clients w/ the details) and displaying results
using the transaction representation including a "Fail URI" to cancel the
operation. If enlisted parties support the transaction media type, they can
also provide Fail URIs to be incorporated into the transaction
representation. Enlisted parties are free to start their own transactions if
they wish and act as "transaction leaders" for that nested operation.
Since the Saga pattern is optimistic, no "Commit" or "Success" URIs are
needed. The Fail URIs can be given an expiry value as a way to prevent
attempts to "undo" operations that are no longer "undo-able."
I've worked up some more detailed examples. If anyone is interested I can
share a link.
Below is a possible state of a transaction representation.
# REQUEST
GET /orders/1
Host: example.org
Accept:application/transaction+xml
# RESPONSE
200 OK
Content-Type:application/transaction+xml
Content-Length:xxx
<transaction>
<link rel="self" type="application/transaction+xml" href="
http://example.org/orders/1" />
<link rel="alternate" type="application/order+xml" href="
http://example.org/orders/1" />
<status>202 Accepted</status>
<link rel="fail" href="
http://example.org/orders/1;fail;{utc-absolute-expiration};
inventory={uri-of-inventory-fail}" />
<enlisted-parties>
<party>
<name>inventory request</name>
<status>201 Created</status>
</party>
<party>
<name>shipping request</name>
<status>not-started</status>
</party>
<party>
<name>payment request</name>
<status>not-started</status>
</party>
</enlisted-parties>
</transaction>
mca
http://amundsen.com/blog/
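[Editor's sketch: one way a client might consume a representation like the one above using only the standard library, selecting by link relation and element name rather than by URI structure. The document below is a trimmed, hypothetical instance of the proposed media type.]

```python
# Pull the "fail" link and per-party statuses out of a transaction
# representation; the client never needs to understand the URI's
# internal structure, only the "fail" relation.

import xml.etree.ElementTree as ET

doc = """<transaction>
  <link rel="self" type="application/transaction+xml" href="http://example.org/orders/1" />
  <status>202 Accepted</status>
  <link rel="fail" href="http://example.org/orders/1;fail" />
  <enlisted-parties>
    <party><name>inventory request</name><status>201 Created</status></party>
    <party><name>shipping request</name><status>not-started</status></party>
  </enlisted-parties>
</transaction>"""

root = ET.fromstring(doc)
fail_uri = next(link.get("href") for link in root.findall("link")
                if link.get("rel") == "fail")
statuses = {p.findtext("name"): p.findtext("status")
            for p in root.findall("./enlisted-parties/party")}
```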
On Tue, Jun 9, 2009 at 22:43, Subbu Allamaraju <subbu@...> wrote:
>
> On Jun 9, 2009, at 6:46 PM, Bob Haugen wrote:
>
> > On Tue, Jun 9, 2009 at 5:44 PM, Subbu Allamaraju<subbu@...>
> > wrote:
> >> I can't help but ask how many applications are going to rely on a
> >> distributed transactions based on pessimistic locking over the web.
> >> Is this
> >> just a solution looking for a problem?
> >
> > Not locking. That does not work over the open Web.
> >
> > But the use case I know of is automated B2B business exchanges. Lots
> > of 'em. Amazon does tons per day, to fulfill your orders.
>
> Of course, there are lots of such use cases, and are implemented as
> operations on resources. Resources define undo/compensation semantics
> without needing a new protocol.
>
> Subbu
>
>
On 9 Jun 2009, at 20:27, Eric Newcomer wrote: > > > We wrote about this in the second edition of Principles of > Transaction Processing, due out later this month. > > We took our cue from "RESTful Web Services" and along with some help > from Steve Vinoski, described modeling a transaction as a resource. Yes, pretty similar to what we did prior to BTP (sort of the precursor approach to WS-Context). Mark. > > > The best way to think about it is as a modern form of a psuedo- > conversational transaction. After retrieving the data you store a > local copy, release all locks (typically using a commit after a READ > operation) and send the data to the client. The way this first came > up to me was as the way you protect yourself during an update > operation from the chance the user may decide to go get a cup of > coffee while the locks are still held on the data, waiting for the > user to change something for the update. > > Anyway, the user then updates the data without the server having any > knowledge whatsoever of what's going on in the client. When the > server receives the updated data, the application reads the data > again and compares it to the local copy. If anything has changed, > it's an error. If not, go ahead and perform the update. > > Obviously there are some holes in this, and other techniques that > would also work, such as inserting a "manual" lock on the record > (which is something I used to do before the databases I was using > supported transactions) and checking that before performing the > update (this would tell you whether someone else had updated the > record while the user was "thinking"). > > But the point is that the problem is the resource's responsibility, > not the transaction protocol's. In other words, there's no > transaction protocol here, just a responsibility for the resource > serving as the transaction to ensure things are correct. > > Eric > > From: Bob Haugen <bob.haugen@...> > To: Roy T. 
Fielding <fielding@...> > Cc: Stefan Tilkov <stefan.tilkov@...>; REST-Discuss Discussion > Group <rest-discuss@yahoogroups.com> > Sent: Tuesday, June 9, 2009 3:10:03 PM > Subject: Re: [rest-discuss] rest transactions > > Here's a very sketchy version of what I had in mind: > > The scenario is an etailer getting bids, and then placing an order > with the lowest bidder. > > The etailer POSTs a Request For Quotation (Amazon called these Request > For Commitment in one meeting) to the appropriate URI of a few > suppliers. The RFQ contains a URI for the suppliers to POST a Quote > in response. Each Quote contains a URI (or two) for the etailer to > follow up with an order or a rejection. > > The etailer then POSTs an order (still in a provisional state) to the > successful bidder's URI. The order contains a URI for the supplier to > accept the order, and maybe a URI for the supplier to decline the > order if something has happened in the meantime that has made > fulfillment impossible. > > When the etailer gets an acceptance from the supplier, the etailer > might send rejection notices to the failed bidders. > > I'm sure Mark Little could suggest several refinements, but this is > just a sketch for discussion purposes. > > It's a 2-phase commit agreement scenario, but not ACID: it's not > Atomic, and the individual resources are not Isolated. > > Both the etailer and the suppliers play both client and server roles. > Each message contains a URI or two for responses, where the recipient > of a message will (possibly in a separate process) POST a response. > > My questions for Roy and this group: > > 1. Is this scenario RESTful? > > 2. If not, could it be made RESTful with some changes? > > 3. What else is wrong with it? > > Question for Mark: what's wrong with it from a transaction protocol > viewpoint? > > Thanks, > Bob Haugen > > > > >
Hi Alexandros. I'd definitely be interested in discussing this protocol to compare/contrast it with the one we've implemented. If you're interested then maybe we could take this to the JBossTS forums? Mark. On 10 Jun 2009, at 00:36, Alexandros Marinos wrote: > > > Hello all, > > During the last year or so we have been working on a RESTful > transaction model in the University of Surrey. The goal was to have > ACID Transactions, while carefully maintaining all the constraints > of REST, including HATEOAS, oxymoronical as that may sound. > > The model, its motivations and some examples are described in the > following location: > http://www.opadoi.gr/RETROv0.1.pdf > > We intend to look into long-running transactions in the future but > felt it better to focus on an ACID-based proof-of-concept before > proceeding further. > > I was hoping to be able to polish it a little bit more before > publishing to this list, but given the very interesting discussion, > I think it is better to contribute a draft version now and > incorporate feedback as it comes in. > > I hope the work is interesting to the list and all discussion is > welcomed. > > Regards, > Alexandros > > >
Mike is putting the extended transaction protocol on our (JBoss) wiki. If there's interest I'll post when it's available. Mark. On 9 Jun 2009, at 23:59, Bill Burke wrote: > With my original post, I wasn't looking for somebody to tell me > whether > or not transactions belong in REST. I've already heard the arguments > over and over again, and I already know what the answer is going to be > from people like Roy and yourself. I'm more interested in feedback on > the API Michael Musgrave for JBoss's Transaction Manager. I want to > see > it revised and reworked for a compensation model as well. Then > finally > I'd like to sow the seeds to put together some RFC together for it. > It > just seems to me there is a lot of opportunity to take a new look at > these older standards through resource-oriented glasses.
First let's ignore ACID transactions. We can have a separate
discussion about whether or not ACID makes sense in the context of
REST in the same way there have been long discussions about whether or
not it makes sense in the concept of SOA or Web Services.
Second I think we're in agreement concerning not talking about the
client having to be aware of anything to do with the transaction. The
act of "start" and "commit" (for want of some terms) is an implicit
act by the client visiting specific pages/traverses specific links.
Therefore what we're talking about here are extended transactions,
where the various ACID properties have been relaxed in a controlled
manner, e.g., no atomicity guarantees, no requirement for durability.
Although I think the scenario you outline is valid, here's another one
to throw out there (there are others). Let's consider a scenario where
a client visits a number of different web sites, let's call them A, B
and C. Now the act of visiting A, B and C cause state changes within
these services which may be reflected at the client in the form of
different pages/links to then traverse. In fact let's assume that the
act of visiting A then causes the client to go to B and that
subsequently causes the client to visit C. It's possible that if the
client followed a different set of options then A may be followed by D
and then E. Further let's assume that the work done at A, B and C is
actually done, i.e., not provisional. One way to think about this is
that visiting B is "the result" of "doing" A and C is the same for B.
Also undoing the business transaction obviously doesn't have the same
effect as a rollback on an ACID transaction, where the work is
provisional and typically we try hard not to expose dirty data to
concurrent users.
Now each of these individual state changes are not coordinated, so if
we have a failure during C, the changes made at A and B remain in
effect and the user has to solicit some compensation through another
medium if necessary, e.g., the phone or email ("give me my money
back.") Let's assume that we want to undo everything at A, B and C if
we don't get a success from C, but if we do get past C then failures
at B aren't important to us, i.e., the transaction can be considered
to "commit" as long as A and C are successfully applied, but it is
only allowed to attempt to "commit"/complete if C is available.
It's possible that the state changes at A, B and C could be
coordinated through some ad hoc approach, e.g., some session
information embedded opaquely in the client response that allows each
site to know it's the same session (aka transaction in this case). In
that case the state changes for A and B could be undone if C fails and
under some well defined rules, e.g., that A and B don't hear from C
within some predetermined period of time. But that protocol needs to
be defined. Another approach is to say that the coordinator/service
interaction is entirely a back-end problem: somehow resources
representing A, B and C need to interact with each other (and
presumably a coordinator) through some out-of-band protocol of which
the client is unaware. There are various problems with this, including
the fact that it means A, B and C somehow need to know they are within
the same transaction, where the coordinator is located, etc. Nothing
insurmountable, but maybe there's a better way?
So, another option is the following: when the client first visits A
that implicitly starts the transaction and enlists a resource at A
with some coordinator residing elsewhere (the way in which the
coordinator is manipulated is RESTful too, creating a resource that
represents the transaction in question.) The client then gets to B,
which obtains the transaction URI through the client interaction and
does likewise with its own participant, before moving the state of the
system (and client) on to C. Finally when the client is finished it
clicks a "complete" link that ends the business transaction and as a
result instructs the transaction resource to do nothing in this case
(success). However, if the client clicked the "undo all" or maybe
"undo B" then the state change at B is undone, ultimately driving the
client to, say, C1. If there are failures, e.g., the client doesn't
get to C, then the transaction may undo all of the work done by A and
B automatically, or it may leave them as-is (typically determined by the
contract defined within the business transaction, set, say, by A if
it's the initiator).
Mark.
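The coordinator option Mark describes can be modelled concretely. Below is a minimal, in-memory sketch of the enlist/complete/undo flow; all names are hypothetical, and each method stands in for what would really be an HTTP request against the transaction resource's URI:

```python
# Toy model of the implicit-transaction pattern: visiting A creates a
# transaction resource at a coordinator; each subsequent service enlists
# a participant via the transaction URI; the client ends the business
# transaction by following a "complete" or "undo all" link.
# All names here are hypothetical stand-ins for HTTP interactions.
import uuid

class Coordinator:
    def __init__(self):
        self.transactions = {}  # transaction URI -> list of undo callbacks

    def create_transaction(self):
        # Think: POST /transactions -> 201 Created with the new tx URI.
        tx_uri = "/transactions/" + uuid.uuid4().hex
        self.transactions[tx_uri] = []
        return tx_uri

    def enlist(self, tx_uri, undo):
        # Think: POST <tx_uri>/participants with a compensation link.
        self.transactions[tx_uri].append(undo)

    def complete(self, tx_uri):
        # Success: the transaction resource does nothing.
        self.transactions.pop(tx_uri)

    def undo_all(self, tx_uri):
        # Failure/"undo all": run compensations in reverse enlistment order.
        for undo in reversed(self.transactions.pop(tx_uri)):
            undo()

class Service:
    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.state = "initial"

    def visit(self, tx_uri):
        # The state change is "actually done", not provisional; the undo
        # callback is a compensation, not an ACID rollback.
        previous, self.state = self.state, "changed"
        self.coordinator.enlist(tx_uri, lambda: setattr(self, "state", previous))

coordinator = Coordinator()
a, b = Service(coordinator), Service(coordinator)
tx = coordinator.create_transaction()  # implicit on the first visit to A
a.visit(tx)
b.visit(tx)
coordinator.undo_all(tx)               # client followed the "undo all" link
```

The point of the sketch is that the client never manipulates the coordinator directly; it only follows links, and enlistment happens as a side effect of visiting each service.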
On 9 Jun 2009, at 20:10, Bob Haugen wrote:
> Here's a very sketchy version of what I had in mind:
>
> The scenario is an etailer getting bids, and then placing an order
> with the lowest bidder.
>
> The etailer POSTs a Request For Quotation (Amazon called these Request
> For Commitment in one meeting) to the appropriate URI of a few
> suppliers. The RFQ contains a URI for the suppliers to POST a Quote
> in response. Each Quote contains a URI (or two) for the etailer to
> follow up with an order or a rejection.
>
> The etailer then POSTs an order (still in a provisional state) to the
> successful bidder's URI. The order contains a URI for the supplier to
> accept the order, and maybe a URI for the supplier to decline the
> order if something has happened in the meantime that has made
> fulfillment impossible.
>
> When the etailer gets an acceptance from the supplier, the etailer
> might send rejection notices to the failed bidders.
>
> I'm sure Mark Little could suggest several refinements, but this is
> just a sketch for discussion purposes.
>
> It's a 2-phase commit agreement scenario, but not ACID: it's not
> Atomic, and the individual resources are not Isolated.
>
> Both the etailer and the suppliers play both client and server roles.
> Each message contains a URI or two for responses, where the recipient
> of a message will (possibly in a separate process) POST a response.
>
> My questions for Roy and this group:
>
> 1. Is this scenario RESTful?
>
> 2. If not, could it be made RESTful with some changes?
>
> 3. What else is wrong with it?
>
> Question for Mark: what's wrong with it from a transaction protocol
> viewpoint?
>
> Thanks,
> Bob Haugen
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
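Bob's choreography above can also be traced in a few lines. This is a toy version of one bid round; supplier names, prices, and URI shapes are invented, and in the real exchange each of these links would arrive inside a POSTed representation:

```python
# Toy trace of the RFQ choreography: every message hands the recipient
# the URIs it needs in order to respond, so each party only follows
# links it was given. Names, prices, and URIs are invented here.

def rfq_round(bids):
    """Given supplier -> quoted price, pick the lowest bid.

    Returns the winning quote's order URI and the rejection URIs of the
    losing bidders (where the etailer would POST rejection notices).
    """
    quotes = [
        {"supplier": name,
         "price": price,
         "order_uri": "/suppliers/%s/orders" % name,
         "rejection_uri": "/suppliers/%s/rejections" % name}
        for name, price in bids.items()
    ]
    winner = min(quotes, key=lambda q: q["price"])
    losers = [q["rejection_uri"] for q in quotes if q is not winner]
    return winner["order_uri"], losers

order_uri, rejections = rfq_round({"acme": 120, "globex": 95, "initech": 110})
```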
Hi Mark, I'm definitely interested in any discussion in this space. If you can start the thread and forward the link, we can continue the discussion there. Perhaps others in the list would find this discussion interesting? Best, Alexandros On Wed, Jun 10, 2009 at 9:43 AM, Mark Little <nmcl2001@...> wrote: > Hi Alexandros. I'd definitely be interested in discussing this protocol to > compare/contrast it with the one we've implemented. If you're interested > then maybe we could take this to the JBossTS forums? > Mark. > > > On 10 Jun 2009, at 00:36, Alexandros Marinos wrote: > > > > Hello all, > > During the last year or so we have been working on a RESTful transaction > model in the University of Surrey. The goal was to have ACID Transactions, > while carefully maintaining all the constraints of REST, including HATEOAS, > oxymoronical as that may sound. > > The model, its motivations and some examples are described in the following > location: > http://www.opadoi.gr/RETROv0.1.pdf > > We intend to look into long-running transactions in the future but felt it > better to focus on an ACID-based proof-of-concept before proceeding further. > > > I was hoping to be able to polish it a little bit more before publishing to > this list, but given the very interesting discussion, I think it is better > to contribute a draft version now and incorporate feedback as it comes in. > > > I hope the work is interesting to the list and all discussion is welcomed. > > Regards, > Alexandros > > > > > >
Subbu Allamaraju wrote: > > > > > On Jun 9, 2009, at 6:46 PM, Bob Haugen wrote: > > > On Tue, Jun 9, 2009 at 5:44 PM, Subbu Allamaraju<subbu@... > <mailto:subbu%40subbu.org>> > > wrote: > >> I can't help but ask how many applications are going to rely on a > >> distributed transactions based on pessimistic locking over the web. > >> Is this > >> just a solution looking for a problem? > > > > Not locking. That does not work over the open Web. > > > > But the use case I know of is automated B2B business exchanges. Lots > > of 'em. Amazon does tons per day, to fulfill your orders. > > Of course, there are lots of such use cases, and are implemented as > operations on resources. Resources define undo/compensation semantics > without needing a new protocol. > One could say the same about linking and pub/sub. Lots of applications need to link, many applications need to pub and sub. Even a few simple standardizations can go a long way, IMO. Maybe it's just a standardization of a few link relationships. Maybe a data format. Maybe a few standardized interactions. As much as the transaction gurus try to make it out to be, this TX stuff isn't rocket science. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
It does seem like a good idea to have an independent "coordinator" capable of notifying a group of participants (e.g. transactional resources) about an outcome of potential significance to all participants... That seems like a very logical extension of the RESTful model.
Eric
________________________________
From: Mark Little <nmcl2001@...>
To: Bob Haugen <bob.haugen@...>
Cc: Roy T. Fielding <fielding@...>; Stefan Tilkov <stefan.tilkov@...>; REST-Discuss Discussion Group <rest-discuss@yahoogroups.com>
Sent: Wednesday, June 10, 2009 4:40:51 AM
Subject: Re: [rest-discuss] rest transactions
Would be cool if you could link the thread. Mark Little wrote: > Hi Alexandros. I'd definitely be interested in discussing this protocol > to compare/contrast it with the one we've implemented. If you're > interested then maybe we could take this to the JBossTS forums? > Mark. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
This is the problem I've had with some of the transactional specifications. Most of the edge conditions and heuristic error conditions are normally tied very closely to a business process and can't be statically defined in one uber algorithm. This is why I think looking through resource-oriented classes might help as RESTful applications are very good at telling us what state transitions are available. Bob Haugen wrote: > > > > Oh, and one more bad thing about compensation: in a long-running > interaction, how do you know when it's complete? Could it still be > compensated tomorrow? > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Mark Little wrote: > Second I think we're in agreement concerning not talking about the > client having to be aware of anything to do with the transaction. The > act of "start" and "commit" (for want of some terms) is an implicit > act by the client visiting specific pages/traverses specific links. > In integration scenarios the client is usually the one that is aware that something needs to be coordinated, not the server. In a SOA environment these clients are going to be interacting with self-contained silos of services that may or may not be aware of a transaction protocol, but may be compensatable nevertheless. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
It depends what you mean by "aware". In this case I'm not saying that the client has to have a step 1 that requires it to go to a site and start a transaction. That could/should be a natural part of interacting with the initiator (server A in the example). Mark. On 10 Jun 2009, at 14:02, Bill Burke wrote: > > > Mark Little wrote: >> Second I think we're in agreement concerning not talking about the >> client having to be aware of anything to do with the transaction. The >> act of "start" and "commit" (for want of some terms) is an implicit >> act by the client visiting specific pages/traverses specific links. >> > > In integration scenarios the client is usually the one that is aware > that something needs to be coordinated. not the server. In a SOA > environment these clients are going to be interacting with > self-contained silos of services that may or may not be aware of a > transaction protocol, but may be compensatable nevertheless. > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > ------------------------------------ > > Yahoo! Groups Links > > >
I think you'll have both scenarios and it will depend on the application. If the server is guiding the client through transitions, then the client probably doesn't need to be aware of things much if at all. I was just stating that I think many clients, specifically in integration scenarios, will be very aware of what is going on. Mark Little wrote: > > > > It depends what you mean by "aware". In this case I'm not saying that > the client has to have a step 1 that requires it to go to a site and > start a transaction. That could/should be a natural part of > interacting with the initiator (server A in the example). > > Mark. > > On 10 Jun 2009, at 14:02, Bill Burke wrote: > > > > > > > Mark Little wrote: > >> Second I think we're in agreement concerning not talking about the > >> client having to be aware of anything to do with the transaction. The > >> act of "start" and "commit" (for want of some terms) is an implicit > >> act by the client visiting specific pages/traverses specific links. > >> > > > > In integration scenarios the client is usually the one that is aware > > that something needs to be coordinated. not the server. In a SOA > > environment these clients are going to be interacting with > > self-contained silos of services that may or may not be aware of a > > transaction protocol, but may be compensatable nevertheless. > > > > > > > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com <http://bill.burkecentral.com> > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Will do. Mark. On 10 Jun 2009, at 13:37, Bill Burke wrote: > Would be cool if you could link the thread.
Hi Jan, * Jan Algermissen <algermissen1971@...> [2009-05-20 00:15]: > Supposed that all feed- or entry-level property elements (e.g. > title, author, id...) did not have complex content but just > simple String values, would it have been a reasonable choice to > define a set of HTTP headers and link relations and use the HTTP > header as the envelope instead of a new XML language (ATOM)? makes perfect sense to me. The point of having them in XML is that it gives you the opportunity to extend the document with information that generic clients can easily ignore. In other words you get robust extensibility. HTTP headers do not afford that in the same way unless you go to some pains to allow it; and even then, the amount of substructure you can express is severely limited in practice. Do you need that? Depends on your application. I wouldn’t be *too* quick to decide that the flexibility is unnecessary. But you might be able to leave open the possibility of using Atom in the future in some way, e.g. based on the media type, so that you don’t paint yourself into a corner without an option for recovery. Then it would make sense to avoid complexity at the outset. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
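Aristotle's "robust extensibility" point can be seen in a few lines: a generic client keeps the Atom elements it knows and silently skips foreign-namespace markup. The x:rating extension element below is invented purely for illustration.

```python
# A generic client reads the elements it understands and ignores foreign
# markup; the x:rating extension element is invented to demonstrate this.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

entry_xml = """
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:x="http://example.org/ext">
  <title>Sample entry</title>
  <id>urn:example:entry-1</id>
  <x:rating>5</x:rating>
</entry>
"""

def known_fields(xml_text):
    """Collect only the entry-level properties a generic client knows."""
    root = ET.fromstring(xml_text)
    wanted = {ATOM + "title", ATOM + "id"}
    return {el.tag[len(ATOM):]: el.text for el in root if el.tag in wanted}

fields = known_fields(entry_xml)  # the x:rating extension is skipped
```

Doing the same with HTTP headers would require inventing an ignorable-extension convention yourself, which is the "some pains" mentioned above.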
I'm trying to develop a solid understanding of the principles behind REST, and I'm wrestling with what seems like a big inconsistency. I have a feeling there's a mismatch between my semantics and the community's. I've seen it said over and over again that passing session IDs (typically in a cookie) is generally considered unRESTful. I've also seen it said over and over again that passing credentials along with every request is considered very RESTful. The reasonings are unclear to me, though. Considering the "statelessness" principle of REST, if I think of a RESTful architecture as one in which every request passes all the information needed for the server to fulfill the request, then I don't see how passing session IDs (for instance) in requests violates that principle. To me, a session ID is a simple alphanumeric string. Authentication credentials are (typically expressed as) a simple alphanumeric string. Both strings persist on the client. Both strings are received by the server and are compared to information held on the server in order to make decisions on the server about how to process the request, and thus the content of the strings can dramatically affect the server's response to the request. When approached this way, I lose the meaning behind the distinction. What am I misunderstanding?
--- In rest-discuss@yahoogroups.com, "object01" <object01@...> wrote: > > I'm trying to develop a solid understanding of the principles behind REST, and I'm wrestling with what seems like a big inconsistency. I have a feeling there's a mismatch between my semantics and the community's. > > I've seen it said over and over again that passing session IDs (typically in a cookie) is generally considered unRESTful. I've also seen it said over and over again that passing credentials along with every request is considered very RESTful. > > The reasonings are unclear to me, though. Considering the "statelessness" principle of REST, if I think of a RESTful architecture as one in which every request passes all the information needed for the server to fulfill the request, then I don't see how passing session IDs (for instance) in requests violates that principle. > > To me, a session ID is a simple alphanumeric string. Authentication credentials are (typically expressed as) a simple alphanumeric string. Both strings persist on the client. Both strings are received by the server and are compared to information held on the server in order to make decisions on the server about how to process the request, and thus the content of the strings can dramatically affect the server's response to the request. > > When approached from this way, I lose the meaning behind the distinction. What am I misunderstanding? > Is your cookie passing all the application state it needs to pass or is some of that state held by the server, such that your request could not be load balanced between multiple servers? Check out http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm section 6.3.4.2. Eb
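Eb's load-balancing test can be made concrete. In this toy model (the credential check is a deliberately naive stand-in, not a real auth scheme), a request carrying credentials verifies on any server, while a session ID only resolves on the server that happens to hold the session table entry:

```python
# Toy illustration: credentials in the request are self-descriptive and
# load-balance; a session ID depends on hidden per-server state. The
# user names, password check, and status codes are simplified stand-ins.

USERS = {"alice": "s3cret"}          # replicated config, known to every server

class Server:
    def __init__(self):
        self.sessions = {}           # per-server state, NOT replicated

    def login(self, user, password):
        if USERS.get(user) == password:
            sid = "sess-%d" % len(self.sessions)
            self.sessions[sid] = user
            return sid

    def handle(self, request):
        # Self-descriptive request: everything needed travels with it.
        if "credentials" in request:
            user, password = request["credentials"]
            return 200 if USERS.get(user) == password else 401
        # Session-based request: depends on hidden server-side state.
        if "session" in request:
            return 200 if request["session"] in self.sessions else 401
        return 401

server_a, server_b = Server(), Server()
sid = server_a.login("alice", "s3cret")

with_creds = {"credentials": ("alice", "s3cret")}
with_session = {"session": sid}

# The credentialed request load-balances; the session request does not.
results = (server_a.handle(with_creds), server_b.handle(with_creds),
           server_a.handle(with_session), server_b.handle(with_session))
```

The asymmetry is exactly the one Eb points at: both values are opaque strings on the wire, but only one of them lets any replica answer the request.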
Just a few off the top of my head. 1. Session requires the server to keep state that is created out of band (your session id). You may well breach the self-descriptive message constraint if the session needs to be known by the server in advance. 2. To get a session, you have a process by which you pre-define a web page or a URI a server has to go to, which will have to be different for each site. You're not designing for serendipity and reuse, but for your specific site, locking users into your authentication method. 3. Authentication information is not part of the resource state, and embedding it in the resource is, from my perspective, a dangerous ride. I'd find it very hard to describe how a username and password, or session id, is a partial state transfer of a resource Blog or Customer. So you may breach the intended definition of a representation of a resource. Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of object01 Sent: 11 June 2009 18:42 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] RESTful vs. unRESTful: session IDs and authentication
On Thu, Jun 11, 2009 at 5:23 PM, Bill Burke <bburke@...> wrote: > > > Ebenezer Ikonne wrote: > >> >> >> >> --- In rest-discuss@yahoogroups.com <mailto: >> rest-discuss%40yahoogroups.com <rest-discuss%2540yahoogroups.com>>, >> "object01" <object01@...> wrote: >> > >> > I'm trying to develop a solid understanding of the principles behind >> REST, and I'm wrestling with what seems like a big inconsistency. I have a >> feeling there's a mismatch between my semantics and the community's. >> > >> > I've seen it said over and over again that passing session IDs >> (typically in a cookie) is generally considered unRESTful. I've also seen it >> said over and over again that passing credentials along with every request >> is considered very RESTful. >> > >> > The reasonings are unclear to me, though. Considering the >> "statelessness" principle of REST, if I think of a RESTful architecture as >> one in which every request passes all the information needed for the server >> to fulfill the request, then I don't see how passing session IDs (for >> instance) in requests violates that principle. >> > >> > To me, a session ID is a simple alphanumeric string. Authentication >> credentials are (typically expressed as) a simple alphanumeric string. Both >> strings persist on the client. Both strings are received by the server and >> are compared to information held on the server in order to make decisions on >> the server about how to process the request, and thus the content of the >> strings can dramatically affect the server's response to the request. >> > >> > When approached from this way, I lose the meaning behind the >> distinction. What am I misunderstanding? >> > >> >> Is your cookie passing all the application state it needs to pass or is >> some of that state held by the server, such that your request could not be >> load balanced between multiple servers? 
>> >> > Digest authentication has the problem of not being able to be load balanced > between multiple servers (the server has to keep track of nonces), right? > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > I believe it would fail initial authentication on the "other" server, but the client still possesses everything it needs to pass authentication whereas with a cookie that may not be the case. Let me know if I'm mistaken, however I believe I get your point.
Ebenezer Ikonne wrote: > > > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, "object01" <object01@...> wrote: > > > > I'm trying to develop a solid understanding of the principles behind > REST, and I'm wrestling with what seems like a big inconsistency. I have > a feeling there's a mismatch between my semantics and the community's. > > > > I've seen it said over and over again that passing session IDs > (typically in a cookie) is generally considered unRESTful. I've also > seen it said over and over again that passing credentials along with > every request is considered very RESTful. > > > > The reasonings are unclear to me, though. Considering the > "statelessness" principle of REST, if I think of a RESTful architecture > as one in which every request passes all the information needed for the > server to fulfill the request, then I don't see how passing session IDs > (for instance) in requests violates that principle. > > > > To me, a session ID is a simple alphanumeric string. Authentication > credentials are (typically expressed as) a simple alphanumeric string. > Both strings persist on the client. Both strings are received by the > server and are compared to information held on the server in order to > make decisions on the server about how to process the request, and thus > the content of the strings can dramatically affect the server's response > to the request. > > > > When approached from this way, I lose the meaning behind the > distinction. What am I misunderstanding? > > > > Is your cookie passing all the application state it needs to pass or is > some of that state held by the server, such that your request could not > be load balanced between multiple servers? > Digest authentication has the problem of not being able to be load balanced between multiple servers (the server has to keep track of nonces), right? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Well, you put what you want in the nonce. I use an encrypted timestamp myself, which every server can check independently.

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Bill Burke
Sent: 11 June 2009 22:23
Subject: Re: [rest-discuss] Re: RESTful vs. unRESTful: session IDs and authentication
> [quoted message trimmed]
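The encrypted-timestamp nonce described here can be sketched in a few lines. This is only an illustration of the idea, not RFC 2617 machinery: the shared key, the HMAC-SHA-256 construction, and the 300-second validity window are all assumptions made for the sketch. Any server holding the key can validate a nonce without tracking which nonces it has issued:

```python
import base64
import hashlib
import hmac
import time

# Shared by all servers behind the load balancer (illustrative value).
SHARED_KEY = b"replace-with-a-real-secret"

def make_nonce(now=None):
    # Nonce = base64("timestamp:HMAC(key, timestamp)")
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SHARED_KEY, ts.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(f"{ts}:{sig}".encode()).decode()

def check_nonce(nonce, max_age=300, now=None):
    # Any server with SHARED_KEY can verify the signature and the age,
    # so no server-side nonce table is needed.
    try:
        ts, sig = base64.b64decode(nonce).decode().split(":", 1)
    except (ValueError, UnicodeDecodeError):
        return False
    expected = hmac.new(SHARED_KEY, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    age = (now if now is not None else time.time()) - int(ts)
    return 0 <= age <= max_age
```

The trade-off, as the thread notes later, is that without a used-nonce list a request can be replayed within the validity window.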
>>>>> "Bill" == Bill Burke <bburke@...> writes:
Bill> Digest authentication has the problem of not being able to be load
Bill> balanced between multiple servers (the server has to keep
Bill> track of nonces), right?
See reply by Ebenezer, but note that this also depends on the kind of
load-balancing. If you load-balance at the TCP/IP level, you end up at
the same server.
--
Cheers,
Berend de Boer
To save myself having to repeat my justification for clarifying ReST, I've posted a tongue-in-cheek entry at http://serialseb.blogspot.com/2009/06/fighting-for-rest-or-tale-of-ice-cream.html Hopefully Roy won't mind me portraying him as an ice-cream maker, but it's the way I explain it to people and they seem to get it much faster than when I just point to the thesis.
<quote>
Roy – No, it’s made of chocolate, if you put vanilla in it it’s vanilla ice-cream, because there’s no chocolate in it?
Engineer – You’re just an extremist, you’re trying to confuse us!
</quote>

LoL,

Jan

On Jun 12, 2009, at 11:26 AM, Sebastien Lambla wrote:
> [quoted message trimmed]
Contributing nothing, but:
Hershey’s Homemade Chocolate Ice Cream – Serves 6
* 5 eggs
* 1 cup sugar
* 2 cans condensed milk
* 6 cups Hershey's Chocolate Milk
* 1 1/2 tsp. vanilla
* 3 oz. semi-sweet melted chocolate
Chocolate ice cream is made with vanilla! Maybe you should use
sherbert/sorbet :)
On Jun 12, 2009, at 11:37 AM, Jan Algermissen wrote:
> [quoted message trimmed]
--- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote:
> Hopefully Roy won't mind me portraying him as an ice-cream maker [...]

Touché
Bill Burke wrote: > > Digest authentication has the problem of not being able to be load > balanced between multiple servers (the server has to keep track of > nonces), right? The server may keep a list of used nonces to prevent replay attacks, but it doesn't have to. According to RFC 2617[1] (see section 3.5) the server-generated nonce is also part of the client's request, so I see no technical reason why HTTP digest authentication would prevent load balancing in any way. If this was the case it would also severely break the statelessness constraint of REST, wouldn't it? [1]: http://www.ietf.org/rfc/rfc2617.txt Regards, Jochen
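For context, the core of the RFC 2617 computation (section 3.2.2, without the optional qop directive) looks like this. The point Jochen makes is that every input, including the server-issued nonce, travels in the client's Authorization header, so any server that can recompute the hashes (and validate the nonce) can verify the request statelessly. A minimal sketch, not a complete Digest implementation:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    # RFC 2617, no qop:
    #   HA1      = MD5(username:realm:password)
    #   HA2      = MD5(method:digest-uri)
    #   response = MD5(HA1:nonce:HA2)
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

The server recomputes the same value from its stored credentials and the nonce echoed back by the client, then compares it to the client's `response` field; nothing about the exchange requires per-session state.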
You know, adding the fact that there's a bit of vanilla in the original
makes the point even stronger. I may just have to add that to my story :)
Seb
-----Original Message-----
From: Robert Koberg
Sent: 12 June 2009 16:51
Subject: Re: [rest-discuss] The tale of Roy Fielding the ice-cream maker
> [quoted message trimmed]
So, if I don't have a 1/2 tsp. vanilla for the recipe, is it still REST? It will still taste pretty damn good...

Sebastien Lambla wrote:
> You know, adding the fact that there's a bit of vanilla in the original makes the point even stronger. I may just have to add that to my story :)
> [rest of quoted thread trimmed]

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Jun 12, 2009, at 4:06 PM, Bill Burke wrote:
> So, if I don't have a 1/2 tsp. vanilla for the recipe, is it still REST? It will still taste pretty damn good...

Perhaps. Just don't call it chocolate ice cream!

> [rest of quoted thread trimmed]
Robert Koberg wrote: > > On Jun 12, 2009, at 4:06 PM, Bill Burke wrote: > >> >> >> So, if I don't have a 1/2 tsp. vanilla for the recipe, is it still REST? >> It will still taste pretty damn good... >> > > Perhaps. Just don't call it chocolate ice cream! > Sigh...The AOP guys had the same hangups... -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Jun 12, 2009, at 8:26 AM, Sebastien Lambla wrote: > Hopefully Roy won’t mind me portraying him as an ice-cream maker, > but it’s the way I explain it to people and they seem to get it > much faster than when I just point to the thesis. > As it turns out, my family has a tradition of making home-made, hand-cranked ice cream for family reunion parties. However, I never made chocolate ice cream -- it interferes with the proper freezing process (a big deal when you are cranking by hand) and is less flexible than letting people add their own toppings to vanilla. IOW, chocolate in the mix adds unnecessary complexity and the result is overly coupled to a user-specific preference. ....Roy
So do you recommend vanilla over chocolate for loose coupling to user preference?

(Sent from Deep Space 9 station)

> From: fielding@...
> Date: Sat, 13 Jun 2009 13:14:14 -0700
> Subject: Re: [rest-discuss] The tale of Roy Fielding the ice-cream maker
> [quoted message trimmed]
And what about custom headers/toppings? ;)

-L

On Sat, Jun 13, 2009 at 9:25 PM, Sebastien Lambla <seb@...> wrote:
> So do you recommend vanilla over chocolate for loose coupling to user preference?
> [rest of quoted thread trimmed]
I think chocolate toppings are good; I think hidden chocolate coulis in the cone is nasty: not visible from the outside, gets very leaky.

To: rest-discuss@yahoogroups.com
From: luke.crouch@...
Date: Sun, 14 Jun 2009 08:30:31 -0500
Subject: Re: [rest-discuss] The tale of Roy Fielding the ice-cream maker
> [quoted thread trimmed]
--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > Sigh...The AOP guys had the same hangups... Sorry... I did not understand your reference. Could you elaborate?
http://www.google.com/#hl=en&q=site%3Atech.groups.yahoo.com%20inurl:rest-discuss I noticed that Google has recently (perhaps even as recently as last night) changed their URI formatting. They killed the question mark and are now using fragment identifiers for Google Search.
That is just for browser-side state management. Subbu On Jun 18, 2009, at 8:19 AM, johnzabroski wrote: > http://www.google.com/#hl=en&q=site%3Atech.groups.yahoo.com > %20inurl:rest-discuss > > I noticed that Google has recently (perhaps even as recent as last > night) changed their URI formatting. They killed the question mark > and are now using fragment identifiers for Google Search. >
--- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote:
> That is just for browser-side state management.

"Just"? Sure, I will agree it *seems* innocuous but it's not. It's the difference between a search service and a search application. However, they still support the old "google.com/search?" URI, by the way. This is probably why I never noticed it sooner: I always use a keyboard hotkey that triggers a search accelerator that uses the old URI.

I know GMail and others use this "Kill The Question Mark" trick, but essentially it's a big shift in abstracting out the browser and HTTP.

Bottom line: This replaces the browser's TransferAgent object/service with the application's own InternalTransferAgent object/service, insulating the application from the browser's knowledge about how to handle URI requests. This is a radical shift in architecture, even if seemingly innocuous, and I call it _a change for the good of humankind_.

This degree of loose coupling is similar to not depending on STDIN/STDOUT, support for which varies by OS, for your metaphor of coroutines. Instead of such coupling to an I/O device, one might use sockets instead (the ports/connections metaphor of coroutines).

So "just" is unjustified. Right now browsers encourage a bias towards tight coupling with the transfer agent, and it is only through coincidence that it is stable enough to meet most needs, up until now.

Maybe I am alone on this, but I think the ? and # system is the hugest hack in HTTP, and feel it came as an afterthought for how to handle what seemed like a small and simple problem.
(I reserve the right to change my mind.)
You may be reading too much into this. Neither "?" nor "#" belong to the HTTP layer. Those are part of RFC 3986, and the fact that browsers don't send the part of the URI after the "#" separator to servers is dictated by the media type (primarily the HTML family).

Thanks
Subbu

On Jun 18, 2009, at 11:51 AM, johnzabroski wrote:
> [quoted message trimmed]
--- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote:
> You may be reading too much into this. Neither "?" nor "#" belong to the HTTP layer. [...]

Thanks for your thoughtful reply. I am not reading too much into this.

I am not sure what you are saying, other than that what you are saying is incorrect. URIs are mentioned in the HTTP spec, and there is the HTTP URI scheme. The HTTP URI scheme does not say anything about fragment identifiers.

Likewise, you will note, in the practical sense of "what to expect in real-world implementations", that search engines do not index content after the #. This imposes a real practical limitation on real-world use. However, Google killing the question mark is huge. This opens the door, just a crack, to the idea of Google having to require its Sitemap technology to understand fragment identifiers. In hypermedia theory, this means we can finally start building Web applications with "open linking" and "deep linking".

As a separate matter, I do not understand how one can say my points are over-reading this, when there is an unproductive thread about renaming HATEOAS with another acronym that has 56 replies and counting. Um, priorities, people?

If anything, I think what I am saying here is what I would hope Gartner's think tank would be focusing its energy on. Unfortunately, I have to face the reality that some people find a simple idea like REST so complicated that the best solution is to confuse them with even more acronyms with empty meanings. Gartner's clients expect PowerPoints with those kinds of things. I get that.

I guess I am just using REST to build real applications and not simply writing about REST for the sake of writing about it.
I'm not a client looking for stuff to stick in a PowerPoint presentation to show my boss.
It seems Google has reverted to the question mark format. The link which was originally posted, and which worked as formatted yesterday, now redirects to a URL with the ? instead of the #.

Good food for thought and research for me personally, though; I did not realise there was such a heavy debate on the proper use of hashes and question marks in URLs.

-Nissan

On Fri, Jun 19, 2009 at 9:30 AM, johnzabroski <johnzabroski@...> wrote:
> [quoted message trimmed]

--
Nissan Dookeran
http://redditech.blogspot.com
http://redditech.wordpress.com
----
"Find a problem. Figure out how to solve the problem. Find more people with the same problem and you have a business." (Gary Schoeniger, founder of the Entrepreneurial Learning Initiative)
The Law of Motion & Responsibility: If you are neither learning nor contributing you are needed elsewhere.
How can you engineer a better data-oriented REST conversation between an automated REST client and an automated REST server? On the HTML side of REST, where there's a human involved, there are "experts" in the field of "Information Architecture" who have the responsibility to construct "a big picture" that assures that the various types of system users have appropriate paths (and links) through the system, supporting each user profile's needs. Has anyone used an "Information Architect" to develop a REST API? -Solomon
johnzabroski wrote:
> I am not sure what you are saying, other than that you are saying incorrect information. There is URIs mentioned in the HTTP spec, and there is the HTTP URI scheme. The HTTP URI scheme does not say anything about fragment identifiers.

That's what Subbu is trying to tell you - how to dispatch on a fragid is defined by the media type, not the transfer protocol or URI processing. Fragment identifiers are broken as designed in the web architecture. As we say in Ireland, I wouldn't start from there.

> Likewise, you will note in the practical sense of "what to expect in real world implementations", that Search Engines do not index content after the #. [...] In hypermedia theory, this means we can finally start building Web applications with "open linking" and "deep linking".

Since you say you're coming at this from a practical angle, I would say it means user agents will need to know a lot more about media types than they do today, which is a very real form of coupling. Probably that will get pushed up to application developers.

> I guess I am just using REST to build real applications and not simply writing about REST for the sake of writing about it. I'm not a client looking for stuff to stick in a PowerPoint presentation to show my boss.

You don't need to justify yourself here.
Unfortunately this is a detail of the web's implementation/specs more than REST canon. Bill
Hello,
Nissan Dookeran wrote:
> It seems Google has reverted to the question mark format.
> The link which was originally posted, and which worked as formatted
> yesterday, now redirects to a URL with the ? instead of #
> Good food for thought and research for me personally though, I did not
> realise there was such a heavy debate on the proper use of hashes and
> question-marks in URLs.
Question marks (?) and hashes (#) are completely different.
Many people in the REST area try to design systems that use "RESTful"
URIs, by which they mean using slashes (/) instead of question marks
(for example <http://example.com/item/1> instead of
<http://example.com/?item=1>). In both cases, the entire URI (fragment
excepted) will be used for dereferencing. The distinction between the
two formats is mainly cosmetic and an implementation detail regarding
how to dispatch the call on the server side; the client shouldn't really
care since URIs are meant to be opaque (except when the hypermedia
dictates how to form a URI, which is the case of GET forms in HTML).
In contrast, the "fragment identifier component of a URI allows indirect
identification of a secondary resource by reference to a primary
resource and additional identifying information" [1]. It is not actually
used for dereferencing, so never seen by the server.
[1] http://tools.ietf.org/html/rfc3986#section-3.5
> > --- In rest-discuss@yahoogroups.com
> <mailto:rest-discuss%40yahoogroups.com>, Subbu Allamaraju <subbu@...> wrote:
> > >
> > > You may be reading too much into this. Neither "?" nor "#" belong to
> > > the HTTP layer. Those are part of RFC 3986, and the fact that browsers
> > > don't send the part of the URI after the "#" separator to servers is
> > > dictated by the media type (primarily the HTML family).
Although what secondary resource the fragment identifies depends on the
media type (once the representation has been obtained), I don't think
that not sending it is dictated by the media type.
The way I read [1] is that it should never be sent for dereferencing,
irrespective of the media type of the representations involved: "As
such, the fragment identifier is not used in the scheme-specific
processing of a URI; instead, the fragment identifier is separated
from the rest of the URI prior to a dereference, and thus the
identifying information within the fragment itself is dereferenced
solely by the user agent, regardless of the URI scheme." [1]
To come back to the initial point (Google using
<http://www.google.com/#hl=en&q=site%3Atech.groups.yahoo.com%20inurl:rest-discuss>),
this seems to be relying on JavaScript.
If you look at what the browser does when you paste this URI in the
location bar: it dereferences <http://www.google.com/> (maybe redirected
to <http://www.google.co.uk/> or similar) and obtains an HTML page that
contains some JavaScript. Then, there is an automatic redirection to the
URI with a question mark (which is the one that is actually used for the
search).
Presumably, if this redirection wasn't occurring yesterday, an
asynchronous request was being made to populate the page with the
search results (instead of the single request made when using the question mark).
I think this is a trick that can also be used with GWT-based applications.
Best wishes,
Bruno.
--- In rest-discuss@yahoogroups.com, Bill de hOra <bill@...> wrote:
> johnzabroski wrote:
> > Subbu Allamaraju <subbu@> wrote:
> > > You may be reading too much into this. Neither "?" nor "#" belong to
> > > the HTTP layer. Those are part of RFC 3986, and the fact that browsers
> > > don't send the part of the URI after the "#" separator to servers is
> > > dictated by the media type (primarily the HTML family).
> > [...]
> > I am not sure what you are saying, other than that you are saying
> > incorrect information. There are URIs mentioned in the HTTP spec, and
> > there is the HTTP URI scheme. The HTTP URI scheme does not say anything
> > about fragment identifiers.
>
> That's what Subbu is trying to tell you - how to dispatch on a fragid is
> defined by the media type, not the transfer protocol or URI processing.
> Fragment identifiers are broken as designed in the web architecture. As
> we say in Ireland, I wouldn't start from there.

Then where would you start from? Maybe I am missing a completely obvious practical or even theoretical alternative to this. I've said in the past that fragment identifiers are broken, because they are not object-oriented and do not lend themselves well to representing object-oriented hypermedia. I am not sure where you or anyone else stands on "fragment identifiers are broken". I don't know of any RFC or any IETF document that says this, anywhere.

> > Likewise, you will note in the practical sense of "what to expect in
> > real world implementations", that Search Engines do not index content
> > after the #. This imposes a real world practical limitation on real
> > world use. However, Google killing the question mark is huge. This kind
> > of opens the door, just a crack, about the idea of Google having to
> > require its Sitemap technology be able to understand fragment
> > identifiers. In hypermedia theory, this means we can finally start
> > building Web applications with "open linking" and "deep linking".
> Since you say you're coming at this from a practical angle, I would
> say it means user agents will need to know a lot more about media types
> than they do today, which is a very real form of coupling. Probably that
> will get pushed up to application developers.

No, you have it wrong, in my humble opinion. Today, the user agent we all use, The Web Browser Model, is too tightly coupled to the media type. The Web Browser is a program that has to know how to render the media formats it receives. It has to be way too smart. I am arguing for the dumbing down of this and a return to a more reasonable Object-Oriented approach where "you don't need a browser", where your code and data go together. The idea that a browser should know how to render HTML is backwards. Notice that I am saying the Browser should simply be a gateway, and that inside the browser we can embed application-specific gateways. In this way, the Browser can be configured more towards user preference.

Currently, if you want to write a Browser app, you talk to a DOM Bridge that provides some JavaScript call. You shouldn't even know you are doing that. Something like an HREF is a pervasive way of thinking, and therefore by state of the art we have tightly coupled the resource identification mechanism with the browser's transfer agent. Even new technologies code directly against this model, and _that_ is the problem for application developers. Some big company like Microsoft writes an API like the Silverlight Navigation Framework, and it is tightly coupled to the browser's transfer agent. Then every application developer in the world who programs for that Silverlight media type is tightly coupled.

I think of the Browser of the future as a software configuration management tool, with the ability to get out of the way of user accessibility issues and the like. The Browser should give you a flexible, scalable, distributed model for accessing content like object-oriented programs.
You could certainly argue that HTML/XHTML decouples interface from implementation, but that is a ridiculous argument -- we all know from just looking at the gold standard for rendering pages, the Acid Test, that nobody can even correctly implement these interfaces. So it makes more sense to talk about shipping a working versionable object, a component, that has code+data. I think the best explanation of this from an engineering standpoint, for MBAs, is Joel Spolsky's Martian Headsets. (Although I've written a similar essay, it is nowhere near as funny or entertaining, and I take myself too seriously.)

> > I guess I am just using REST to build real applications and not simply
> > writing about REST for the sake of writing about it. I'm not a client
> > looking for stuff to stick in a PowerPoint presentation to show my boss.
>
> You don't need to justify yourself here.

Thanks.

> Unfortunately this is a detail of the web's implementation/specs more than REST canon.

Hmm, I am not sure what your last sentence means. I can sort of perhaps see an argument for saying "this is not REST", but since a lot of this discussion revolves around media types, and Roy acknowledges that part of his thesis as wanting, I thought I would start to color it in with my own opinions. I guess I am reading your sentence as "Unfortunately this is a detail we do not talk about, even though it needs improvement." Perhaps not fair, but where else are you leading me? It sounds like what I need to hear then is "stop going down this direction, you need to re-crystallize, here is what you need to do and where you need to go." If you are simply saying "the browser networking stack needs improvement", then okay, but I'm not waiting until 2022 for HTML5 to see something done.
Can I use REST for login page? If yes, would you please tell me how to do it. I don't know how to use REST in this case ! Please help me !
Hi,

cule_barca wrote:
> Can I use REST for login page?

A "login page" implies that you create a server-side session to hold authentication data and application state. Any subsequent request to another server running your application would require your client to log in on your "login page" again to create an appropriate session.

In a RESTful environment, any request should be stateless and contain all the information your application needs to understand the request, e.g. authentication data. If you use RESTful HTTP (like almost everyone here) you could use an HTTP authentication mechanism like Basic authentication or Digest authentication to provide the needed information to the server.

HTH,
Jochen
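A minimal sketch of what Jochen describes, in Python (the credentials are hypothetical; the point is that each request carries its own authentication, so no server-side login session is needed):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the Authorization header that accompanies *every* request in a
    stateless setup (HTTP Basic authentication, RFC 2617 at the time)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# No login page, no session: the credentials travel with each request.
print(basic_auth_header("alice", "s3cret"))
# {'Authorization': 'Basic YWxpY2U6czNjcmV0'}
```

Any server in the cluster can authenticate the request on its own, which is exactly the statelessness constraint at work.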
Okay, I think I see Seb's and Bill's objection to what I am saying.

I called # and ? the hugest hack in HTTP. They are trying to tell me # is opaque to HTTP. I get that. I shouldn't have called them the hugest hack in HTTP without qualifying it. Basically, in much the same way we hijack GET/POST in HTTP, people hijack ? and #. Bill and Seb are saying why should I care, it is not a theoretical limitation.

I care because of discoverability, primarily by other user agents. This is the "open linking" and "deep linking" concepts from hypermedia theory. Really, it has nothing to do with HTTP itself, but with the hacks *in* HTTP. However, just as browsers only support GET/POST because that is all HTML qualifies, browsers only really support a limited networking stack because that is all HTML originally qualified for. I don't think any user agent that claims to support HTTP but whose primary media type is tightly coupled to the user agent's transfer agent is a clean HTTP user agent.

Did I take my Pepto-Bismol and get rid of the diarrhea of the mouth???
johnzabroski wrote:
> > Fragment identifiers are broken as designed in the web architecture. As
> > we say in Ireland, I wouldn't start from there.
>
> Then where would you start from? Maybe I am missing a completely obvious practical or even theoretical alternative to this. I've said in the past that fragment identifiers are broken, because they are not object-oriented and do not lend themselves well to representing object-oriented hypermedia. I am not sure where you or anyone else stands on "fragment identifiers are broken". I don't know of any RFC or any IETF document that says this, anywhere.

They're broken because they couple URIs to media representations. My stance, fwiw, is not to design them into my systems.

> > > Likewise, you will note in the practical sense of "what to expect in real world implementations", that Search Engines do not index content after the #. This imposes a real world practical limitation on real world use. However, Google killing the question mark is huge. This kind of opens the door, just a crack, about the idea of Google having to require its Sitemap technology be able to understand fragment identifiers. In hypermedia theory, this means we can finally start building Web applications with "open linking" and "deep linking".
> >
> > Since you're say you're coming from this from a practical angle, I would say it means user agents will need to know a lot more about media types than they do today, which is a very real form of coupling. Probably that will get pushed up to application developers.
>
> No, you have it wrong, in my humble opinion. Today, the user agent we all use, The Web Browser Model, is too tightly coupled to the media type.

What I said was that the problem of understanding formats and objects will be pushed up to application developers. This is what happens already today with JSON (but not so much with Atom and hardly at all with HTML).
If I'm wrong, please explain how.

> The Web Browser is a program that has to know how to render the media formats it receives. It has to be way too smart. I am arguing for the dumbing down of this and returning to a more reasonable Object-Oriented approach where "you don't need a browser", where your code and data go together.

There are systems that do that already and have attempted to do that, including the goal of "fixing the web" - they haven't been adopted even with massive industry backing. Distributed OO is a niche compared to the Web.

> The idea that a browser should know how to render HTML is backwards. Notice that I am saying the Browser should simply be a gateway, and that inside the browser we can embed application-specific gateways.

Gateways already exist in the form of CGI. They're called gateways for a reason, as they're not part of the web architecture.

> I think of the Browser of the future as a software configuration management tool, with ability to get out of the way of user accessibility issues and the like. The Browser should give you a flexible, scalable, distributed model for accessing content like object-oriented programs.

I think they already do. Was there a specific "not-browser" application you had in mind?

> You could certainly argue that HTML/XHTML decouples interface from implementation, but that is a ridiculous argument -- we all know from just looking at the gold standard for rendering pages, the Acid Test, that nobody can even correctly implement these interfaces.

That's ok, I didn't argue that. I think you'll find that most of the wildly successful formats are not consistently implemented. There's nothing exceptional about HTML, except that "wildly successful" is an understatement.

> So it makes more sense to talk about shipping a working versionable object, a component, that has code+data.

Why does it make more sense? Code on demand is part of REST, but the history of distributed objects is dismal. SOAP has failed for Web-scale delivery. Even JavaScript gets reduced down to JSON.

If there's something beyond the syntax-related stuff we do today, I suspect it's likely to be data that has more precise semantics (e.g. KIF/RDF/OWL) and, in the interim, formats like Atom, microformats and RDFa, maybe JSON-based vocabularies.

Developers like their objects so I guess people will keep trying to go down that path, and you seem happy to do so. Otherwise this debate is years old, and I've seen nothing in over a decade that suggests we'll move off the current web to something approaching OO, no matter how stupid it seems to each generation of programmers who discover the web and immediately wonder how to fix it.

> I guess I am reading your sentence as "Unfortunately this is a detail we do not talk about, even though it needs improvement." Perhaps not fair, but where else are you leading me? It sounds like what I need to hear then is "stop going down this direction, you need to re-crystallize, here is what you need to go and where you need to go."

I would say distributed objects in the way you're talking about them are a dead end, yes.

Bill
Jochen, I humbly disagree. A token passed by a header param, such as a cookie, can be just as RESTful as the solutions you describe. You don't have to give userid/password in every request. Case in point: OAuth. A login page/resource is a Client Side construct, which in itself is as RESTful as any other resource. I do agree that if your server handles that page by creating a session that's tied to a specific machine, you're inherently unRESTful. However, if you're using distributed session sharing (through coherence or memcached for example), a sessionid can fit in well in a RESTful architecture. -Solomon On Fri, Jun 19, 2009 at 11:42 AM, Jochen Schalanda <jochen@...>wrote: > > > Hi, > > > cule_barca wrote: > > Can I use REST for login page? > > A "login page" implies that you create a server side session to hold > authentication data and application state. Any subsequent request to > another server running your application would require your client to > login on your "login page" again to create an appropriate session. > > In a RESTful environment, any request should be stateless and contain > all the information your application needs to understand the request, > e.g. authentication data. If you use RESTful HTTP (like almost anyone of > us here) you could use an HTTP authentication mechanism like Basic > authentication or Digest authentication to provide the needed > information to the server. > > HTH, > Jochen > > >
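One way to read Solomon's point is that a token can itself be self-describing, so any server can validate it without a per-server session. A hedged Python sketch (the key name, token format, and TTL are all invented for illustration; a real system would use OAuth or a similar scheme):

```python
import hashlib
import hmac
import time

SECRET = b"key-shared-by-all-servers"  # hypothetical signing key

def issue_token(user: str, ttl: int = 3600) -> str:
    """Mint a signed, self-contained token: user, expiry, and an HMAC."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user}|{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{user}|{expires}|{sig}"

def verify_token(token: str) -> bool:
    """Any server holding SECRET can verify the token; no session lookup."""
    user, expires, sig = token.split("|")
    expected = hmac.new(SECRET, f"{user}|{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

token = issue_token("alice")
print(verify_token(token))                              # True
print(verify_token(token.replace("alice", "mallory")))  # False
```

Whether this counts as RESTful is exactly the dispute in this thread: the request is still self-descriptive (no server-side session to look up), even though the token was minted earlier.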
>>>>> "cule" == cule barca <vantu.ituns@...> writes:
cule> Can I use REST for login page? If yes, would you please tell
cule> me how to do it. I don't know how to use REST in this case !
cule> Please help me !
Some thoughts here:
http://www.berenddeboer.net/rest/authentication.html
--
Cheers,
Berend de Boer
>>>>> "Solomon" == Solomon Duskis <sduskis@...> writes:
Solomon> Jochen, I humbly disagree. A token passed by a header
Solomon> param, such as a cookie, can be just as RESTful as the
Solomon> solutions you describe. You don't have to give
Solomon> userid/password in every request.
Your solution is stateful, and therefore not REST.
--
Cheers,
Berend de Boer
Saving resource state on the server side is RESTful, as long as it doesn't bind a client to a single server. The REST constraint is that you can't have "stateful communication" that binds a client to a SPECIFIC server. I really don't see the difference between a random header/cookie value and a more specific one like Basic/other authentication. Isn't OAuth RESTful? As an example of RESTful resource state, couldn't you have a REST system that saves some data to a database?

-Solomon

On Fri, Jun 19, 2009 at 1:46 PM, Berend de Boer <berend@...> wrote:
> >>>>> "Solomon" == Solomon Duskis <sduskis@...> writes:
>
> Solomon> Jochen, I humbly disagree. A token passed by a header
> Solomon> param, such as a cookie, can be just as RESTful as the
> Solomon> solutions you describe. You don't have to give
> Solomon> userid/password in every request.
>
> Your solution is stateful, and therefore not REST.
>
> --
> Cheers,
>
> Berend de Boer
I don't think I've replied to this thread yet?

> To: rest-discuss@yahoogroups.com
> From: johnzabroski@...
> Date: Fri, 19 Jun 2009 15:53:56 +0000
> Subject: [rest-discuss] Re: Google kills the question mark?
>
> Okay, I think I see Seb's and Bill's objection to what I am saying.
>
> I called # and ? the hugest hack in HTTP.
>
> They are trying to tell me # is opaque to HTTP. I get that.
>
> I shouldn't have said hugest hack in HTTP without qualifying it. Basically, in much the same way we hi-jack GET/POST in HTTP, people hi-jack ? and #. Bill and Seb are saying why should I care, it is not a theoretical limitation.
>
> I care because of discoverability, primarily by other user-agents. This is "open linking" and "deep linking" concepts from hypermedia theory.
>
> Really, it has nothing to do with HTTP, the but the hacks *in* HTTP. However, just as browsers only support GET/POST because that is all HTML qualifies, browsers only really support a limited networking stack because that is all HTML originally qualified for. I don't think any user agent that claims to support HTTP but whose primary media type is tightly coupled to the user agent's transfer agent is a clean HTTP user agent.
>
> Did I take my pepto bismal and get rid of the diarhea of the mouth???
> They're broken because they couple URIs to media representations. My
> stance fwiw, is not to design them into my systems.

How else would you represent a fragment identifier that is only a concern for the browser? The fact that the hash is overloaded for client-side persistence is only a problem because JavaScript cannot change the current URI in the address bar. How else would you represent a point in a document or thing? What's an alternative?

> There are systems that do that already and have attempted to do that,
> including the goal of "fixing the web" - they haven't been adopted even
> with massive industry backing. Distributed OO is a niche compared to the
> Web.

That, I agree with completely. And for why it failed one may want to look at HTTP-NG :)

Seb
Taking a step back... I know why I'd need one in a RESTful system (login form with more than just user/pass information, such as affiliate information; a link to forgot password functionality, a REST application that uses a third party REST login system such as OAuth). Vantu, why do you need a RESTful login page? -Solomon On Thu, Jun 18, 2009 at 11:15 PM, cule_barca <vantu.ituns@...> wrote: > > > Can I use REST for login page? If yes, would you please tell me how to do > it. I don't know how to use REST in this case ! > Please help me ! > > >
--- In rest-discuss@yahoogroups.com, Bill de hOra <bill@...> wrote: > > johnzabroski wrote: > > > > Fragment identifiers are broken as designed in the web architecture. As > > > we say in Ireland, I wouldn't start from there. > > > > Then where would you start from? Maybe I am missing a completely obvious > > practical or even theoretical alternative to this. I've said in the past > > that fragment identifiers are broken, because they are not > > object-oriented and do not lend themselves well to representing > > object-oriented hypermedia. I am not sure where you or anyone else > > stands on "fragment identifiers are broken". I don't know of any RFC or > > any IETF document that says this, anywhere. > > They're broken because they couple URIs to media representations. My > stance fwiw, is not to design them into my systems. Sorry, you provide commentary on a design position, but give me no examples. I am not sure what that looks like. You don't have any ID tags anywhere in your XML documents??? I think you are fooling me, or talking past me. Fragment identifiers are supposed to be independent of URI media type. I personally think full independence is a little crazy. That's tantamount to saying "ah, yes, we can run our enterprise's mission-critical data processing on a network database and don't need a SQL DBMS." There is a minimum sanity level in the design of any system. It is known in the engineering trades as "margin of safety". If your media type does not support fragment identifiers directly, then you can usually perform what compiler writers call a "worker/wrapper transformation". This is how Google Video brilliantly "just works". Google Video lets you fragment identify a particular second in a video you are watching, along with other options (I forget how to do this, though, since it is not exposed anywhere in the GUI.) It can do this because Google Video is not presenting you WMF or whatever. 
Instead, it wraps that content in a media player in a Flash container and wraps that Flash container in an HTML document, and the DOM model allows the user to manipulate the opaque media type via a DOM bridge. Some media types need this level of virtualization to "just work", and it is not tightly coupling anything to anything. It is simply good design. If you note, we've put in two extra degrees of freedom from the underlying media type (WMF) just to gain access to WMF's discrete time abilities. Now, one thing I dislike, and I think you probably do too, is that we use # in place of ?, mainly due to how much web browser's navigation services suck today. I'll talk more about this below - and explain my Platonic ideal. > > No, you have it wrong, in my humble opinion. Today, the user agent we > > all use, The Web Browser Model, is too tightly coupled to the media > > type. > > What I said was that the problem of understanding formats and objects > will be pushed up to application developers. This is what happens > already today with JSON (but not so much with Atom and hardly at all > with HTML). If I'm wrong, please explain how. My initial guess is you are "not even wrong". I don't understand how people come up with circular arguments like this one. Why on earth would application developers be responsible for building media type interpreters??? That just puts you back in the EDI world, and will bring about another round of dumb COBOL/VSAM-esque ideas like Netron Fusion frame-oriented programming. I will not comment on your JSON concerns, because I need a particular example. It is too aloof for me to wrap my head around. Could you illustrate by comparing JSON vs. Atom vs. HTML? I understand what all these are, but sometimes the differences people see in good design are remarkably subtle. There is a good chance you can simply see a subtlety I cannot without the help of your magnifying glass. 
> > The Web Browser is a program that has to know how to render the > > media formats it receives. It has to be way too smart. I am arguing for > > the dumbing down of this and returning to a more reasonable > > Object-Oriented approach where "you don't need a browser", where your > > code and data go together. > > There are systems that do that already and have attempted to do that, > including the goal of "fixing the web" - they haven't been adopted even > with massive industry backing. Distributed OO is a niche compared to the > Web. Sorry. Distributed OO could mean a lot of things. Could you please explain to me in what sense you are using it? I don't enjoy hearing a buzzword and "there are systems that do that already", because I can't figure out what that means. I much prefer examples in place of buzzwords. I am a firm believer that this sort of assumption -- that other technical people know what other technical people are talking about -- is why software is so complex. Here is my take on things... let's study the evolution of a user and his/her user agent (the legacy model of a web browser). This is a somewhat philosophical A Discourse by Three Drunkards on Web Browsers.... 1. User installs Operating System, or buys appliance with Operating System, or downloads a VM that is an OS Appliance. 2. They either have a web browser shipped with the OS, or they don't. If they don't, then they have to download it or install it. Downloading is tricky, because how do you bootstrap the process? Ah, at some point you need something to *lead* the bootstrap. We'll call that leader the browser. At this step, the first bootstrap on the user's machine ever, the browser could be a user or a user agent. 3. The browser is now automagically installed. The browser also automagically knows how to update itself from the Internet, using its own browser networking stack, which is abstracted away from the underlying OS. (This is the way FireFox works today. 
It gets the latest bits from an authoritative repository, and the next time you cycle FireFox, you get the version it downloaded/installed. You also get a page that says "You've just been upgraded.") 4. Your browser has a somewhat insidious but seemingly innocuous design flaw... it cannot automagically substitute rendering engines for HTML, the major media type on the Web. As a consequence, your mission-critical Enterprise web app starts behaving weird after some upgrade, because the upgrade included a change to CSS and now all of a sudden that weird CSS hack the webmaster put in is breaking everyone's dashboards. Bar graphs that were normally zeroes are now 100%s, and everyone knows something is horribly wrong, and right about now nobody gives a shit that "FireFox 2020" just passed the Acid6 test by the W3C... and the DotCOM CEO is about to commit Hudsucker Proxy-style boardroom meeting suicide. So, basically, to re-cap, if web browsers were more object-oriented, and supported "versionable objects", then we could stabilize such problems by saying we only support rendering engine COMPONENTs. This is the way .NET works. Your appmanifest.xml can be used to select which version of a DLL your application should run against. This is called virtualization and component-oriented software. The former buzzword is really sexy right now and the other is just "meh" to most people. If you look at what Microsoft is doing with Silverlight, they are basically trying to replace the Browser with something more object-oriented. For the most part, it is vastly superior, except for the fact that by default it is still hosted inside this monolithic, poorly architected notion of a Browser. The bad design of browsers is codified into the DHTML model of DOM. JavaScript's notion of a window is wrong. When you change the URL, using JavaScript, the browser should not automagically fetch the new URL. 
That's what is wrong with browsers today, that Silverlight and object-oriented stuff like it can't fix. The Web Browser today has no notion of interstitial concepts that good object-oriented design uses by default. The Browser's URI navigation bar is very direct, and not controllable by the application. Somehow, the Browser has this completely _jacked up_ idea that it knows more about how to dispatch a URI than the application itself. The Browser should really only provide the ability for the application to use HREF-style GOTO behavior as a HELPER function- not as a hardcoded non-modular mechanism. And if you don't see this hardcodedness and how it ties back into the original topic of ? and #, and why people even use # in the first place, well... it's because the browser's navigation bar uses the HREF GOTO model of navigation. The browser is really this phony hypermedia container wannabe, because nothing about the browser has ever been hypermedia-centric or object-oriented, and certainly not "Object-Oriented Hypermedia"-centric nirvana. That's why the browser is continuously taken by surprise with ideas like XMLHTTPRequest, because the browser is non-modular and people continuously hijack media types (like ActiveX) to provide correct, object-oriented features like XMLHTTPRequest! THINK ABOUT IT. Silverlight is basically a Virtual Machine-based web browser, done mostly right, but still having to deal with browser interop problems such as (a) the browser networking stack (b) the browser navigation model If my Silverlight/Moonlight example frightens you, just s/Silverlight/GWT and Google Gears/ and we're now talking about supposedly "do no evil" technology everybody loves right now. Except, GWT is fundamentally stupider than Silverlight, b/c it does not fix the fundamental problem that the browser is not object-oriented and does not use components. 
Duskis, On Jun 19, 2009, at 3:14 PM, Solomon Duskis wrote: > > > How can you engineer a better data-oriented REST conversation > between an automated REST client and an automated REST server? What do you mean by 'better'? IOW, what in your context is it that you want to be 'better'? > On the HTML side of REST, where there's a human involved, there are > "experts" in the field of "Information Architecture" who have the > responsibility to construct "a big picture" that assures that the > various types of system users have appropriate paths (and links) > through the system that support each user profile's needs within a > system. > > Has anyone used an "Information Architect" to develop a REST API? I do not think that there is much of a difference between human and non-human clients. A browser, for example, has a good deal of automatic behaviour that results from implementing the processing model of HTML (load images, load stylesheets, execute JavaScript, do page reloads based on <meta> tags, etc.) All that changes for the non-human case is that media types would specify richer application semantics (because there is no user involved to decide what this or that link means). If there is no human to understand <a href="/all-versions">Click me to see all versions</a> then all you need is to standardize something like <link rel="http://example.com/linkreks/all-versions" href="/all-versions">. Jan > > -Solomon > > >
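Jan's point about standardized link relations can be made concrete in a few lines: a machine client never reads the link text, it dispatches on the rel value and follows the href it finds. The relation URI and the document below are invented for illustration — this is a minimal sketch, not a prescribed client design.

```python
# Sketch: a non-human client selecting a hypermedia link by its
# (standardized) relation URI instead of by human-readable text.
# The rel value and document are hypothetical.
import xml.etree.ElementTree as ET

ALL_VERSIONS_REL = "http://example.com/linkreks/all-versions"

doc = ET.fromstring(
    '<entry>'
    '<link rel="http://example.com/linkreks/all-versions" href="/all-versions"/>'
    '<link rel="self" href="/thing/42"/>'
    '</entry>'
)

def find_link(root, rel):
    """Return the href of the first link with the given relation, or None."""
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

href = find_link(doc, ALL_VERSIONS_REL)
print(href)  # the client would GET this URI next
```

The client's hardcoded knowledge is the rel URI, not any URI structure — which is exactly the hypertext-driven behaviour being described.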
Solomon, I think cookies are definitely suspect for something to be classified as RESTful, at least based on the specific issues raised by Roy Fielding at http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_3_4_2. Having said that, I understand where you are coming from in principle. I would feel comfortable deploying a solution like the one you mentioned as a practical way to architect an otherwise RESTful system, though I am unlikely to call it fully RESTful, due to the specific issues Roy Fielding refers to in the section I pointed to. Dhananjay On Fri, Jun 19, 2009 at 11:34 PM, Solomon Duskis <sduskis@...> wrote: > > > Saving Resource state on the server side is RESTful, as long as it doesn't > bind a client to a single server. The REST constraint is that you can't > have "Stateful Communication" that binds a client to a SPECIFIC server. I > really don't see the difference between a random header/cookie value and a > more specific one like BASIC/other authentication. Isn't OAuth RESTful? > > As an example of RESTful resource state, couldn't you have a REST > system that saves some data to a database? > > -Solomon > > On Fri, Jun 19, 2009 at 1:46 PM, Berend de Boer <berend@...> wrote: > >> >>>>> "Solomon" == Solomon Duskis <sduskis@...> writes: >> >> Solomon> Jochen, I humbly disagree. A token passed by a header >> Solomon> param, such as a cookie, can be just as RESTful as the >> Solomon> solutions you describe. You don't have to give >> Solomon> userid/password in every request. >> >> Your solution is stateful, and therefore not REST. >> >> -- >> Cheers, >> >> Berend de Boer >> > > > -- -------------------------------------------------------- blog: http://blog.dhananjaynene.com twitter: http://twitter.com/dnene
There's too much content to reply to inline, mixing ideas and rants, so I'll make a fresh start instead with the points that seem relevant to this conversation.

The fact that the hash is used by Google Video to fast-forward a video hosted in Flash to a certain point in time is perfectly within the constraints defined by the layering between media type and protocol, as you rightly point out. The hash is used by the media type in whatever way it sees fit. In HTML used in composite scenarios, it is often used as both a storage and a navigation clue. Nothing, absolutely nothing, prevents the owner of the media type for WMV or Flash video from defining that the hash fragment is to be used to denote a time within the video. And indeed, if that were the case, the Flash content could simply delegate the navigation hash value to the secondary media type without resorting to scripting. As such, I really do not understand why the query part of a URI, which is a server concern, is being compared to the hash part, which is intended for use by the media type. This is the foundation of the layered approach in WebArch.

I'll propose that there is confusion in this discussion between the hash fragment used as intra-media-type navigation (as in Google Video, XML identifiers in XML, etc.), the hash fragment used as local storage (due to the lack of local persistence supported across browsers), and the hash fragment used as additional navigation information (due to browsers' inability to update the address bar upon Ajax calls). Each can be discussed independently, but none of them has any impact on the separation between the media-type hash fragment and the query string.

Now, on the subject of Silverlight (or Flash, for that matter): they are fundamentally mobile code, but opaque to intermediaries. Look at how well Google can crawl Flash movies to see the kind of issues that arise when you start doing mobile code.
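The layering being discussed is visible in the URI syntax itself: the query travels to the server as part of the request, while the fragment is stripped before the request is made and handed to the client-side media-type handler. A minimal sketch (the video URI below is made up):

```python
# Sketch: query vs. fragment handling. The query is a server concern;
# the fragment never goes on the wire and is interpreted by the client
# against the retrieved representation's media type.
from urllib.parse import urlsplit

uri = "http://video.example.com/watch?docid=123#t=95"
parts = urlsplit(uri)

print(parts.query)     # 'docid=123' -> sent to the server
print(parts.fragment)  # 't=95'      -> client-side, media-type concern

# What actually appears in the HTTP request line:
request_target = parts.path + ("?" + parts.query if parts.query else "")
print(request_target)  # '/watch?docid=123' -- no fragment
```

This is why the "#t=95" trick works without any server cooperation: the server only ever sees "/watch?docid=123".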
The notion that object-orientation would solve a problem (and I'm still *very* unclear as to what the problem under discussion is) is quite fallacious. Just as you need a browser understanding DOM2 and HTML4 to process a document, you will require version 2 of the Silverlight runtime to run your package. What you're arguing here is that a specific document could enforce a specific rendering-engine "version" and all the problems in the world would be solved. I believe that idea has been tried, and it failed. Remember the "only on IE" buttons 15 years ago? I do. It doesn't work. Unless you ship everything side by side. Like IE7 mode in IE8, I suppose?

A declarative choice of dependencies and runtimes (a strong version binding) *has nothing to do with OO*. At all. Versioning existed before and will exist after OO. It's a discussion that has nothing to do with either WebArch or REST.

But what is completely relevant to REST is that the agent is only responsible for manipulating resource representations: displaying, modifying, or sending them back. The workflow picture is hidden away from the client for good reasons that have been documented on this list and on the web ad nauseam. And the only way to achieve the same attributes in an architecture based on a Silverlight agent (or any other HTTP agent, for that matter) is to end up redesigning the browser on a lesser-used media type with much higher parsing requirements. I don't see a mono-vendor (pun intended) format mixing document and behavior, developed in a closed fashion and not a particularly great HTTP citizen (WCF duplex is a prime example), being fundamentally better than what WebArch is today.

I'd be glad to continue that conversation if you can highlight exactly what you think is wrong in the web architecture.
As long as you don't mind debating with a law school uni drop-out that didn't chair a board at microsoft or got employed by sun, but does believe that there is more to open architectures than contract and competitive issues. Seb > To: rest-discuss@yahoogroups.com > From: johnzabroski@yahoo.com > Date: Sun, 21 Jun 2009 17:07:56 +0000 > Subject: [rest-discuss] Re: Google kills the question mark? > > --- In rest-discuss@...m, Bill de hOra <bill@...> wrote: > > > > johnzabroski wrote: > > > > > > Fragment identifiers are broken as designed in the web architecture. As > > > > we say in Ireland, I wouldn't start from there. > > > > > > Then where would you start from? Maybe I am missing a completely obvious > > > practical or even theoretical alternative to this. I've said in the past > > > that fragment identifiers are broken, because they are not > > > object-oriented and do not lend themselves well to representing > > > object-oriented hypermedia. I am not sure where you or anyone else > > > stands on "fragment identifiers are broken". I don't know of any RFC or > > > any IETF document that says this, anywhere. > > > > They're broken because they couple URIs to media representations. My > > stance fwiw, is not to design them into my systems. > > > Sorry, you provide commentary on a design position, but give me no examples. I am not sure what that looks like. You don't have any ID tags anywhere in your XML documents??? I think you are fooling me, or talking past me. > > Fragment identifiers are supposed to be independent of URI media type. I personally think full independence is a little crazy. That's tantamount to saying "ah, yes, we can run our enterprise's mission-critical data processing on a network database and don't need a SQL DBMS." There is a minimum sanity level in the design of any system. It is known in the engineering trades as "margin of safety". 
> > If your media type does not support fragment identifiers directly, then you can usually perform what compiler writers call a "worker/wrapper transformation". This is how Google Video brilliantly "just works". Google Video lets you fragment identify a particular second in a video you are watching, along with other options (I forget how to do this, though, since it is not exposed anywhere in the GUI.) It can do this because Google Video is not presenting you WMF or whatever. Instead, it wraps that content in a media player in a Flash container and wraps that Flash container in an HTML document, and the DOM model allows the user to manipulate the opaque media type via a DOM bridge. > > Some media types need this level of virtualization to "just work", and it is not tightly coupling anything to anything. It is simply good design. If you note, we've put in two extra degrees of freedom from the underlying media type (WMF) just to gain access to WMF's discrete time abilities. > > Now, one thing I dislike, and I think you probably do too, is that we use # in place of ?, mainly due to how much web browser's navigation services suck today. I'll talk more about this below - and explain my Platonic ideal. > > > > No, you have it wrong, in my humble opinion. Today, the user agent we > > > all use, The Web Browser Model, is too tightly coupled to the media > > > type. > > > > What I said was that the problem of understanding formats and objects > > will be pushed up to application developers. This is what happens > > already today with JSON (but not so much with Atom and hardly at all > > with HTML). If I'm wrong, please explain how. > > > My initial guess is you are "not even wrong". I don't understand how people come up with circular arguments like this one. Why on earth would application developers be responsible for building media type interpreters??? 
That just puts you back in the EDI world, and will bring about another round of dumb COBOL/VSAM-esque ideas like Netron Fusion frame-oriented programming. > > I will not comment on your JSON concerns, because I need a particular example. It is too aloof for me to wrap my head around. Could you illustrate by comparing JSON vs. Atom vs. HTML? I understand what all these are, but sometimes the differences people see in good design are remarkably subtle. There is a good chance you can simply see a subtlety I cannot without the help of your magnifying glass. > > > > > The Web Browser is a program that has to know how to render the > > > media formats it receives. It has to be way too smart. I am arguing for > > > the dumbing down of this and returning to a more reasonable > > > Object-Oriented approach where "you don't need a browser", where your > > > code and data go together. > > > > There are systems that do that already and have attempted to do that, > > including the goal of "fixing the web" - they haven't been adopted even > > with massive industry backing. Distributed OO is a niche compared to the > > Web. > > > Sorry. Distributed OO could mean a lot of things. Could you please explain to me in what sense you are using it? I don't enjoy hearing a buzzword and "there are systems that do that already", because I can't figure out what that means. I much prefer examples in place of buzzwords. I am a firm believer that this sort of assumption -- that other technical people know what other technical people are talking about -- is why software is so complex. > > Here is my take on things... let's study the evolution of a user and his/her user agent (the legacy model of a web browser). This is a somewhat philosophical A Discourse by Three Drunkards on Web Browsers.... > > 1. User installs Operating System, or buys appliance with Operating System, or downloads a VM that is an OS Appliance. > > 2. They either have a web browser shipped with the OS, or they don't. 
If they don't, then they have to download it or install it. Downloading is tricky, because how do you bootstrap the process? Ah, at some point you need something to *lead* the bootstrap. We'll call that leader the browser. At this step, the first bootstrap on the user's machine ever, the browser could be a user or a user agent. > > 3. The browser is now automagically installed. The browser also automagically knows how to update itself from the Internet, using its own browser networking stack, which is abstracted away from the underlying OS. (This is the way FireFox works today. It gets the latest bits from an authoritative repository, and the next time you cycle FireFox, you get the version it downloaded/installed. You also get a page that says "You've just been upgraded.") > > 4. Your browser has a somewhat insidious but seemingly innocuous design flaw... it cannot automagically substitute rendering engines for HTML, the major media type on the Web. As a consequence, your mission-critical Enterprise web app starts behaving weird after some upgrade, because the upgrade included a change to CSS and now all of a sudden that weird CSS hack the webmaster put in is breaking everyone's dashboards. Bar graphs that were normally zeroes are now 100%s, and everyone knows something is horribly wrong, and right about now nobody gives a shit that "FireFox 2020" just passed the Acid6 test by the W3C... and the DotCOM CEO is about to commit Hudsucker Proxy-style boardroom meeting suicide. > > So, basically, to re-cap, if web browsers were more object-oriented, and supported "versionable objects", then we could stabilize such problems by saying we only support rendering engine COMPONENTs. This is the way .NET works. Your appmanifest.xml can be used to select which version of a DLL your application should run against. This is called virtualization and component-oriented software. The former buzzword is really sexy right now and the other is just "meh" to most people. 
> > If you look at what Microsoft is doing with Silverlight, they are basically trying to replace the Browser with something more object-oriented. For the most part, it is vastly superior, except for the fact that by default it is still hosted inside this monolithic, poorly architected notion of a Browser. The bad design of browsers is codified into the DHTML model of DOM. JavaScript's notion of a window is wrong. When you change the URL, using JavaScript, the browser should not automagically fetch the new URL. That's what is wrong with browsers today, that Silverlight and object-oriented stuff like it can't fix. The Web Browser today has no notion of interstitial concepts that good object-oriented design uses by default. The Browser's URI navigation bar is very direct, and not controllable by the application. Somehow, the Browser has this completely _jacked up_ idea that it knows more about how to dispatch a URI than the application itself. The Browser should really only provide the ability for the application to use HREF-style GOTO behavior as a HELPER function- not as a hardcoded non-modular mechanism. And if you don't see this hardcodedness and how it ties back into the original topic of ? and #, and why people even use # in the first place, well... it's because the browser's navigation bar uses the HREF GOTO model of navigation. The browser is really this phony hypermedia container wannabe, because nothing about the browser has ever been hypermedia-centric or object-oriented, and certainly not "Object-Oriented Hypermedia"-centric nirvana. That's why the browser is continuously taken by surprise with ideas like XMLHTTPRequest, because the browser is non-modular and people continuously hijack media types (like ActiveX) to provide correct, object-oriented features like XMLHTTPRequest! THINK ABOUT IT. 
> > Silverlight is basically a Virtual Machine-based web browser, done mostly right, but still having to deal with browser interop problems such as (a) the browser networking stack (b) the browser navigation model > > If my Silverlight/Moonlight example frightens you, just s/Silverlight/GWT and Google Gears/ and we're now talking about supposedly "do no evil" technology everybody loves right now. Except, GWT is fundamentally stupider than Silverlight, b/c it does not fix the fundamental problem that the browser is not object-oriented and does not use components. The legacy web browser is not a good model for mission-critical applications, regardless of what Google wants you to believe to pump-up their AdSense/AdWords revenue. Gears cannot fix this. Ever. (Gears is seriously cool, by the way, and I think Silverlight+++Gears is even cooler! But most people talk about Microsoft/Google in an unholy war sense, so they could never dream of Silverlight and Gears being used in conjunction to fix the browser networking stack problems and browser navigation model problems. However, Gears CANNOT fix the component problem. Again, I'll just point to Joel Spolsky's Martian Headsets blog entry for the best explanation of how messed up this thing is.) > > > > > > The idea that a browser should know how to > > > render HTML is backwards. Notice that I am saying the Browser should > > > simply be a gateway, and that inside the browser we can embed > > > application-specific gateways. > > > > Gateways already exist in the form of CGI. They're called gateways for > > reason, as they're not part of the web architecture. > > > Are you snowing me? Your first sentence is spot-on, but your statement that "they're not part of the web architecture" overlooks how page-centric web applications operate today. Look at the DISGUSTING hacks JBoss does with continuations to solve the fact that the browser has no GOOD way to allow the application to define gateways. > > In theory, you are right. 
The Web Architecture doesn't know about gateways. > > In practice, you are "not even wrong", by being closed-minded to a set of design criteria you prefer. Understand that the way web browsers work today is just a _talking point_ for what they should look like in the future. The Web -- and The Web Browser Model -- did not have any concept of XMLHTTPRequest or anything like that. > > > > > I think of the Browser of the future as a software configuration > > > management tool, with ability to get out of the way of user > > > accessibility issues and the like. The Browser should give you a > > > flexible, scalable, distributed model for accessing content like > > > object-oriented programs. > > > > I think they already do. Was there a specific "not-browser" application > > you had in mind? > > > > > > > > You could certainly argue that HTML/XHTML decouples interface from > > > implementation, but that is a ridiculous argument -- we all know from > > > just looking at the gold standard for rendering pages, the Acid Test, > > > that nobody can even correctly implement these interfaces. > > > > That's ok, I didn't argue that. > > > > I think you'll find that most of the wildly successful formats are not > > consistently implemented. There's nothing exceptional about HTML, except > > that "wildly successful" is an understatement. > > > > > > > So it makes > > > more sense to talk about shipping a working versionable object, a > > > component, that has code+data. > > > > Why does it make more sense? Code on demand is part of REST, but the > > history of distributed objects is dismal. SOAP has failed for Web scale > > delivery. Even JavaScript gets reduced down to JSON. > > > > If there's something beyond the syntax related stuff we do today, I > > suspect it's likely to be data that has more precise semantics (eg > > KIF/RDF/OWL) and in the interim, formats like atom, microformats and > > rdfa, maybe json based vocabularies. 
> > > > Developers like their objects so I guess people will keep trying to go > > down that path, and you seem happy to do so. Otherwise this debate is > > years old, and I've seen nothing in over a decade that suggests we'll > > move off the current web to something approaching OO, not matter how > > stupid it seems to be each generation of programmers that discover the > > web and immediately wonder how to fix it. > > > I don't know what KIF is. I don't know what RDFA is. I know the others. I think the thing you are missing here is that I am not against any of these ideas, and actually see my vision of the browser as complementary to Tim Berners-Lee's vision of the Semantic Web and even Google's vision of the semantic web. This is hard for me to confess, because my greatest weakness is being so visionary and closed-minded. Yet, ya know, these semantic people know what they're talking about. I am not trying to "fix" anything there. I'm just waiting for them to bring me results so I can take advantage of it. > > As an aside, more pragmatically, you can't OWL everything and you don't want to. This is the same reason why Microsoft doesn't make .NET more modular. OWL means all your content on the web is totally modular, fully exposed semantics. For a lot of companies, this means that they are inviting competition. For example, where I work a company currently complements our product by providing rich hypermedia like videos to our clients, while we provide highly structured and organized content that is optimized for stereotyped workflows. If all our content was "out there", then the rich hypermedia competitor could just siphon us. At that point, they can go to our clients and say, "we have all the data you care about, and will charge you half the price." Basically, they would be copying our specification (the hardest thing in software to build). 
> > > > > I guess I am reading your sentence as "Unfortunately this is a detail we > > > do not talk about, even though it needs improvement." Perhaps not fair, > > > but where else are you leading me? It sounds like what I need to hear > > > then is "stop going down this direction, you need to re-crystallize, > > > here is what you need to go and where you need to go." > > > > I would say distributed objects in the way you're talking about them are > > a dead end, yes. > > > > I never used the phrase "distributed objects", anywhere. You put that word in my mouth. I am spitting it out. I don't even know what the heck you mean by "distributed objects". You are really being basically the ultimate buzzkill -- using buzzwords with connotative meanings and slapping them onto what I am discussing. > > This is why we have a thread about re-acronymizing everything under the sun. And exactly why that thread is one of the longest discussions on here in a long time. > > I think I am personally too closed-minded and visionary to be brought down by anyone who wants to buzzkill me. I know exactly what I want, and despite being an average programmer, I am a very great API designer. Amazingly, I didn't go to Stanford and I didn't get hired by Sun to create a super-complicated J2EE EJB 1.0 specification or get hired by Microsoft to chair a steering committee for ATL or DCOM. I know, my credentials suck! > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > _________________________________________________________________ Share your photos with Windows Live Photos – Free. http://clk.atdmt.com/UKM/go/134665338/direct/01/
* Berend de Boer <berend@...> [2009-06-19 19:50]: > Your solution is stateful, and therefore not REST. REST is all about state. It’s right in the name. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Aristotle, Not true. There's a distinction between application state and conversational state. The state referred to in the name is application state. REST emphasizes an avoidance of conversational state while emphatically promoting transfer of application state through resource representations. Dhananjay On Mon, Jun 22, 2009 at 8:20 PM, Aristotle Pagaltzis<pagaltzis@gmx.de> wrote: > > > * Berend de Boer <berend@...> [2009-06-19 19:50]: > >> Your solution is stateful, and therefore not REST. > > REST is all about state. It’s right in the name. > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> > -- -------------------------------------------------------- blog: http://blog.dhananjaynene.com twitter: http://twitter.com/dnene
I have a question: if a web app has login/logout, it has to use a SESSION, which I think makes it stateful. Is it still a RESTful app?
I guess there are a few questions to be answered:

- Are we sure that chocolate ice cream was ever produced? Or is it just an entry in the recipe book?
- Is chocolate ice cream so much better than vanilla? Is it easier to create? Is it healthier to eat? Is it worth going to extra lengths to have the exact amount of chocolate in the ice cream?
- What to do with the tons and tons of vanilla ice cream that have already been produced?

Andrei --- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote: > > To save myself having to repeat my justification for clarifying ReST, I've > posted a tongue-in-cheek entry at http://serialseb.blogspot.com/2009/06/fighting-for-rest-or-tale-of-ice-cream.html > > > > Hopefully Roy won't mind me portraying him as an ice-cream maker, but it's > the way I explain it to people and they seem to get it much faster than when > I just point to the thesis. >
I don't see what the big deal is. If your code isn't assuming conversational state with the server (the business logic) who cares if you get a token that you have to carry around with you? Isn't the security protocol orthogonal to the business problem? For example, the Digest protocol is allowed to keep some session-like state and remember nonces and request counters in-between requests to avoid replay attacks. And isn't client cert auth connection oriented? (Apologies if I'm incorrect on that one). Solomon mentioned OAuth. Other SSO solutions will have the same "session" issues. Dhananjay Nene wrote: > Aristotle, > > Not true. There's a distinction between application state and > conversational state. The state referred to in the name is application > state. REST emphasizes an avoidance of conversational state while > emphatically promoting transfer of application state through resource > representations. > > Dhananjay > > On Mon, Jun 22, 2009 at 8:20 PM, Aristotle Pagaltzis<pagaltzis@...> wrote: >> >> * Berend de Boer <berend@...> [2009-06-19 19:50]: >> >>> Your solution is stateful, and therefore not REST. >> REST is all about state. It’s right in the name. >> >> Regards, >> -- >> Aristotle Pagaltzis // <http://plasmasturm.org/> >> > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill, as I've already indicated earlier in the thread, there is no issue in my mind with implementing a solution using tokens or sessions or the like (to the extent that they only help to identify the user and carry no additional conversational state). The only big/small deal is whether one can classify it as RESTful. As an academic argument, I tend to believe such a solution is not RESTful. Dhananjay On Tue, Jun 23, 2009 at 2:52 AM, Bill Burke<bburke@...> wrote: > I don't see what the big deal is. If your code isn't assuming > conversational state with the server (the business logic) who cares if you > get a token that you have to carry around with you? Isn't the security > protocol orthogonal to the business problem? > > For example, the Digest protocol is allowed to keep some session-like state > and remember nonces and request counters in-between requests to avoid replay > attacks. And isn't client cert auth connection oriented? (Apologies if I'm > incorrect on that one). Solomon mentioned OAuth. Other SSO solutions will > have the same "session" issues. > > > > Dhananjay Nene wrote: >> >> Aristotle, >> >> Not true. There's a distinction between application state and >> conversational state. The state referred to in the name is application >> state. REST emphasizes an avoidance of conversational state while >> emphatically promoting transfer of application state through resource >> representations. >> >> Dhananjay >> >> On Mon, Jun 22, 2009 at 8:20 PM, Aristotle Pagaltzis<pagaltzis@...> >> wrote: >>> >>> * Berend de Boer <berend@...> [2009-06-19 19:50]: >>> >>>> Your solution is stateful, and therefore not REST. >>> >>> REST is all about state. It’s right in the name. >>> >>> Regards, >>> -- >>> Aristotle Pagaltzis // <http://plasmasturm.org/> >>> >> >> >> > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > -- -------------------------------------------------------- blog: http://blog.dhananjaynene.com twitter: http://twitter.com/dnene
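The distinction being debated here can be sketched in a few lines: a token on every request keeps the messages self-descriptive, and the server keeps no memory of the conversation between requests, so the client is never pinned to a specific server. All names below are invented; this illustrates the constraint, not a security recommendation.

```python
# Sketch: per-request authentication without conversational state.
# The token store stands in for whatever issued the token at login;
# crucially, no handler remembers anything between requests.
TOKENS = {"abc123": "solomon"}  # token -> user (hypothetical)

def handle(request):
    """Handle each request in isolation: authenticate from the message
    itself, never from what happened in a previous request."""
    user = TOKENS.get(request.get("Authorization"))
    if user is None:
        return 401, "unauthorized"
    return 200, f"hello {user}"

# Any server replica with access to the token store gives the same
# answer, which is the property the "no stateful communication that
# binds a client to a SPECIFIC server" argument is after.
print(handle({"Authorization": "abc123"}))  # (200, 'hello solomon')
print(handle({"Authorization": "nope"}))    # (401, 'unauthorized')
```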
Chocolate syrup, of course... Where I live we also have the Choc-Top: http://en.wikipedia.org/wiki/Choc-Top Analyse that... On 23/06/2009, at 5:31 AM, Andrei Filimonov wrote: > - What to do with tons and tons of vanilla ice cream that had been > produced? -- Mark Nottingham http://www.mnot.net/
On 19 Jun 2009, at 05:15, cule_barca wrote: > Can I use REST for login page? If yes, would you please tell me how > to do it. I don't know how to use REST in this case ! > Please help me ! Yes, we even have a protocol called foaf+ssl to do that. There is a wiki page on it here with links to all the implementations http://esw.w3.org/topic/foaf+ssl And we recently published a paper at the ESWC entitled "FOAF+SSL: RESTful Authentication for the Social Web" http://bblfish.net/tmp/2009/05/spot2009_submission_15.pdf So one can do RESTful authentication using current browsers with very little work. The user interfaces we have for the moment don't make it quite ready for the general public, but that's just a matter of a few good people working on it for a little while. Henry
Thanks for the lecture, but I imagine Aristotle knows that. Bill Dhananjay Nene wrote: > Aristotle, > > Not true. There's a distinction between application state and > conversational state. The state referred to in the name is application > state. REST emphasizes an avoidance of conversational state while > emphatically promoting transfer of application state through resource > representations. > > Dhananjay > > On Mon, Jun 22, 2009 at 8:20 PM, Aristotle Pagaltzis<pagaltzis@...> wrote: >> >> * Berend de Boer <berend@...> [2009-06-19 19:50]: >> >>> Your solution is stateful, and therefore not REST. >> REST is all about state. It's right in the name. >> >> Regards, >> -- >> Aristotle Pagaltzis // <http://plasmasturm.org/> >> > > >
Let's say I have a REST API to create a GROUP and memberships for groups, just like the OS UserGroup and User(s).
Let's say that I want to create/update/delete/read groups.
Could you comment on the RESTfulness of the API below?
Group has a NumericID and Name; User has a NumericID, Name, and Password.
1. Create: PUT ...../Groups/GroupName (no payload in the body)
2. Rename: POST ..../Groups/ID <GroupName>NewName</GroupName>
3. Get a group: GET ..../Groups/GroupName or GET ..../Groups/GroupID
4. List groups: GET ..../Groups returns <Groups><Group><ID><NAME>////
5. Get users of a group: GET ..../Groups/GroupName/Users?
6. Add user to a group: POST ..../Groups/GroupName/Users
7. Delete user from a group: what would this look like?
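One hedged reading of this API (the status codes and the answer to item 7 are illustrative guesses, not from the post) is to address the membership itself, i.e. DELETE ..../Groups/GroupName/Users/UserID. A minimal in-memory sketch:

```python
groups = {}  # group name -> set of member user names (toy model, no IDs/passwords)

def put_group(name):
    """1. PUT /Groups/{name}: idempotent create, empty body."""
    created = name not in groups
    groups.setdefault(name, set())
    return 201 if created else 204  # status codes are an assumption

def post_add_user(group, user):
    """6. POST /Groups/{group}/Users: append a member."""
    groups[group].add(user)
    return 201

def delete_user(group, user):
    """7. DELETE /Groups/{group}/Users/{user}: one natural shape for removal."""
    groups[group].discard(user)
    return 204

assert put_group("admins") == 201
assert put_group("admins") == 204       # repeating the PUT changes nothing
post_add_user("admins", "alice")
assert delete_user("admins", "alice") == 204
assert groups["admins"] == set()
```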
Afternoon all, I have a need for a meta-model which allows for association of non-hypertext representations of resources (e.g. binaries, images, etc.). I had originally proposed Atom but this wasn't well accepted by the XML xenophobes. It seems the Link: header was intended to accomplish just what I need in the original HTTP RFCs. Unfortunately though HTML was dominant and Link: wasn't implemented (nor implementable), so it was dropped only to be recently revived by @mnot in draft-nottingham-http-link-header. Per my request for clarification to apps-discuss below (which failed to get any bites - perhaps tl;dr), I'd like to find a sensible mechanism for setting the Link: headers, ideally without relying on new HTTP verbs (LINK and UNLINK were originally specified but have also been dropped). I'm figuring that just sending Link: header(s) in PUTs and POSTs will cleanly accomplish most of what I need, but things get hairy when you start thinking about updating/deleting individual links. Sam ---------- Forwarded message ---------- From: Sam Johnston <samj@...> Date: Mon, Jun 15, 2009 at 5:27 AM Subject: Clarifications on Web Linking with HTTP To: apps-discuss@... Morning all, The HTTP Link: header enables web linking without hypermedia - that is, arbitrary content types can be linked (with attributes) out-of-band rather than within the payload (e.g. HTML) itself. This enables the use of HTTP as a meta-model (at least for individual resources) without having to resort to Atom, which is potentially great news for RESTful APIs. I am currently working on a real world application of Mark's Web Linking I-D[1] (OGF's Open Cloud Computing Interface - http://www.occi-wg.org/) and require clarification on a few points (which may want to end up in the I-D). - First and foremost, in the absence of the LINK and UNLINK verbs originally defined in RFC 2068[2] but specifically omitted from RFC 2616[3], what is the preferred mechanism for manipulating these links via HTTP?
It appears that this header is intended for GET requests only, but presumably specifying it in POST and PUT requests would be one option that avoids the creation of [not so] "new" verbs (bearing in mind that short of accepting Link: headers from empty POST/PUT requests, it would be necessary to GET and then PUT the entire payload to update links - twice if they were reciprocal). While there was an attempt a dozen years ago to better define the relevant HTTP verbs[4], it strikes me as more sensible to follow the example of the Set-Cookie: header for this rather than WebDAV's example of creating new verbs (even if we've seen them before) but you guys are the experts. - Another concern with an arbitrary number of links is that arbitrary string length limits may be imposed by user agents, as they are with URLs. This should not be a problem where there is one link per header, but it may be where the headers are concatenated as described in RFC 2616[5]. This is a double edged sword however as some user agents have only recently added support for multiple headers of the same type[6] and it remains a problem for some[7]. - The introduction of a link relation registry at IANA makes a lot of sense, though it would be nice if these were common for HTTP, HTML, Atom and other places links appear. Perhaps namespaces (e.g. "atom:service" or "occi.state.restart") would be useful here so as to enable significantly more future extensibility. - It seems useful to be able to (optionally) specify the type (as in content type rather than relation type) of a given link, as is the case for Atom. That said, this also seems somewhat redundant with HTTP Content Negotiation, but implementations that choose to support the "type" attribute may gain some performance and usability advantages from doing so. 
The matter of whether this information belongs in URIs (and if so, which side of the '?') or in HTTP headers (or both) is still not clear to me as there are pros and cons of each - perhaps the relation type is more suitable (or both?) as it's often not possible to unambiguously determine the relation type from the content type (consider modeling people where both fingerprint and portrait representations may exist in image/png format). To be more specific about the requirements, the API models cloud infrastructure services (IaaS) and has three main nouns (compute, network, storage) which need to be associated with each other with attributes on the links (e.g. a compute resource having a network resource associated with a local identifier attribute of "eth0"). Using Atom as the meta-model worked fine (as evidenced by GData) but it now seems possible - at least for individual resources - with HTTP. Cheers, Sam 1. http://tools.ietf.org/html/draft-nottingham-http-link-header-05 2. http://tools.ietf.org/html/rfc2068#section-19.6.1 3. http://tools.ietf.org/html/rfc2616#section-19.6.3 4. http://ftp.ics.uci.edu/pub/ietf/http/draft-pritchard-http-links-00.txt 5. http://tools.ietf.org/html/rfc2616#section-4.2 6. http://www.mail-archive.com/bug-wget@.../msg00076.html 7. http://bugs.python.org/issue1660009
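For reference, the Link: header serialization the I-D describes can be sketched as below; the target URIs, relation names, and attributes are made up for illustration (e.g. a compute resource linking its "eth0" network):

```python
def format_link_header(links):
    """Serialize (target, rel, attrs) triples in the Link: header style of
    draft-nottingham-http-link-header, e.g.
        Link: </network/eth0>; rel="related"; title="eth0"
    Multiple link-values are joined with commas, parameters with semicolons."""
    parts = []
    for target, rel, attrs in links:
        params = [f'rel="{rel}"'] + [f'{k}="{v}"' for k, v in attrs.items()]
        parts.append(f"<{target}>; " + "; ".join(params))
    return ", ".join(parts)

# A compute resource advertising its associated network interface:
hdr = format_link_header([("/network/eth0", "related", {"title": "eth0"})])
assert hdr == '</network/eth0>; rel="related"; title="eth0"'
```

Sending such a header on a PUT or POST (as proposed above) is then just a matter of attaching it to the request; updating or deleting an individual link would still require resending the full set, which is exactly the hairy part noted earlier.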
response below. On Sun, Jun 21, 2009 at 4:00 PM, Jan Algermissen <algermissen1971@...> wrote: > Duskis, > > On Jun 19, 2009, at 3:14 PM, Solomon Duskis wrote: > > >> >> How can you engineer a better data-oriented REST conversation between an >> automated REST client and an automated REST server? >> > > What do you mean by 'better'? IOW, what in your context is it that you want > to be 'better'? Better as in more RESTful in terms of completely satisfying the hypertext constraints. Do you remember Roy Fielding's frustration last year about the hypermedia constraint? IMHO, one way to get there is to think about "conversations" rather than "resources." The hypertext constraint is all about getting from a starting point to your destination through hypertext alone. It takes more planning and engineering to do that. Who are the clients of your API, and who in turn are their clients? > > On the HTML side of REST, where there's a human involved, there are >> "experts" in the field of "Information Architecture" who have the >> responsibility to construct "a big picture" that assures that the various >> types of system users have appropriate paths (and links) through the system >> that support each user profile's needs within a system. >> >> Has anyone used an "Information Architect" to develop a REST API? >> > > I do not think that there is much of a difference between human and > non-human clients. A browser for example has quite some automatic behaviour > that results from implementing the processing model of HTML (load images, > load stylesheets, execute JavaScript, do page reloads based on <meta> tags, > etc.) All that changes for the non-human case is that media types would > specify richer application semantics (because there is no user involved to > decide what this or that link means).
> > If there is no human to understand <a href="/all-versions">Click me to see > all versions</a> then all you need is to standardize something like <link > rel="http://example.com/linkreks/all-versions" href="/all-versions">. > > Jan > This last sentence is one of many crucial differences between the design for human-oriented resources and the design for machine-oriented resources. The "interface" for humans is free text to be understood at "run time". The "interface" for computers must come through keywords that were previously agreed upon at "design time." -Solomon
Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my
colleagues. An interesting question came up:
Let's say you have a distributed cache you want to manage through a
RESTful interface. One operation on the cache is clearing or flushing
it. The interesting thing about flushing is that the act of flushing
changes the state of the cache, but "flushing" isn't a state of the
cache itself. It seems to be a pure operation. How do you model
something like this in REST? Is it correct to do:
PUT /cache/flusher (PUT because flushing is idempotent)
Or maybe even better:
GET /cache
returns a document like
<cache>
<link rel="FLUSH" href="/cache/flusher"/>
</cache>
Or maybe this is better:
DELETE /cache/data
Maybe I just answered my own question :)
Thanks for listening,
Bill
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Bill: If you want a record of the results of the operation (possibly for audit reasons) and/or want to be able to send additional details, use a POST pattern: # REQUEST POST /cache-collection/ HTTP/1.1 .. optional body w/ details .. #RESPONSE HTTP/1.1 201 Created Location: /cache-collection/123 If you don't need a stored record, but still want to see a response use a PUT pattern: # REQUEST PUT /cache-collection/ HTTP/1.1 .. optional body w/ details .. #RESPONSE HTTP/1.1 200 OK Content-Length:xxx ... results body ... When you don't need to see any results just use the DELETE pattern: # REQUEST DELETE /cache-collection/ HTTP/1.1 #RESPONSE HTTP/1.1 204 No Content One other variation on this decision is whether the response versions will take some time to build. In that case, using 202 Accepted is a possibility and that limits you to using POST, not PUT. mca http://amundsen.com/blog/ On Thu, Jun 25, 2009 at 09:41, Bill Burke <bburke@...> wrote: > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my > colleagues. An interesting question came up: > > Let's say you have a distributed cache you want to manage through a > RESTful interface. One operation on the cache is clearing or flushing > it. The interesting thing about flushing is that the act of flushing > changes the state of the cache, but "flushing" isn't a state of the > cache itself. It seems to be a pure operation. How do you model > something like this in REST? Is it correct to do: > > PUT /cache/flusher (PUT because flushing is idempotent) > > Or maybe even better: > > GET /cache > > returns a document like > > <cache> > <link rel="FLUSH" href="/cache/flusher"/> > </cache> > > > Or maybe this is better: > > DELETE /cache/data > > Maybe I just answered my own question :) > > Thanks for listening, > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > ------------------------------------ > > Yahoo! Groups Links > > > >
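The three patterns above can be sketched against a toy in-memory cache; the status codes follow the examples in the post, everything else (function names, the log) is illustrative:

```python
cache = {"a": 1, "b": 2}
flush_log = []  # stored records of flush operations, for the POST pattern

def post_flush(details=None):
    """POST pattern: flush, store a record (e.g. for audit), report its location."""
    cache.clear()
    flush_log.append(details)
    return 201, f"/cache-collection/{len(flush_log)}"

def put_flush():
    """PUT pattern: flush and return a results body; nothing is stored."""
    n = len(cache)
    cache.clear()
    return 200, f"flushed {n} entries"

def delete_flush():
    """DELETE pattern: flush, no results wanted -- 204 No Content."""
    cache.clear()
    return 204, None

status, location = post_flush({"reason": "maintenance"})
assert (status, location) == (201, "/cache-collection/1")
assert cache == {}
```

The 202 Accepted variant mentioned at the end would have `post_flush` enqueue the work and return immediately instead of clearing synchronously.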
>>>>> "Bill" == Bill Burke <bburke@...> writes:
Bill> Let's say you have a distributed cache you want to manage
Bill> through a RESTful interface. One operation on the cache is
Bill> clearing or flushing it. The interesting thing about
Bill> flushing is that the act of flushing changes the state of
Bill> the cache, but "flushing" isn't a state of the cache itself.
Bill> It seems to be a pure operation. How do you model something
Bill> like this in REST? Is it correct to do:
Bill> PUT /cache/flusher (PUT because flushing is idempotent)
Bill> Or maybe even better:
Bill> GET /cache
Bill> returns a document like
Bill> <cache> <link rel="FLUSH" href="/cache/flusher"/> </cache>
Bill> Or maybe this is better:
Bill> DELETE /cache/data
Bill> Maybe I just answered my own question :)
DELETE deletes a URL. PUT creates a URL. Given that, I would prefer:
POST /cache/flusher
But it all depends. If every cached entry has a url like:
/cache/12345
And you want to delete the entire cache, indeed
DELETE /cache
is sufficient, as all urls are now gone as well because you delete the
parent url.
--
Cheers,
Berend de Boer
On Thu, Jun 25, 2009 at 4:09 PM, Berend de Boer <berend@...> wrote: > And you want to delete the entire cache, indeed > > DELETE /cache > > is sufficient, as all urls are now gone as well because you delete the > parent url. > This makes the most sense to me, but doesn't it imply that /cache needs to go away (at least until it starts filling again)? That is, is it ok to return 200 OK to a DELETE but then leave the URL in place? Sam
Are you "DELETing" the parent URL or are you simply "CLEARing" it? After the operation is called, /cache will still exist, but return an empty set. Is that still a DELETE? -Solomon On Thu, Jun 25, 2009 at 10:09 AM, Berend de Boer <berend@...> wrote: > > > >>>>> "Bill" == Bill Burke <bburke@... <bburke%40redhat.com>> > writes: > > Bill> Let's say you have a distributed cache you want to manage > Bill> through a RESTful interface. One operation on the cache is > Bill> clearing or flushing it. The interesting thing about > Bill> flushing is that the act of flushing changes the state of > Bill> the cache, but "flushing" isn't a state of the cache itself. > Bill> It seems to be a pure operation. How do you model something > Bill> like this in REST? Is it correct to do: > > Bill> PUT /cache/flusher (PUT because flushing is idempotent) > > Bill> Or maybe even better: > > Bill> GET /cache > > Bill> returns a document like > > Bill> <cache> <link rel="FLUSH" href="/cache/flusher"/> </cache> > > Bill> Or maybe this is better: > > Bill> DELETE /cache/data > > Bill> Maybe I just answered my own question :) > > DELETE deletes a URL. PUT creates a URL. Given that, I would prefer: > > POST /cache/flusher > > But it all depends. If every cached entry has a url like: > > /cache/12345 > > And you want to delete the entire cache, indeed > > DELETE /cache > > is sufficient, as all urls are now gone as well because you delete the > parent url. > > -- > Cheers, > > Berend de Boer > >
Bill Burke <bburke@...> writes: > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my > colleagues. An interesting question came up: > > Let's say you have a distributed cache you want to manage through a > RESTful interface. One operation on the cache is clearing or flushing > it. The interesting thing about flushing is that the act of flushing > changes the state of the cache, but "flushing" isn't a state of the > cache itself. It seems to be a pure operation. How do you model > something like this in REST? Is it correct to do: > > PUT /cache/flusher (PUT because flushing is idempotent) To flush, I'd: PUT /cache <empty body> YS.
>>>>> "Sam" == Sam Johnston <samj@...> writes:
>> And you want to delete the entire cache, indeed
>>
>> DELETE /cache
>>
>> is sufficient, as all urls are now gone as well because you
>> delete the parent url.
>>
Sam> This makes the most sense to me, but doesn't it imply that
Sam> /cache needs to go away (at least until it starts filling
Sam> again)? That is, is it ok to return 200 OK to a DELETE but
Sam> then leave the URL in place?
Yeah, DELETE is idempotent so you can delete it every time.
Whether it "pops back", i.e. whether GET /cache returns something (or 204) or a 404
when it's empty, is an implementation detail I think.
--
Cheers,
Berend de Boer
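A minimal sketch of that idempotency point: repeating the DELETE always leaves the same end state, even if the status code differs. The codes chosen here are one plausible convention, not prescribed by the thread:

```python
cache = {"k": "v"}

def delete_cache():
    """DELETE /cache: idempotent -- repeating it leaves identical state."""
    existed = bool(cache)
    cache.clear()
    return 200 if existed else 204  # responses may differ; the end state may not

first = delete_cache()
second = delete_cache()
assert cache == {}
assert first == 200 and second == 204  # idempotency is about state, not status
```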
Sam Johnston wrote: > On Thu, Jun 25, 2009 at 4:09 PM, Berend de Boer <berend@... > <mailto:berend@...>> wrote: > > And you want to delete the entire cache, indeed > > DELETE /cache > > is sufficient, as all urls are now gone as well because you delete the > parent url. > > This makes the most sense to me, but doesn't it imply that /cache needs > to go away (at least until it starts filling again)? That is, is it ok > to return 200 OK to a DELETE but then leave the URL in place? > This is why I modeled it as: DELETE /cache/data rather than: DELETE /cache You are not deleting the cache, just the data. GET /cache/data would return 404 because there is no data. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Berend de Boer wrote: > > DELETE deletes a URL. PUT creates a URL. Given that, I would prefer: > > POST /cache/flusher Not exactly like that. PUT identifies the enclosed entity with the URI invoked. In terms of CRUD, that means it can Create a URI identifying the enclosed entity if that URI doesn't exist, or Update it if it already exists. And it can have other meanings in non-CRUD environments, as long as the identification URI/entity is maintained. So, PUT /cache/flusher is a viable option, in my opinion.
On Thu, Jun 25, 2009 at 4:24 PM, Berend de Boer <berend@...> wrote: > >>>>> "Sam" == Sam Johnston <samj@...> writes: > > >> And you want to delete the entire cache, indeed > >> > >> DELETE /cache > >> > >> is sufficient, as all urls are now gone as well because you > >> delete the parent url. > >> > Sam> This makes the most sense to me, but doesn't it imply that > Sam> /cache needs to go away (at least until it starts filling > Sam> again)? That is, is it ok to return 200 OK to a DELETE but > Sam> then leave the URL in place? > > Yeah, DELETE is idempotent so you can delete it every time. > > If it "pops back", i.e. GET /cache returns something (or 204) or a 404 > when it's empty is an implementation detail I think. Ok thanks for clarifying. DELETE /cache does sound cleaner than DELETE /cache/data and PUT /cache <empty>. Just because you delete something now doesn't mean it can't reappear in time for the next request. Sam
>>>>> "Yohanes" == Yohanes Santoso <yahoo-rest-discuss@...> writes:
Yohanes> To flush, I'd:
Yohanes> PUT /cache <empty body>
Nice if it is indeed just a big blob!
--
Cheers,
Berend de Boer
>>>>> "António" == António Mota <amsmota@...> writes:
> DELETE deletes a URL. PUT creates a URL. Given that, I would prefer:
>>
>> POST /cache/flusher
António> Not exactly like that. PUT identifies the enclosed entity
António> with the URI invoked. In terms of CRUD, that means it can
António> Create a URI identifying the enclosed entity if that URI
António> doesen't exist, or Update it if it already exists. And
António> can have othe rmeanings in non-CRUD environments, as long
António> as the identification URI/entity is maintained
Yeah yeah yeah, know all that.
António> So, PUT /cache/flusher is a viable option, in my opinion.
Don't think so. You're not creating the flusher url nor specifying new
contents.
--
Cheers,
Berend de Boer
Given all of the talk about GET/POST/DELETE, is the <link> tag enough? <link> implies GET. The link tag doesn't support METHOD, does it? Don't you need to specify that somewhere? Don't tell me OPTIONs... I don't buy that. The method is crucial to the communication and should be included in the original media format. OPTIONs is also not cachable... -Solomon On Thu, Jun 25, 2009 at 9:41 AM, Bill Burke <bburke@...> wrote: > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my > colleagues. An interesting question came up: > > Let's say you have a distributed cache you want to manage through a > RESTful interface. One operation on the cache is clearing or flushing > it. The interesting thing about flushing is that the act of flushing > changes the state of the cache, but "flushing" isn't a state of the > cache itself. It seems to be a pure operation. How do you model > something like this in REST? Is it correct to do: > > PUT /cache/flusher (PUT because flushing is idempotent) > > Or maybe even better: > > GET /cache > > returns a document like > > <cache> > <link rel="FLUSH" href="/cache/flusher"/> > </cache> > > Or maybe this is better: > > DELETE /cache/data > > Maybe I just answered my own question :) > > Thanks for listening, > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
Berend de Boer <berend@...> writes: >>>>>> "Yohanes" == Yohanes Santoso <yahoo-rest-discuss@...> writes: > > Yohanes> To flush, I'd: > > Yohanes> PUT /cache <empty body> > > Nice if it is indeed just a big blob! What does it being a blob have to do with things? I probably should have said 'empty representation' instead of 'empty body'. So, assuming you have: ==> GET /cache <== <collection> <entry><name>key1</name><value>value1</value></entry> <entry><name>key2</name><value>value2</value></entry> </collection> and you can do: ==> GET /cache/key1 <== <entry><name>key1</name><value>value1</value></entry> then a reasonable interpretation of PUT would allow it to be used to clear the cache: ==> PUT /cache <collection></collection> <== 204 YS
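Yohanes's reading — PUT replaces the resource with the enclosed representation, so an empty <collection/> clears it — can be sketched with a toy parser, assuming the XML shape shown in his example:

```python
import xml.etree.ElementTree as ET

cache = {"key1": "value1", "key2": "value2"}

def put_cache(body):
    """PUT /cache: replace the cache with the enclosed representation.
    An empty <collection/> therefore clears it."""
    root = ET.fromstring(body)
    new = {e.findtext("name"): e.findtext("value") for e in root.findall("entry")}
    cache.clear()
    cache.update(new)
    return 204  # No Content, as in the example above

assert put_cache("<collection></collection>") == 204
assert cache == {}
```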
I think the method translation was implied in the rel="FLUSH" ? Like creating a "pico-format" for the clients to understand rel="FLUSH" means request the link using POST, PUT, or DELETE - whichever the pico-format dictates ? Personally, I like: PUT /cache <empty body> But I think there are many RESTful ways to do it. -L On Thu, Jun 25, 2009 at 9:43 AM, Solomon Duskis <sduskis@...> wrote: > > > Given all of the talk about GET/POST/DELETE, is the <link> tag enough? > <link> implies GET. The link tag doesn't support METHOD, does it? Don't > you need to specify that somewhere? > Don't tell me OPTIONs... I don't buy that. The method is crucial to the > communication and should be included in the original media format. OPTIONs > is also not cachable... > > -Solomon > > On Thu, Jun 25, 2009 at 9:41 AM, Bill Burke <bburke@...> wrote: > >> >> >> Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my >> colleagues. An interesting question came up: >> >> Let's say you have a distributed cache you want to manage through a >> RESTful interface. One operation on the cache is clearing or flushing >> it. The interesting thing about flushing is that the act of flushing >> changes the state of the cache, but "flushing" isn't a state of the >> cache itself. It seems to be a pure operation. How do you model >> something like this in REST? Is it correct to do: >> >> PUT /cache/flusher (PUT because flushing is idempotent) >> >> Or maybe even better: >> >> GET /cache >> >> returns a document like >> >> <cache> >> <link rel="FLUSH" href="/cache/flusher"/> >> </cache> >> >> Or maybe this is better: >> >> DELETE /cache/data >> >> Maybe I just answered my own question :) >> >> Thanks for listening, >> >> Bill >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com >> > > >
On Thu, Jun 25, 2009 at 4:43 PM, Solomon Duskis <sduskis@...> wrote: > Given all of the talk about GET/POST/DELETE, is the <link> tag enough? > <link> implies GET. The link tag doesn't support METHOD, does it? Don't > you need to specify that somewhere? > Don't tell me OPTIONs... I don't buy that. The method is crucial to the > communication and should be included in the original media format. OPTIONs > is also not cachable... > I'm currently using <link>s (and the HTTP Link: header) to advertise verbs (e.g. start, stop, restart) per HATEOAS principles. As these are unsafe, POST is the only method that makes sense - GET should return an error (and as you observe, you can always find out in advance with OPTIONS). I'm also using <link>s to advertise associations with other resources (e.g. Web Linking - see earlier thread) - "GET" does make sense for these, but I'm not sure that proactively advertising "OPTIONS" via <link>s is such a great/useful idea. Sam
Berend de Boer wrote: > > Yeah yeah yeah, know all that. > So maybe you should be more explicit when saying things like PUT creates a URL. After all, this list is not only for the great experts... > António> So, PUT /cache/flusher is a viable option, in my opinion. > > Don't think so. You're not creating the flusher url nor specifying new > contents. > > If you are emptying something (without deleting it) you surely are changing its content. And PUT can be viewed in a broader sense than just CRUD, so PUT /cache/flusher can mean "start a process on the server identified by this URI", a process that in this case flushes a cache. Now of course there are other ways to do the same, like POST /cache <empty> I said only that it was a viable option, not *the* option or even the best option... After all, I don't know all that, I'm here to try to learn some more...
Sam Johnston wrote: > On Thu, Jun 25, 2009 at 4:43 PM, Solomon Duskis <sduskis@... > <mailto:sduskis@...>> wrote: > > Given all of the talk about GET/POST/DELETE, is the <link> tag > enough? <link> implies GET. The link tag doesn't support METHOD, > does it? Don't you need to specify that somewhere? > > Don't tell me OPTIONs... I don't buy that. The method is crucial to > the communication and should be included in the original media > format. OPTIONs is also not cachable... > > > I'm currently using <link>s (and the HTTP Link: header) to advertise > verbs (e.g. start, stop restart) per HATEOAS principles. As these are > unsafe, POST is the only method that makes sense - GET should return an > error (and as you observe, you can always find out in advance with OPTIONS). > > I'm also using <link>s to advertise associations with other resources > (e.g. Web Linking - see earlier thread) - "GET" does make sense for > these, but I'm not sure that proactively advertising "OPTIONS" via > <link>s is such a great/useful idea. > PUT or DELETE are best IMO for a "flush". It is an idempotent operation. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
>>>>> "António" == António Mota <amsmota@...> writes:
António> And PUT can be viewed in a more broader sense than just
António> CRUD, so
António> PUT /cache/flusher
António> can mean "start a process on the server identified by
António> this URI" process that in this case flushes a cache.
No it cannot.
--
Cheers,
Berend de Boer
>>>>> "Yohanes" == Yohanes Santoso <yahoo-rest-discuss@...> writes:
Yohanes> To flush, I'd:
>>
Yohanes> PUT /cache <empty body>
>>
>> Nice if it is indeed just a big blob!
Yohanes> What does it being a blob have to do with things?
Well, if the cache has individual items, i.e. /cache/1 and /cache/2,
then personally I find it weird if you PUT to /cache and /cache/1
disappears.
So with blob I meant /cache is the entire cache, and there are no
individual items as far as the REST interface is concerned.
--
Cheers,
Berend de Boer
Hi, I was wondering if anyone is using a response code for indicating bad content (eg contains a virus). I was thinking 403/417 but they're not quite right. Thoughts? Bill
Berend de Boer wrote: >>>>>> "António" == António Mota <amsmota@...> writes: >>>>>> > > António> And PUT can be viewed in a broader sense than just > António> CRUD, so > > António> PUT /cache/flusher > > António> can mean "start a process on the server identified by > António> this URI" process that in this case flushes a cache. > > No it cannot. > > Would you care to explain why, so I can learn a little more? Because when I read "The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI" and knowing that POST can refer to a data-handling process, I don't know why the URI in PUT cannot also refer to a data-handling process, since that URI doesn't have to be created as you first said.
On Thu, Jun 25, 2009 at 5:24 PM, Bill de hOra <bill@...> wrote: > I was wondering if anyone is using a response code for indicating bad > content (eg contains a virus). I was thinking 403/417 but they're not > quite right. Thoughts? > Interesting question. 417 doesn't seem appropriate, but 403 does: "*The request was a legal request, but the server is refusing to respond to it.*" I'm not sure how standard substatus codes ala 403.3 are (could just be an IIS thing <http://en.wikipedia.org/wiki/HTTP_403>) but a substatus code indicating that the content was somehow unacceptable sounds sensible. Sam
From reading the descriptions, 403 seems the more logical... > The server understood the request, but is refusing to fulfill it. Bill de hOra wrote: > > > Hi, > > I was wondering if anyone is using a response code for indicating bad > content (eg contains a virus). I was thinking 403/417 but they're not > quite right. Thoughts? > > Bill > >
2009/6/25 António Mota <amsmota@...> > > António> can mean "start a process on the server identified by > > António> this URI" process that in this case flushes a cache. > > > > No it cannot. > > > > > Would you care to explain why, so I can learn a little more? Because > when I read > > "The fundamental difference between the POST and PUT requests is > reflected in the different meaning of the Request-URI" > > and knowing that POST can refer to a data-handling process, I don't know > why the URI in PUT cannot also refer to a data-handling process, since > that URI doesn't have to be created as you first said. Per RFC 2616 <http://tools.ietf.org/html/rfc2616#section-9.6>: *The PUT method requests that the enclosed entity be stored under the supplied Request-URI.* That is, a PUT request must contain the resource itself in its entirety (which is why we have HTTP PATCH <https://datatracker.ietf.org/drafts/draft-dusseault-http-patch/> for partial updates). Separate but related (and quite probably obvious) question then: can POSTs contain just the entity body? Normally they would be HTML forms but I'm wanting to upload e.g. binary OVA virtual machines and have the URLs allocated on the server side. While we're there, that works fine for single files (e.g. OVA which is an archive of OVF and dependencies like virtual hard drives) but what's the best way to handle multiple files - I'm guessing I have to base64 encode it and submit it as a form, but I'd prefer to be able to POST the various binaries in some kind of transaction (which could be batched in a single persistent HTTP connection were it not for proxies)... Sam
LOL, i totally mis-understood the title of this thread! malcontent: "a person who is discontented or disgusted" http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS291US301&defl=en&q=define:malcontent&ei=p51DStfVBI34Nei_oKwC&sa=X&oi=glossary_definition&ct=title thanks for the smile. mca http://amundsen.com/blog/ On Thu, Jun 25, 2009 at 11:24, Bill de hOra <bill@...> wrote: > Hi, > > I was wondering if anyone is using a response code for indicating bad > content (eg contains a virus). I was thinking 403/417 but they're not > quite right. Thoughts? > > Bill
Sam Johnston wrote: > > "The fundamental difference between the POST and PUT requests is > reflected in the different meaning of the Request-URI" > > and knowing that POST can refer to a data-handling process, I > don't know > why the URI in PUT cannot also refer to a data-handling process, > since > that URI doesn't have to be created as you first said. > > > Per RFC 2616 <http://tools.ietf.org/html/rfc2616#section-9.6>: > > /The PUT method requests that the enclosed entity be stored under > the supplied Request-URI./ > > That is, a PUT request must be the resource itself in its entirety > (which is why we have HTTP PATCH > <https://datatracker.ietf.org/drafts/draft-dusseault-http-patch/> for > partial updates) > Thanks for a meaningful response. But even then, "be stored" can perfectly refer to a data-handling process. Think of a resource that saves data to a database, for records where there is a well-known natural key. You can use PUT to create/change that resource/database record, and that is clearly a data-handling process. Even if we understand "be stored" stricto sensu, if the enclosed entity is empty, doing a PUT /cache/flusher (or maybe it should be PUT /cache, depending on the meaning you have for your resources) will effectively empty the cache, without deleting it. However I think that "be stored" shouldn't be interpreted stricto sensu, just as I think we should not think of GET/POST/PUT/DELETE as CRUD...
From what I know, that is indeed IIS specific. However, nothing prevents using them, or even creating your own, especially in a closed environment where you can guarantee the clients will understand them. All other clients still understand the "standard" code anyway. Sam Johnston wrote: > > > On Thu, Jun 25, 2009 at 5:24 PM, Bill de hOra <bill@... > <mailto:bill@...>> wrote: > > I was wondering if anyone is using a response code for indicating bad > content (eg contains a virus). I was thinking 403/417 but they're not > quite right. Thoughts? > > Interesting question. 417 doesn't seem appropriate, but 403 does: > "/The request was a legal request, but the server is refusing to > respond to it./" > > I'm not sure how standard substatus codes ala 403.3 are (could just be > an IIS thing <http://en.wikipedia.org/wiki/HTTP_403>) but a substatus > code indicating that the content was somehow unacceptable sounds sensible. > > Sam > >
2009/6/25 António Mota <amsmota@...> > Thanks for a meaningful response. But even then, "be stored" can perfectly > refer to a data-handling process. Think of a resource that saves data to a > database, for records where there is a well-known natural key. You can use > PUT to create/change that resource/database record, and that is clearly a > data-handling process. > To me a POST-style "data handling process" may involve transformations, touching other resources, etc. while a PUT simply replaces a resource. > Even if we understand "be stored" stricto sensu, if the enclosed > entity is empty, doing a > > PUT /cache/flusher > > (or maybe it should be PUT /cache, depending on the meaning you have for > your resources) > > will effectively empty the cache, without deleting it. > No, it will truncate the /cache/flusher resource, and if it does anything else (e.g. trashing a bunch of other resources) then it's probably a bug. Iff /cache is a "blob" of cache entries and you truncate that then yeah, PUT fits (but even then I prefer DELETE). > However I think that "be stored" shouldn't be interpreted > stricto sensu, just as I think we should not think of GET/POST/PUT/DELETE as > CRUD... > I like to keep PUT simple - POST /cache/flusher or DELETE /cache both make a *lot* more sense to me. Sam
>>>>> "António" == António Mota <amsmota@...> writes:
António> Thanks for a meaningful response. But even then, "be
António> stored" can perfectly refer to a data-handling
António> process.
No, that's not the meaning in English. This is the meaning of POST:
The POST method is used to request that the origin server accept
the entity enclosed in the request as a new subordinate of
the resource identified by the Request-URI in the Request-Line.
António> However I think that "be stored" shouldn't be interpreted
António> in "strict-sensu", like I think we should not think of
António> GET/POST/PUT/DELETE as CRUD...
It should be, because PUT is idempotent, while POST is not.
--
Cheers,
Berend de Boer
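Berend's idempotency point can be made concrete with a toy in-memory server (a Python sketch; the class and URIs are illustrative, not anyone's actual API): replaying a PUT leaves the server in the same state, while replaying a POST keeps creating new resources.

```python
import itertools

class ToyServer:
    """In-memory stand-in for an origin server (illustrative only)."""

    def __init__(self):
        self.resources = {}
        self._ids = itertools.count(1)

    def put(self, uri, body):
        # PUT: store the enclosed entity under the supplied Request-URI.
        # Replaying the same request leaves the server state unchanged.
        self.resources[uri] = body
        return uri

    def post(self, collection_uri, body):
        # POST: the server processes the entity and allocates a URI for
        # the result. Replaying the same request mints another resource.
        uri = f"{collection_uri}/{next(self._ids)}"
        self.resources[uri] = body
        return uri
```

Two identical PUTs to /groups/admins leave one resource; two identical POSTs to /groups leave two.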
Shouldn't something like the HTTP Method to use to perform an action be explicit and "self described"? The client shouldn't have to guess that GET will return an error... <link> inherently implies GET, doesn't it? <link> does make sense for associations, which are inherently GET. Does it make sense for operations? -Solomon On Thu, Jun 25, 2009 at 10:51 AM, Sam Johnston <samj@...> wrote: > On Thu, Jun 25, 2009 at 4:43 PM, Solomon Duskis <sduskis@...> wrote: > >> Given all of the talk about GET/POST/DELETE, is the <link> tag enough? >> <link> implies GET. The link tag doesn't support METHOD, does it? Don't >> you need to specify that somewhere? >> Don't tell me OPTIONS... I don't buy that. The method is crucial to the >> communication and should be included in the original media format. OPTIONS >> is also not cacheable... >> > > I'm currently using <link>s (and the HTTP Link: header) to advertise verbs > (e.g. start, stop, restart) per HATEOAS principles. As these are unsafe, POST > is the only method that makes sense - GET should return an error (and as you > observe, you can always find out in advance with OPTIONS). > > I'm also using <link>s to advertise associations with other resources (e.g. > Web Linking - see earlier thread) - "GET" does make sense for these, but I'm > not sure that proactively advertising "OPTIONS" via <link>s is such a > great/useful idea. > > Sam > >
> It should be, because PUT is idempotent, while POST is not. What am I missing here? PUT (should) guarantee idempotency, but does that mean a POST cannot be idempotent? Additionally, whether "other" resources are modified as a result of PUT/POST is not what qualifies the usage of PUT/POST. It's what happens to the resource directly being manipulated that is of interest here. If using PUT, I would GET the representation of the cache (because it had to have existed) and it would return an empty representation. I personally don't get the "flusher" resource but that's probably not important. Many ways to address this however. Eb
On 25.06.2009, at 16:17, Yohanes Santoso wrote: > > > Bill Burke <bburke@redhat.com> writes: > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few > of my > > colleagues. An interesting question came up: > > > > Let's say you have a distributed cache you want to manage through a > > RESTful interface. One operation on the cache is clearing or > flushing > > it. The interesting thing about flushing is that the act of flushing > > changes the state of the cache, but "flushing" isn't a state of the > > cache itself. It seems to be a pure operation. How do you model > > something like this in REST? Is it correct to do: > > > > PUT /cache/flusher (PUT because flushing is idempotent) > > To flush, I'd: > > PUT /cache > <empty body> > > +1, by far the best solution IMO (with the <collection /> idea being equal). DELETE /cache is bad because a subsequent GET on /cache will likely return 200. PUT /cache/flusher only makes sense if I GET back something on /cache/flusher that at least somehow resembles what I PUT to it. The only thing I would consider would be a POST on /cache/flush. But that fails to exploit the idempotency of PUT and uses the cop-out 'process this' aspect of POST. I'd always suggest to not use POST if another verb can be used without violating its intent. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ > YS. > >
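Stefan's distinction between PUT /cache with an empty body and DELETE /cache can be sketched with a toy resource map (a Python sketch, illustrative only; a real server would of course speak HTTP): after the empty PUT a GET still returns 200 with an empty representation, while after DELETE it returns 404.

```python
class CacheStore:
    """Toy resource model contrasting PUT-with-empty-body and DELETE."""

    def __init__(self):
        # Start with a non-empty cache resource.
        self.state = {"/cache": {"a": 1, "b": 2}}

    def get(self, uri):
        if uri not in self.state:
            return 404, None          # DELETE'd resources are gone
        return 200, self.state[uri]

    def put(self, uri, body):
        self.state[uri] = body        # replace with the enclosed entity
        return 200

    def delete(self, uri):
        self.state.pop(uri, None)     # the resource itself disappears
        return 204
```

PUT /cache with an empty entity "flushes" while keeping the cache resource alive, which is exactly why a subsequent GET keeps returning 200.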
Mike, That will be the subject of future thread, about social networking systems ;) Bill mike amundsen wrote: > > > > > LOL, i totally mis-understood the title of this thread! > > malcontent: "a person who is discontented or disgusted" > http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS291US301&defl=en&q=define:malcontent&ei=p51DStfVBI34Nei_oKwC&sa=X&oi=glossary_definition&ct=title > <http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS291US301&defl=en&q=define:malcontent&ei=p51DStfVBI34Nei_oKwC&sa=X&oi=glossary_definition&ct=title> > > <http://www.google.com/search?hl=en&rlz=1C1GGLS_enUS291US301&defl=en&q=define:malcontent&ei=p51DStfVBI34Nei_oKwC&sa=X&oi=glossary_definition&ct=title>thanks > for the smile. > > mca > http://amundsen.com/blog/ <http://amundsen.com/blog/> > > > > On Thu, Jun 25, 2009 at 11:24, Bill de hOra <bill@... > <mailto:bill@...>> wrote: > > Hi, > > I was wondering if anyone is using a response code for indicating bad > content (eg contains a virus). I was thinking 403/417 but they're not > quite right. Thoughts? > > Bill > > > ------------------------------------ > > Yahoo! Groups Links > > > (Yahoo! ID required) > > mailto:rest-discuss-fullfeatured@yahoogroups.com > <mailto:rest-discuss-fullfeatured@yahoogroups.com> > > > >
Hey Bill, squid supports PURGE <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a> Bill Bill Burke wrote: > > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my > colleagues. An interesting question came up: > > Let's say you have a distributed cache you want to manage through a > RESTful interface. One operation on the cache is clearing or flushing > it. The interesting thing about flushing is that the act of flushing > changes the state of the cache, but "flushing" isn't a state of the > cache itself. It seems to be a pure operation. How do you model > something like this in REST? Is it correct to do: > > PUT /cache/flusher (PUT because flushing is idempotent) > > Or maybe even better: > > GET /cache > > returns a document like > > <cache> > <link rel="FLUSH" href="/cache/flusher"/> > </cache> > > Or maybe this is better: > > DELETE /cache/data > > Maybe I just answered my own question :) > > Thanks for listening, > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com <http://bill.burkecentral.com> > >
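Bill's <link rel="FLUSH"> option is the hypermedia-driven one: the client hard-codes only the relation name and discovers the URI at run time. A minimal sketch (Python; the document is taken from Bill's example, the helper function is hypothetical):

```python
import xml.etree.ElementTree as ET

# The representation from Bill's example: GET /cache returns this.
DOC = """<cache>
  <link rel="FLUSH" href="/cache/flusher"/>
</cache>"""

def find_rel(xml_text, rel):
    """Return the href advertised for a link relation, or None.

    The client knows only the relation name; the URI comes from the
    representation, so the server remains free to change it."""
    for link in ET.fromstring(xml_text).iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None
```

The client would then POST (or PUT, per the rest of this thread) to whatever href the FLUSH relation yields.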
That's interesting. Is a non-standard METHOD better than an ambiguous use of a standard METHOD? -Solomon On Thu, Jun 25, 2009 at 5:28 PM, Bill de hOra <bill@...> wrote: > > > Hey Bill, > > squid supports PURGE > > < > http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a > > > > Bill > > > Bill Burke wrote: > > > > > > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my > > colleagues. An interesting question came up: > > > > Let's say you have a distributed cache you want to manage through a > > RESTful interface. One operation on the cache is clearing or flushing > > it. The interesting thing about flushing is that the act of flushing > > changes the state of the cache, but "flushing" isn't a state of the > > cache itself. It seems to be a pure operation. How do you model > > something like this in REST? Is it correct to do: > > > > PUT /cache/flusher (PUT because flushing is idempotent) > > > > Or maybe even better: > > > > GET /cache > > > > returns a document like > > > > <cache> > > <link rel="FLUSH" href="/cache/flusher"/> > > </cache> > > > > Or maybe this is better: > > > > DELETE /cache/data > > > > Maybe I just answered my own question :) > > > > Thanks for listening, > > > > Bill > > > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com <http://bill.burkecentral.com> > > > > > > >
dpgorti wrote: > > > > Let's say I have a REST API to create a GROUP and memberships for groups. > Just like the OS UserGroup and User(s) > > Let's say that I want to create/update/delete/read groups > Could you comment on the RESTfulness of the API below? > > Group has a NumericID, Name > User has a NumericID, Name, Password > > 1. Create > PUT ...../Groups/GroupName > NO PAYLOAD in the BODY If you are ok with client-controlled resource URLs - this is "WebDAV style", different from "AtomPub style" where the server defines the URL for the group. > 2. RENAME > POST ..../Groups/ID > <GroupName>NewName</GroupName> This is a subtle issue, but would renaming the group change its URL? IOW is that a deliberate design decision, or a side-effect of client-controlled URLs? Also, the protocol lacks symmetry - how you send the group name during creation is different from how you send it for a rename. > 3. GET Group > GET ..../Groups/GroupName > GET ..../Groups/GroupID Someone will have to write code on the server to distinguish between lookups by id and name. > 4. GET ..../Groups > <Groups><Group><ID><NAME>//// A REST-based format would contain links to the groups as well and a media type for the XML. > 5. Get users of a group > GET ..../Groups/GroupName/Users? Based on 3, you would need to support GroupID/Users as well. > 6. Add user to a group > POST ..../Groups/GroupName/Users This is different to the way you add a group to the groups. Why? Bill
LINK suffers from a problem - it magically pops into existence as a header, but without a means to manage the implied relationship. I think the architecture simply doesn't support management of resource relationships in this way, and the only means is through hypertext and constrained protocols on top of HTTP. Claim: LINK/UNLINK break with web architecture. So unless you want to go to a higher level in the stack to something like RDF/OWL that would provide a real meta-model, the sensible technique to managing these relationships is hypertext. People in the IETF that don't like Atom should bear the burden of proposing a credible technical alternative (eg Atom/AtomPub provides a solution for iterators and collections which HTTP doesn't cater for) Bill Sam Johnston wrote: > > > > Afternoon all, > > I have a need for a meta-model which allows for association of > non-hypertext representations of resources (e.g. binaries, images, > etc.). I had originally proposed Atom but this wasn't well accepted by > the XML xenophobes. It seems the Link: header was intended to accomplish > just what I need in the original HTTP RFCs. Unfortunately though HTML > was dominant and Link: wasn't implemented (nor implementable), so it was > dropped only to be recently revived by @mnot in > draft-nottingham-http-link-header. > > Per my request for clarification to apps-discuss below (which failed to > get any bites - perhaps tl;dr), I'd like to find a sensible mechanism > for setting the Link: headers, ideally without relying on new HTTP verbs > (LINK and UNLINK were originally specified but have also been dropped). > I'm figuring that just sending Link: header(s) in PUTs and POSTs will > cleanly accomplish most of what I need, but things get hairy when you > start thinking about updating/deleting individual links. > > Sam > > ---------- Forwarded message ---------- > From: *Sam Johnston* <samj@... 
<mailto:samj@...>> > Date: Mon, Jun 15, 2009 at 5:27 AM > Subject: Clarifications on Web Linking with HTTP > To: apps-discuss@... <mailto:apps-discuss@...> > > > Morning all, > > The HTTP Link: header enables web linking without hypermedia - that > is, arbitrary content types can be linked (with attributes) > out-of-band rather than within the payload (e.g. HTML) itself. This > enables the use of HTTP as a meta-model (at least for individual > resources) without having to resort to Atom, which is potentially > great news for RESTful APIs. > > I am currently working on a real-world application of Mark's Web > Linking I-D[1] (OGF's Open Cloud Computing Interface - > http://www.occi-wg.org/ <http://www.occi-wg.org/>) and require > clarification on a few points > (which may want to end up in the I-D). > > - First and foremost, in the absence of the LINK and UNLINK verbs > originally defined in RFC 2068[2] but specifically omitted from RFC > 2616[3], what is the preferred mechanism for manipulating these links > via HTTP? It appears that this header is intended for GET requests > only, but presumably specifying it in POST and PUT requests would be > one option that avoids the creation of [not so] "new" verbs (bearing > in mind that short of accepting Link: headers from empty POST/PUT > requests, it would be necessary to GET and then PUT the entire payload > to update links - twice if they were reciprocal). While there was an > attempt a dozen years ago to better define the relevant HTTP verbs[4], > it strikes me as more sensible to follow the example of the > Set-Cookie: header for this rather than WebDAV's example of creating > new verbs (even if we've seen them before) but you guys are the > experts. > > - Another concern with an arbitrary number of links is that arbitrary > string length limits may be imposed by user agents, as they are with > URLs. 
This should not be a problem where there is one link per header, > but it may be where the headers are concatenated as described in RFC > 2616[5]. This is a double-edged sword, however, as some user agents have > only recently added support for multiple headers of the same type[6] > and it remains a problem for some[7]. > > - The introduction of a link relation registry at IANA makes a lot of > sense, though it would be nice if these were common for HTTP, HTML, > Atom and other places links appear. Perhaps namespaces (e.g. > "atom:service" or "occi.state.restart") would be useful here so as to > enable significantly more future extensibility. > > - It seems useful to be able to (optionally) specify the type (as in > content type rather than relation type) of a given link, as is the > case for Atom. That said, this also seems somewhat redundant with HTTP > Content Negotiation, but implementations that choose to support the > "type" attribute may gain some performance and usability advantages > from doing so. The matter of whether this information belongs in URIs > (and if so, which side of the '?') or in HTTP headers (or both) is > still not clear to me as there are pros and cons of each - perhaps the > relation type is more suitable (or both?) as it's often not possible > to unambiguously determine the relation type from the content type > (consider modeling people where both fingerprint and portrait > representations may exist in image/png format). > > To be more specific about the requirements, the API models cloud > infrastructure services (IaaS) and has three main nouns (compute, > network, storage) which need to be associated with each other with > attributes on the links (e.g. a compute resource having a network > resource associated with a local identifier attribute of "eth0"). > Using Atom as the meta-model worked fine (as evidenced by GData) but > it now seems possible - at least for individual resources - with HTTP. > > Cheers, > > Sam > > 1. 
http://tools.ietf.org/html/draft-nottingham-http-link-header-05 > <http://tools.ietf.org/html/draft-nottingham-http-link-header-05> > 2. http://tools.ietf.org/html/rfc2068#section-19.6.1 > <http://tools.ietf.org/html/rfc2068#section-19.6.1> > 3. http://tools.ietf.org/html/rfc2616#section-19.6.3 > <http://tools.ietf.org/html/rfc2616#section-19.6.3> > 4. > http://ftp.ics.uci.edu/pub/ietf/http/draft-pritchard-http-links-00.txt > <http://ftp.ics.uci.edu/pub/ietf/http/draft-pritchard-http-links-00.txt> > 5. http://tools.ietf.org/html/rfc2616#section-4.2 > <http://tools.ietf.org/html/rfc2616#section-4.2> > 6. http://www.mail-archive.com/bug-wget@.../msg00076.html > <http://www.mail-archive.com/bug-wget@.../msg00076.html> > 7. http://bugs.python.org/issue1660009 <http://bugs.python.org/issue1660009> > >
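The Link header syntax from Mark's draft is small enough to handle with a short parser. A sketch (Python; regex-based and deliberately simplified - it only handles quoted parameter values and skips other corner cases of the full grammar):

```python
import re

# Matches one link-value: a <URI-Reference> followed by ;-separated
# quoted parameters, e.g.  </cache/flusher>; rel="flush"
LINK_VALUE = re.compile(r'<([^>]*)>((?:\s*;\s*[\w-]+="[^"]*")*)')
PARAM = re.compile(r'([\w-]+)="([^"]*)"')

def parse_link_header(value):
    """Parse a comma-separated Link header into a list of dicts."""
    links = []
    for target, params in LINK_VALUE.findall(value):
        link = {"href": target}
        link.update(PARAM.findall(params))
        links.append(link)
    return links
```

Such a parser lets a client treat Link headers exactly like the <link> elements discussed earlier in the thread, keyed by relation name.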
FYI this was a management interface for a JBoss specific cache set up by the user, not an HTTP cache. Bill de hOra wrote: > > > > Hey Bill, > > squid supports PURGE > > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a>> > > Bill > > Bill Burke wrote: > > > > > > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my > > colleagues. An interesting question came up: > > > > Let's say you have a distributed cache you want to manage through a > > RESTful interface. One operation on the cache is clearing or flushing > > it. The interesting thing about flushing is that the act of flushing > > changes the state of the cache, but "flushing" isn't a state of the > > cache itself. It seems to be a pure operation. How do you model > > something like this in REST? Is it correct to do: > > > > PUT /cache/flusher (PUT because flushing is idempotent) > > > > Or maybe even better: > > > > GET /cache > > > > returns a document like > > > > <cache> > > <link rel="FLUSH" href="/cache/flusher"/> > > </cache> > > > > Or maybe this is better: > > > > DELETE /cache/data > > > > Maybe I just answered my own question :) > > > > Thanks for listening, > > > > Bill > > > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com <http://bill.burkecentral.com> > <http://bill.burkecentral.com <http://bill.burkecentral.com>> > > > > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Solomon Duskis wrote: > That's interesting. > > Is a non-standard METHOD better than an ambiguous use of a standard METHOD? I won't get into what "standard" means, or provide a yes/no answer. But I think the time to choose a new method is when the operation you want is subtly different, but very similar to a well-defined method. Bill
So in terms of a REST approach for a non-HTTP cache, a PURGE method won't work? Bill Bill Burke wrote: > FYI this was a management interface for a JBoss specific cache set up by > the user, not an HTTP cache. > > Bill de hOra wrote: >> >> >> >> Hey Bill, >> >> squid supports PURGE >> >> <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a >> <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a>> >> >> >> Bill >> >> Bill Burke wrote: >> > >> > >> > >> > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few >> of my >> > colleagues. An interesting question came up: >> > >> > Let's say you have a distributed cache you want to manage through a >> > RESTful interface. One operation on the cache is clearing or flushing >> > it. The interesting thing about flushing is that the act of flushing >> > changes the state of the cache, but "flushing" isn't a state of the >> > cache itself. It seems to be a pure operation. How do you model >> > something like this in REST? Is it correct to do: >> > >> > PUT /cache/flusher (PUT because flushing is idempotent) >> > >> > Or maybe even better: >> > >> > GET /cache >> > >> > returns a document like >> > >> > <cache> >> > <link rel="FLUSH" href="/cache/flusher"/> >> > </cache> >> > >> > Or maybe this is better: >> > >> > DELETE /cache/data >> > >> > Maybe I just answered my own question :) >> > >> > Thanks for listening, >> > >> > Bill >> > >> > -- >> > Bill Burke >> > JBoss, a division of Red Hat >> > http://bill.burkecentral.com <http://bill.burkecentral.com> >> <http://bill.burkecentral.com <http://bill.burkecentral.com>> >> > >> > >> >> >
Bill de hOra wrote: > > > > LINK suffers from a problem - it magically pops into existence as a > header, but without a means to manage the implied relationship. I'm not understanding what you mean by this. > I think > the architecture simply doesn't support management of resource > relationships in this way, and the only means is through hypertext and > constrained protocols on top of HTTP. > In some cases I can see that the data-format is inflexible (an old company document schema that doesn't support links) and the only way you'd be able to transfer relationships is through an envelope (atom) or headers (this proposal). I think Atom is great for what it is built for (syndication), but I'm not convinced it is a good format for web services. The way people are suggesting to use it in a web services environment is *way* too much like SOAP. Especially in this case, where headers would work just fine. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill Burke wrote: > > > Bill de hOra wrote: >> >> >> >> LINK suffers from a problem - it magically pops into existence as a >> header, but without a means to manage the implied relationship. > > I'm not understanding what you mean by this. It's like atom:id - you must have one in the format, but how to create one is undefined. Atom's format only considered the read/syndication usecase. That was awkward when it came to specifying AtomPub. LINK is similar - how a LINK relationship is created/managed/destroyed is undefined. > [...] Especially in this case, where headers would work just fine. "UNLINK" what resource is that method applied to? Bill
I'll defer to Solomon's question about non-standard methods. Bill de hOra wrote: > > > > So in terms of a REST approach for a non-HTTP cache, a PURGE method > won't work? > > Bill > > Bill Burke wrote: > > FYI this was a management interface for a JBoss specific cache set up by > > the user, not an HTTP cache. > > > > Bill de hOra wrote: > >> > >> > >> > >> Hey Bill, > >> > >> squid supports PURGE > >> > >> > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a> > > >> > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a>>> > > >> > >> > >> Bill > >> > >> Bill Burke wrote: > >> > > >> > > >> > > >> > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few > >> of my > >> > colleagues. An interesting question came up: > >> > > >> > Let's say you have a distributed cache you want to manage through a > >> > RESTful interface. One operation on the cache is clearing or flushing > >> > it. The interesting thing about flushing is that the act of flushing > >> > changes the state of the cache, but "flushing" isn't a state of the > >> > cache itself. It seems to be a pure operation. How do you model > >> > something like this in REST? 
Is it correct to do: > >> > > >> > PUT /cache/flusher (PUT because flushing is idempotent) > >> > > >> > Or maybe even better: > >> > > >> > GET /cache > >> > > >> > returns a document like > >> > > >> > <cache> > >> > <link rel="FLUSH" href="/cache/flusher"/> > >> > </cache> > >> > > >> > Or maybe this is better: > >> > > >> > DELETE /cache/data > >> > > >> > Maybe I just answered my own question :) > >> > > >> > Thanks for listening, > >> > > >> > Bill > >> > > >> > -- > >> > Bill Burke > >> > JBoss, a division of Red Hat > >> > http://bill.burkecentral.com <http://bill.burkecentral.com> > <http://bill.burkecentral.com <http://bill.burkecentral.com>> > >> <http://bill.burkecentral.com <http://bill.burkecentral.com> > <http://bill.burkecentral.com <http://bill.burkecentral.com>>> > >> > > >> > > >> > >> > > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Fri, Jun 26, 2009 at 01:31:14AM +0100, Bill de hOra wrote: > It's like atom:id - you must have one in the format, but how to create one is > undefined. Atom's format only considered the read/syndication use case. That > was awkward when it came to specifying AtomPub. LINK is similar - how a LINK > relationship is created/managed/destroyed is undefined. I don't understand this analogy. The problem with the atom:id element is not how to add the element to an Atom document, it is presumably how to generate the value. However, with the LINK header, the question that has been raised is how to manipulate the value, not how to derive the relationships between resources. > "UNLINK" > > what resource is that method applied to? I have only lightly followed this thread, so I apologise if I am covering old ground. I don't understand why this I-D should cover the manipulation of the header value any more than any of the other common HTTP headers have values that can be manipulated with HTTP itself. What is the problem with saying that any LINK headers should be managed at the server's discretion? For what it's worth, I would want to configure the LINK headers via some static Apache configuration on the server. I can understand why others might want to do it over HTTP, but the server is free to expose some hypertext system for that. Best, -- Noah Slater, http://tumbolia.org/nslater
Solomon Duskis <sduskis@...> writes:
> On Sun, Jun 21, 2009 at 4:00 PM, Jan Algermissen <algermissen1971@...> wrote:
> On Jun 19, 2009, at 3:14 PM, Solomon Duskis wrote:
> On the HTML side of REST, where there's a human involved, there are "experts" in the field of "Information Architecture" who have the responsibility to construct "a big picture" that assures
> that the various types of system users have appropriate paths (and links) through the system that support each user profile's needs within a system.
>
> Has anyone used an "Information Architect" to develop a REST API?
>
> I do not think that there is much of a difference between human and non-human clients. A browser for example has quite some automatic behaviour that results from implementing the processing
> model of HTML (load images, load stylesheets, execute JavaScript, do page reloads based on <meta> tags, etc.) All that changes for the non-human case is that media types would specify richer
> application semantics (because there is no user involved to decide what this or that link means).
>
> If there is no human to understand <a href="/all-versions">Click me to see all versions</a> then all you need is to standardize something like <link rel="http://example.com/linkreks/
> all-versions" href="/all-versions">.
>
> Jan
>
> This last sentence is one of many crucial differences between the design for human-oriented resources and the design for machine-oriented resources. The "interface" for humans is free text to be
> understood at "run time". The "interface" for computers must come through keywords previously agreed to at "design time."
While the interface might be *coded to* based on keywords that come in
some interface-description document at "design time", the runtime
implementation should still come through hypermedia that has the same
keywords at "run time".
In this sense, it's not substantially different from a human client.
As a human, you know from previous run-time experience (a sort of ad-hoc
"design time" document, if you will) that if you see a "checkout with
credit card" link, upon clicking on it you expect to see a form with a
card number, expiration date and other fields. These days, you also
expect to see a CCV field (interface versioning/migration).
As an automated client, you know from the design document that if you
see a <link rel="http://example.com/checkout/" href=…>, upon navigating
it, you expect to see a form with a rough schema like:
form := cc_number_field + expiration_date_field [ + ccv_field ] + …
In either case, seeing a link that said "Frobnigate the Whatsit!" or
<http://example.com/frobnigate/> … or encountering a form that was
really a question of trivial pursuit … would confuse the client.
Moreover, getting one of the lower-frequency but still valid responses
(3xx, 503 + Retry-After, 101, &c.) would not confuse either the
automated client or the user's agent.
As to the idea of "information architect" … I'd agree that the
requirements on a resource space for automated and human clients are
different … I think the question is "how different"? A lot of people
have a "web ui" and a "REST API", and it's not clear why the REST API
isn't just an additional layer or refinement within the "web ui". Take
flickr, for instance. If the key links and HTML forms in their HTML UI
had e.g., <a href=… rel=…>, <link rel=…> and other attributes, couldn't
an automated client consume it just as well? It seems like supporting
one interface is better than two. I don't have any practical experience
trying something like this, however, so perhaps it breaks down rather
quickly.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
Hmm ... not sure how much I agree with that, but I don't have enough experience to know when I have something that doesn't fit the POST, GET, PUT, DELETE paradigm.

IMHO, those are the 4 things you can do with data and should adequately cover 99.99% of operations. The example I like is this - there is no need for a MARRY HTTP method. You just POST /marriage :)

-L

On Thu, Jun 25, 2009 at 6:04 PM, Bill de hOra <bill@...> wrote:
> Solomon Duskis wrote:
> > That's interesting.
> >
> > Is a non-standard METHOD better than an ambiguous use of a standard METHOD?
>
> I won't get into what "standard" means, or provide a yes/no answer. But
> I think the time to choose a new method is when the operation you want
> is subtly different, but very similar to a well-defined method.
>
> Bill
Bottom line is: REST/HTTP APIs for machines should behave similarly to HTML
"APIs" for people (to some extent).
Ergo: REST APIs should be designed similarly to the way websites are...
(including perhaps "Information Architecture")
-Solomon
On Thu, Jun 25, 2009 at 9:08 PM, Josh Sled <jsled@asynchronous.org> wrote:
> Solomon Duskis <sduskis@...> writes:
> > On Sun, Jun 21, 2009 at 4:00 PM, Jan Algermissen <
> algermissen1971@...> wrote:
> > On Jun 19, 2009, at 3:14 PM, Solomon Duskis wrote:
> > On the HTML side of REST, where there's a human involved, there
> are "experts" in the field of "Information Architecture" who have the
> responsibility to construct "a big picture" that assures
> > that the various types of system users have appropriate paths
> (and links) through the system that support each user profile's needs within
> a system.
> >
> > Has anyone used an "Information Architect" to develop a REST API?
> >
> > I do not think that there is much of a difference between human and
> non-human clients. A browser for example has quite some automatic behaviour
> that results from implementing the processing
> > model of HTML (load images, load stylesheets, execute JavaScript, do
> page reloads based on <meta> tags, etc.) All that changes for the non-human
> case is that media types would specify richer
> > application semantics (because there is no user involved to decide
> what this or that link means).
> >
> > If there is no human to understand <a href="/all-versions">Click me
> to see all versions</a> then all you need is to standardize something like
> <link rel="http://example.com/linkreks/
> > all-versions" href="/all-versions">.
> >
> > Jan
> >
> > This last sentence is one of many crucial differences between the design
> for human-oriented resources and the design for machine-oriented resources.
> The "interface" for humans is free text to be
> > understood at "run time". The "interface" for computers must come
> through some previously agreed-to keywords that appear in a document
> previously agreed to at "design time."
>
> While the interface might be *coded to* based on keywords that come in
> some interface-description document at "design time", the runtime
> implementation should still come through hypermedia that has the same
> keywords at "run time".
>
> In this sense, it's not substantially different from a human client.
>
> As a human, you know from previous run-time experience (a sort of ad-hoc
> "design time" document, if you will) that if you see a "checkout with
> credit card" link, upon clicking on it you expect to see a form with a
> card number, expiration date and other fields. These days, you also
> expect to see a CCV field (interface versioning/migration).
> As an automated client, you know from the design document that if you
> see a <link rel="http://example.com/checkout/" href=…>, upon navigating
> it, you expect to see a form with a rough schema like:
>
> form := cc_number_field + expiration_date_field [ + ccv_field ] + …
>
>
> In either case, seeing a link that said "Frobnigate the Whatsit!" or
> <http://example.com/frobnigate/> … or encountering a form that was
> really a question of trivial pursuit … would confuse the client.
>
> Moreover, getting one of the lower-frequency but still valid responses
> (3xx, 503 + Retry-After, 101, &c.) would not confuse either the
> automated client or the user's agent.
>
>
> As to the idea of "information architect" … I'd agree that the
> requirements on a resource space for automated and human clients are
> different … I think the question is "how different"? A lot of people
> have a "web ui" and a "REST API", and it's not clear why the REST API
> isn't just an additional layer or refinement within the "web ui". Take
> flickr, for instance. If the key links and HTML forms in their HTML UI
> had e.g., <a href=… rel=…>, <link rel=…> and other attributes, couldn't
> an automated client consume it just as well? It seems like supporting
> one interface is better than two. I don't have any practical experience
> trying something like this, however, so perhaps it breaks down rather
> quickly.
>
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
> Bottom line is: REST/HTTP APIs for machines should behave similar to HTML
> "APIs" for people (to some extent).

+1, that is truly the bottom line.
(Looping in the REST-discuss part of the crowd)

What I'm trying to get at is that in order to add relevant links, you need to do some analysis on which links might be relevant. That requires an analysis of how your API will be used by different types of users. It's not about too many options, it's making sure that the "reasonable possibilities" exist based on the different needs of the likely users of your system. There are tried and true methods that website "information architects" use in order to envision how the human users of the website will likely want to interact with the system. REST developers don't have those "tried and true methods" yet.

Sure, there are distinct differences between REST APIs and websites. However, websites are the most RESTful systems that we're aware of. I suggested "information architecture" because while it does have plenty of aesthetic-related concerns, it also deals with "the structural design of shared information environments" and "practice focused on bringing principles of design and architecture to the digital landscape" (http://en.wikipedia.org/wiki/Information_architecture). Those seem to fit the needs of REST APIs. You can build a better API (Application Programming Interface) if you better understand how the programmers want to use it. Your API will likely suffer if you don't take those kinds of aspects into consideration.

Even though I've thought about approaching RESTful design from this perspective for quite a while, I still have very few practical "best practices" from other REST APIs to fall back on.

I'm sure you can even think of additional "possible links" in Kenai... For example, what would a user want to do after "Requests to VNet Resources"? http://kenai.com/projects/suncloudapis/pages/CloudAPIVNetRequests - I assume that there are things that can be done next...

Re: pretty URLs. No, that wasn't my point.
However, while the client applications shouldn't necessarily care about interpreting the meaning of the URL, the programmer of that application might care...

-Solomon

On Thu, Jun 25, 2009 at 9:40 PM, Craig McClanahan <craigmcc@...> wrote:
> On Thu, Jun 25, 2009 at 6:24 PM, Solomon Duskis <sduskis@...> wrote:
> > ergo: REST APIs should be designed similar to the way Websites are...
> > (including perhaps "Information Architecture")
>
> Hmm ... I don't think I agree with this very much. In a human-facing
> website, aesthetics are generally much more important than in a web service,
> which will lead to decisions about WHAT information is shown as well as HOW
> (in this context, this is mostly about what links you might offer to go
> other places). In a web service, there is no such concern, and you should
> focus more on providing all possible links that might be relevant ... the
> client machine will not complain about having too many options :-). And the
> client developer will likely thank you for offering all the reasonable
> possibilities.
>
> If you are talking about "pretty URLs" when you are talking about design, I
> *definitely* disagree with you. Clients of properly designed web services
> (i.e. those that care about the HATEOAS constraint) will not try to
> interpret the "meaning" of a URL ... they will just follow it if it does
> what they need. People, on the other hand, tend to care about the URL that
> shows in the location bar of your browser.
>
> Craig McClanahan
The way I look at the method issue is this: "what methods do I need to _transfer state_?" After that it's all about defining resources and designing representations to transfer the state to/from those resources.

mca
http://amundsen.com/blog/

On Thu, Jun 25, 2009 at 21:09, Luke Crouch <luke.crouch@...> wrote:
> Hmm ... not sure how much I agree with that, but I don't have enough
> experience to know when I have something that doesn't fit the POST, GET,
> PUT, DELETE paradigm.
>
> IMHO, those are the 4 things you can do with data and should adequately
> cover 99.99% of operations. The example I like is this - there is no need
> for a MARRY HTTP method. You just POST /marriage :)
>
> -L
>
> On Thu, Jun 25, 2009 at 6:04 PM, Bill de hOra <bill@...> wrote:
> > Solomon Duskis wrote:
> > > That's interesting.
> > >
> > > Is a non-standard METHOD better than an ambiguous use of a standard METHOD?
> >
> > I won't get into what "standard" means, or provide a yes/no answer. But
> > I think the time to choose a new method is when the operation you want
> > is subtly different, but very similar to a well-defined method.
> >
> > Bill
Just out of curiosity, what was your conclusion? What method/uri did you finally use?

Bill Burke wrote:
> FYI this was a management interface for a JBoss specific cache set up by
> the user, not an HTTP cache.
>
> Bill de hOra wrote:
> >
> > Hey Bill,
> >
> > squid supports PURGE
> >
> > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a>
> >
> > Bill
> >
> > Bill Burke wrote:
> > >
> > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my
> > > colleagues. An interesting question came up:
> > >
> > > Let's say you have a distributed cache you want to manage through a
> > > RESTful interface. One operation on the cache is clearing or flushing
> > > it. The interesting thing about flushing is that the act of flushing
> > > changes the state of the cache, but "flushing" isn't a state of the
> > > cache itself. It seems to be a pure operation. How do you model
> > > something like this in REST? Is it correct to do:
> > >
> > > PUT /cache/flusher (PUT because flushing is idempotent)
> > >
> > > Or maybe even better:
> > >
> > > GET /cache
> > >
> > > returns a document like
> > >
> > > <cache>
> > > <link rel="FLUSH" href="/cache/flusher"/>
> > > </cache>
> > >
> > > Or maybe this is better:
> > >
> > > DELETE /cache/data
> > >
> > > Maybe I just answered my own question :)
> > >
> > > Thanks for listening,
> > >
> > > Bill
> > >
> > > --
> > > Bill Burke
> > > JBoss, a division of Red Hat
> > > http://bill.burkecentral.com
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
Agreed, I can see how a transition to a flush state could be modeled as
idempotent;
PUT /cache { .... 'flushed': 'true' ...... }
Having said that, if I was to model that transition as idempotent I'd
probably prefer DELETE /cache because it's a more descriptive message
and cleaner to implement on the server side... but my understanding was
that DELETE didn't necessarily require the URI to be removed?
Also - I think a good argument can be made for treating it as
non-idempotent and creating separate flush resources for each request
e.g. removing specific entities:
POST /flushes { "targets": [ "/cache/A324234FE87", "/cache/D546F092123",
... ] } => 202 Accepted ; Location: /flushes/4123
..or clearing the whole cache:
POST /flushes { "targets": [ "/cache" ] } => 202 Accepted ; Location:
/flushes/4124
On reflection, I think I prefer the POST solution
- Mike
Ebenezer Ikonne wrote:
>> It should be, because PUT is idempotent, while POST is not.
>>
>
> What am I missing here? PUT (should) guarantee idempotency, but does that mean a POST cannot be idempotent? Additionally, whether "other" resources are modified as a result of PUT/POST is not what qualifies the usage of PUT/POST. It's what happens to the resource directly being manipulated that is of interest here.
>
> If using PUT, I would GET the representation of the cache (because it had to have existed) and it would return an empty representation. I personally don't get the "flusher" resource but that's probably not important.
>
> Many ways to address this however.
>
> Eb
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
Stefan Tilkov wrote:
> DELETE /cache is bad because a subsequent GET on /cache will
> likely return 200

Wouldn't that make sense if the cache still exists but is empty?
On Jun 26, 2009, at 1:32 PM, Mike Kelly wrote:
> Stefan Tilkov wrote:
>>
>>
>> DELETE /cache is bad because a subsequent GET on /cache will
>> likely return 200
>>
>
>
> Wouldn't that make sense if the cache still exists but is empty?
That's my point - if this makes sense, I consider the semantics of
DELETE to be violated: The resource isn't deleted, its content is
being replaced with an empty representation. Hence my preference for
PUT.
But admittedly the spec is pretty vague on this ("The DELETE method
requests that the origin server delete the resource identified by the
request-target."). To me, this means that the resource should be GONE,
but YMMV.
Stefan
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
Stefan Tilkov wrote:
>
> That's my point - if this makes sense, I consider the semantics of
> DELETE to be violated: The resource isn't deleted, its content is
> being replaced with an empty representation. Hence my preference for
> PUT.
>
> But admittedly the spec is pretty vague on this ("The DELETE method
> requests that the origin server delete the resource identified by the
> request-target."). To me, this means that the resource should be GONE,
> but YMMV.
>
> Stefan
>
I definitely think the spec leaves room for both (since a resource is
being deleted and not its identifier).
I don't know whether that was intentional or not - but I can't think of
a situation where allowing URIs to persist after DELETE would cause
issues or violate the protocol.
On Fri, Jun 26, 2009 at 1:49 PM, Stefan Tilkov <stefan.tilkov@...> wrote:
>
> On Jun 26, 2009, at 1:32 PM, Mike Kelly wrote:
> > Stefan Tilkov wrote:
> >>
> >> DELETE /cache is bad because a subsequent GET on /cache will
> >> likely return 200
> >
> > Wouldn't that make sense if the cache still exists but is empty?
Yes. If it's a problem for you then recreate the cache when the first
entries come in (but keep an eye out for clients breaking in the
interim when they get 404s).
> That's my point - if this makes sense, I consider the semantics of
> DELETE to be violated: The resource isn't deleted, its content is
> being replaced with an empty representation. Hence my preference for
> PUT.
Another thing to consider is that DELETE makes even more sense if
you're allowed to delete individual entries - why would you want to
have two completely different processes? Sure, if the cache is a blob
you could PUT an empty entity-body but then it makes even more sense
to do a DELETE. Further, some of the POST proposals feel very RPC-like
and require clients and servers to understand rules (which should
arguably be avoided).
> But admittedly the spec is pretty vague on this ("The DELETE method
> requests that the origin server delete the resource identified by the
> request-target."). To me, this means that the resource should be GONE,
> but YMMV.
As Mike just said, you're deleting the *resource*, not the identifier.
The thing that's not clear to me is what the spec wants us to do with
the subordinates (e.g. /cache/123)... I guess that's up to the
implementation (for example, if I DELETE an article I probably also
want to delete its dependencies - images and so on).
In any case I don't follow how you figure that PUT makes any sense in
this context.
Sam
I don't know if that's a correct interpretation, seeing this:
> the server SHOULD NOT indicate success unless, at the time the
> response is given, it intends to delete the resource *or move it to an
> inaccessible location*.
*or move it to an inaccessible location* suggests that the resource can
still exist but without a URI pointing at it. That implies that DELETE
should actually remove the URI.
Mike Kelly wrote:
>
>
> Stefan Tilkov wrote:
> >
> > That's my point - if this makes sense, I consider the semantics of
> > DELETE to be violated: The resource isn't deleted, its content is
> > being replaced with an empty representation. Hence my preference for
> > PUT.
> >
> > But admittedly the spec is pretty vague on this ("The DELETE method
> > requests that the origin server delete the resource identified by the
> > request-target."). To me, this means that the resource should be GONE,
> > but YMMV.
> >
> > Stefan
> >
>
> I definitely think the spec leaves room for both (since a resource is
> being deleted and not its identifier).
>
> I don't know whether that was intentional or not - but I can't think of
> a situation where allowing URIs to persist after DELETE would cause
> issues or violate the protocol.
>
>
António Mota wrote:
> I don't know if that's a correct interpretation, seeing this:
>
> > the server SHOULD NOT indicate success unless, at the time the
> > response is given, it intends to delete the resource *or move it to an
> > inaccessible location*.
>
> *or move it to an inaccessible location* suggests that the resource can
> still exist but without a URI pointing at it. That implies that DELETE
> should actually remove the URI.

I take your point; I'm just not sure that interpretation gains you anything.
Putting it in other words that seems to imply that a GET after a DELETE
should always return 404, not a 200 (providing the DELETE was successful).
António Mota wrote:
> I don't know if that's a correct interpretation, seeing this:
>
>> the server SHOULD NOT indicate success unless, at the time the
>> response is given, it intends to delete the resource *or move it to
>> an inaccessible location*.
>
> *or move it to an inaccessible location* suggests that the resource
> can still exist but without a URI pointing at it. That implies that
> DELETE should actually remove the URI.
>
>
> Mike Kelly wrote:
>>
>>
>> Stefan Tilkov wrote:
>> >
>> > That's my point - if this makes sense, I consider the semantics of
>> > DELETE to be violated: The resource isn't deleted, its content is
>> > being replaced with an empty representation. Hence my preference for
>> > PUT.
>> >
>> > But admittedly the spec is pretty vague on this ("The DELETE method
>> > requests that the origin server delete the resource identified by the
>> > request-target."). To me, this means that the resource should be GONE,
>> > but YMMV.
>> >
>> > Stefan
>>
>>
>>
>
António Mota wrote:
> Putting it in other words that seems to imply that a GET after a
> DELETE should always return 404, not a 200 (providing the DELETE was
> successful).

It definitely doesn't say that in the spec!

Besides, how could that client be aware of whether a given removed URI was not reinstated immediately by another agent somewhere?
These scenarios seem semantically accurate, yeah?
DELETE /cache
HTTP 200 OK
GET /cache
HTTP 410 Gone
PUT /cache
{some cache data}
HTTP 200 OK
or
PUT /cache
{empty body}
HTTP 200 OK
GET /cache
HTTP 200 OK
{empty body}
This seems analogous to CouchDB's REST API for db's and documents?
http://wiki.apache.org/couchdb/HTTP_REST_API
-L
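The two scenarios above can be tried against a toy in-memory server. This is a sketch under stated assumptions: the `Server` class and dict-backed store are made up, and it returns 404 (not 410) after DELETE, since a plain "not found" does not claim the resource is permanently gone.

```python
# Sketch contrasting the two readings debated in this thread:
# DELETE removes the URI->resource mapping; an empty PUT keeps the
# resource alive with an empty representation.

class Server:
    def __init__(self):
        self.resources = {"/cache": {"a": 1}}

    def get(self, uri):
        if uri not in self.resources:
            return 404, None
        return 200, self.resources[uri]

    def put(self, uri, body):
        # "flush" reading: the resource survives, representation replaced
        self.resources[uri] = body
        return 200, self.resources[uri]

    def delete(self, uri):
        # "gone" reading: the mapping from URI to resource is removed
        if uri not in self.resources:
            return 404, None
        del self.resources[uri]
        return 200, None

s = Server()
s.delete("/cache")
assert s.get("/cache") == (404, None)   # GET after DELETE: not found
s.put("/cache", {})
assert s.get("/cache") == (200, {})     # GET after empty PUT: 200, empty body
```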
> That's my point - if this makes sense, I consider the semantics of
> DELETE to be violated: The resource isn't deleted, its content is
> being replaced with an empty representation. Hence my preference for
> PUT.
>
> But admittedly the spec is pretty vague on this ("The DELETE method
> requests that the origin server delete the resource identified by the
> request-target."). To me, this means that the resource should be GONE,
> but YMMV.
On Fri, Jun 26, 2009 at 1:15 AM, Bill Burke <bburke@...> wrote:
> Bill de hOra wrote:
> > LINK suffers from a problem - it magically pops into existence as a
> > header, but without a means to manage the implied relationship.
>
> I'm not understanding what you mean by this.

Ditto - the question was about devising a "means to manage the implied relationship". The designers of HTTP planned to do it with LINK and UNLINK, but I doubt I need to convince any of you that adding verbs should be an absolute last resort (and even then firewalls, proxies, gateways, etc. will fail spectacularly until it's standardised and implemented). Set-Cookie works well for cookies, so it follows that we could do something similar for links (which are something like server-side cookies in this proposal, pointing at alternative representations and/or associating resources).

The original plan was to use Atom independently of cardinality, but there was significant resistance to XML (and love for JSON, the format du jour). Of course now people are talking about supporting XML-based OVF, which somewhat defeats the purpose, but we digress...

My main concerns about Atom in this context (e.g. individual resources rather than collections) are that it's not DRY, in that metadata like atom:id (HTTP's URL) and atom:updated (HTTP's Last-Modified: header) is repeated; that it requires parsing and decoding just to get at the resource itself (ruling out interactions from simple clients like curl/wget), or multiple requests where resources are passed by reference rather than by value; and that it base64-encodes the entity-body, which is a significant performance and efficiency hit... the list goes on. OTOH for collections it's an (almost) perfect fit, short of bundling message/http objects together somehow.

> > I think
> > the architecture simply doesn't support management of resource
> > relationships in this way, and the only means is through hypertext and
> > constrained protocols on top of HTTP.
>
> In some cases I can see that the data-format is inflexible (an old
> company document schema that doesn't support links) and the only way
> you'd be able to transfer relationships is through an envelope (atom) or
> headers (this proposal). I think Atom is great for what it is built for
> (syndication), but I'm not convinced it is a good format for web
> services. The way people are suggesting to use it in a web services
> environment is *way* too much like SOAP. Especially in this case, where
> headers would work just fine.

Agreed, there are plenty of non-hypertext formats that we need to handle for different types of resources (e.g. iCal, vCard, or in this context OVF/OVA). This is obviously what the web's founding fathers had in mind, but HTML was so wildly successful that we haven't needed it (until now). Why on earth would anyone want a wrapper format when it was possible to use native HTTP, especially when you're dealing with potentially enormous files like virtual hard drives and need the raw performance of a clean connection?

Sam
Just a small detail, it should be 404, not 410, because the resource is
not permanently gone.
Luke Crouch wrote:
>
>
> These scenarios seem semantically accurate, yeah?
>
> DELETE /cache
> HTTP 200 OK
> GET /cache
> HTTP 410 Gone
> PUT /cache
> {some cache data}
> HTTP 200 OK
>
> or
>
> PUT /cache
> {empty body}
> HTTP 200 OK
> GET /cache
> HTTP 200 OK
> {empty body}
>
> This seems analogous to CouchDB's REST API for db's and documents?
>
> http://wiki.apache.org/couchdb/HTTP_REST_API
> <http://wiki.apache.org/couchdb/HTTP_REST_API>
>
> -L
>
> > That's my point - if this makes sense, I consider the semantics of
> > DELETE to be violated: The resource isn't deleted, its content is
> > being replaced with an empty representation. Hence my preference for
> > PUT.
> >
> > But admittedly the spec is pretty vague on this ("The DELETE method
> > requests that the origin server delete the resource identified by the
> > request-target."). To me, this means that the resource should be GONE,
> > but YMMV.
>
>
I'm not trying to argue in favour of one interpretation or the other, but to me it feels "strange" to have a DELETE that deletes the resource and not its URI. What is a URI without a representation worth? What does it identify?

Also, if you have in the same application one DELETE that deletes a resource and not the URI, and another DELETE that deletes both, then DELETE isn't uniform any more.

So basically I wouldn't use a DELETE that deletes resources and not their associated URIs; for that I would use PUT <empty>, meaning I'm not actually deleting the resource but simply emptying, or flushing, it. But I'm not saying using it is "wrong"...

Mike Kelly wrote:
> António Mota wrote:
> > Putting it in other words that seems to imply that a GET after a
> > DELETE should always return 404, not a 200 (providing the DELETE was
> > successful).
>
> It definitely doesn't say that in the spec!
>
> Besides, how could that client be aware of whether a given removed URI
> was not reinstated immediately by another agent somewhere?
Well, I need /cache to be the representation of the cache's configuration, so any operation would have to be on /cache/data to do a flush. Since these caches represent things like RDBMS ORM caches (Hibernate) and HTTP session state (Java objects), it doesn't make a lot of sense to use PUT, as you'd never be able to PUT a non-empty body.

So I think I'd prefer:

GET/PUT on /cache to modify cache configuration

DELETE /cache/data to flush the cache, returning 204 when the cache is empty

GET /cache/data could return a picture of the cache where it made sense.

Finally, DELETE is much more intuitive and simple. I'd much prefer a simple interface that's easy to describe over something that is "pure". Since everybody's definition of "pure" seems to be different, who's to say mine isn't...

FYI, this was an awesome discussion :) I really appreciate the thought exercise and hearing everybody's opinions. I hope to get some blog or article together describing a few exercises we went through creating our management interface.

António Mota wrote:
> Just out of curiosity, what was your conclusion? What method/uri did you
> finally use?
>
> Bill Burke wrote:
> > FYI this was a management interface for a JBoss specific cache set up by
> > the user, not an HTTP cache.
> >
> > Bill de hOra wrote:
> > >
> > > Hey Bill,
> > >
> > > squid supports PURGE
> > >
> > > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a>
> > >
> > > Bill
> > >
> > > Bill Burke wrote:
> > > >
> > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few of my
> > > > colleagues. An interesting question came up:
> > > >
> > > > Let's say you have a distributed cache you want to manage through a
> > > > RESTful interface. One operation on the cache is clearing or flushing
> > > > it. The interesting thing about flushing is that the act of flushing
> > > > changes the state of the cache, but "flushing" isn't a state of the
> > > > cache itself. It seems to be a pure operation. How do you model
> > > > something like this in REST? Is it correct to do:
> > > >
> > > > PUT /cache/flusher (PUT because flushing is idempotent)
> > > >
> > > > Or maybe even better:
> > > >
> > > > GET /cache
> > > >
> > > > returns a document like
> > > >
> > > > <cache>
> > > > <link rel="FLUSH" href="/cache/flusher"/>
> > > > </cache>
> > > >
> > > > Or maybe this is better:
> > > >
> > > > DELETE /cache/data
> > > >
> > > > Maybe I just answered my own question :)
> > > >
> > > > Thanks for listening,
> > > >
> > > > Bill

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
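Bill's final interface (GET/PUT on /cache for configuration, DELETE on /cache/data for the flush) can be sketched as a small resource class. The class name, config fields, and return shapes here are illustrative assumptions; only the URI layout and the 204-on-flush behaviour come from his conclusion.

```python
# Sketch of the settled design: /cache is configuration, /cache/data is
# the flushable contents.

class CacheResource:
    def __init__(self):
        self.config = {"max_entries": 1000}   # hypothetical config field
        self.data = {"key1": "v1"}

    # GET /cache -> current cache configuration
    def get_cache(self):
        return 200, self.config

    # PUT /cache -> replace cache configuration
    def put_cache(self, new_config):
        self.config = new_config
        return 200, self.config

    # DELETE /cache/data -> flush; 204 No Content
    def delete_data(self):
        self.data.clear()
        return 204, None

c = CacheResource()
assert c.delete_data() == (204, None)
assert c.data == {}
assert c.get_cache() == (200, {"max_entries": 1000})  # config survives a flush
```

Keeping configuration and contents at separate URIs is what makes DELETE unambiguous here: it deletes the data resource, not the cache's identity or settings.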
Sam Johnston <samj@...> writes:
> On Fri, Jun 26, 2009 at 1:49 PM, Stefan Tilkov <stefan.tilkov@...> wrote:
> > On Jun 26, 2009, at 1:32 PM, Mike Kelly wrote:
> > > Stefan Tilkov wrote:
> > > > DELETE /cache is bad because a subsequent GET on /cache will
> > > > likely return 200
> > >
> > > Wouldn't that make sense if the cache still exists but is empty?
>
> Yes. If it's a problem for you then recreate the cache when the first
> entries come in (but keep an eye out for clients breaking in the
> interim when they get 404s).

If I receive a DELETE request, I will assume that the requester wants the specified resource to go away. But in this case, I don't think you want to do away with the cache. Instead you simply want to reset its state.

The identifier /cache points to a compound resource which includes resources for individual cache entries. Affecting the state of the compound resource should affect the member resources' state as well. So, having PUT /cache with an empty representation delete the member cache entry resources is entirely reasonable.

> In any case I don't follow how you figure that PUT makes any sense in
> this context.
>
> Sam

IMO, using a DELETE to reset the resource's state seems overkill and wholly unnecessary. It resets the state in a round-about way. Also, sometimes you want to keep some of the resource's internal state because its usefulness extends beyond the flushing of the cache. For example, it could have been used to optimise performance (e.g. a frequency table so the cache puts popular keys on faster media).

The PUT approach won't have such a problem because:
- these attributes are internal (not in the exported representation)
- outside agents can't directly affect them, since they won't know of the attributes' existence
- they are associated with the resource; the resource is not deleted and/or replaced with some other resource, so they remain.

YS.
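YS's point about internal state can be made concrete with a short sketch. The `Cache` class and its frequency table are hypothetical: an empty PUT replaces the exported representation, while internal attributes, which are never part of that representation, survive the flush.

```python
# Sketch: PUT with an empty representation clears the exported entries;
# internal state (a hypothetical frequency table) is untouched because
# it is not part of the representation being replaced.

class Cache:
    def __init__(self):
        self.entries = {}        # the exported representation
        self.frequency = {}      # internal state, never exported

    def get_entry(self, key):
        # track popularity so hot keys could be kept on faster media
        self.frequency[key] = self.frequency.get(key, 0) + 1
        return self.entries.get(key)

    def put_representation(self, entries):
        """PUT /cache: replace the representation; internals untouched."""
        self.entries = dict(entries)

c = Cache()
c.entries = {"a": 1}
c.get_entry("a")
c.put_representation({})          # flush via empty PUT
assert c.entries == {}
assert c.frequency == {"a": 1}    # usage statistics survive the flush
```

A DELETE that destroys the whole resource would take the frequency table with it, which is exactly the loss YS is arguing against.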
Netflix has a couple of searches: - http://api.netflix.com/catalog/titles - search movies - http://api.netflix.com/catalog/people - search for people Each returns a list of relevant material with links to movies or people. Once you change your application state to view people or movies, you can see other related movies, people and other fun stuff through URLs, but you can't get back to search. What suggestions would you have to model the relationships between movies/people and search? -Solomon
> I'm not trying to argue in favour of one interpretation or the other, > but for me feels "strange" a DELETE that deletes the resource and not > it's URI. What is worth a URI without representation? What does it > identifies? It may feel "strange" like pineapple slices as a pizza topping does to me, but it's not wrong. The URI is the "pointer" to the resource, not the resource itself. Issuing a DELETE should remove the resource (technically not the URI) and a GET to the resource using that URI should return a 404 (although I could see a use case for 410). Having said all this, I wouldn't use DELETE. Eb
Sam Johnston wrote: > Agreed, there are plenty of non-hypertext formats that we need to handle > for different types of resources (e.g. iCal, vCard, or in this context > OVF/OVA). This is obviously what the web's founding fathers had in mind > but HTML was so wildly successful that we haven't needed it (until now). > Why on earth would anyone want a wrapper format when it was possible to > use native HTTP, especially when you're dealing with potentially > enormous files like virtual hard drives and need the raw performance of > a clean connection? > > Sam > It's not just about raw performance though. There's also a barrier-to-entry problem very similar to SOAP and CORBA. Before I heard about Atom, one of the things that attracted me to REST was its low barriers to entry for creating interoperable services. There were very few steps for me to be able to communicate and work with Java, PHP, Perl, Python, and Ruby based applications. I just needed an HTTP library. Beyond HTTP itself there was nothing extra I needed to assemble or put together to interact with a service. I'd just be exchanging the information I was interested in. Get me? It was simple. I already started rethinking pub/sub and p2p by writing a one-to-one facade over JMS a year ago: http://bill.burkecentral.com/2008/06/16/resteasy-mom-an-exercise-in-jax-rs-restful-ws-design/ The solution is totally format agnostic. I want to go down the road of reworking the solution as JMS can be too session-oriented. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I don't know if you've been following all the thread, but the question here was using a DELETE in a way that a subsequent GET would return a 200. Ebenezer Ikonne wrote: > > > > I'm not trying to argue in favour of one interpretation or the other, > > but for me feels "strange" a DELETE that deletes the resource and not > > it's URI. What is worth a URI without representation? What does it > > identifies? > > It may feel "strange" like pineapple slices as a pizza topping does to > me, but it's not wrong. > > The URI is the "pointer" to the resource, not the resource itself. > Issuing a DELETE should remove the resource (technically not the URI) > and a GET to the resource using that URI should return a 404 (although > I could see a use case for 410). > > Having said all this, I wouldn't use DELETE. > > Eb > >
Correcting some errors in my post: Sam Johnston <samj@...> writes: > On Fri, Jun 26, 2009 at 1:49 PM, Stefan Tilkov <stefan.tilkov@...> wrote: >> >> On Jun 26, 2009, at 1:32 PM, Mike Kelly wrote: >> > Stefan Tilkov wrote: >> >> >> >> DELETE /cache is bad because a subsequent GET on /cache will >> >> likely return 200 >> > >> > Wouldn't that make sense if the cache still exists but is empty? > > Yes. If it's a problem for you then recreate the cache when the first > entries come in (but keep an eye out for clients breaking in the > interim when they get 404s). If I receive a DELETE request, I will assume that the requester wants the specified resource to go away. But in this case, I don't think you want to do away with the cache. Instead you simply want to reset its state. > In any case I don't follow how you figure that PUT makes any sense in > this context. > > Sam The identifier /cache points to a compound resource which includes resources for individual cache entries. Affecting the state of the compound resource should affect the state of the member resources as well. So, having PUT /cache with an empty representation delete the member cache entry resources is entirely reasonable. IMO, using a DELETE to reset the resource's state seems overkill and wholly unnecessary. It resets the state in a roundabout way. Also, sometimes you want to keep some of the resource's internal attributes because their usefulness extends beyond the flushing of the cache. For example, they could have been used to optimise performance (e.g. a frequency table so it could put popular keys on faster media). The PUT approach won't have a problem in preserving them: - these attributes are internal (not in the exported representation) - outside agents can't directly affect them, since they won't know of the attributes' existence - they are associated with the resource - the resource is not deleted and/or replaced with some other resource; thus, they remain. YS.
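YS's argument can be illustrated with a toy model (all names here are illustrative, not any real cache API): a PUT of an empty representation to /cache replaces the exported member entries, while internal attributes such as a frequency table survive precisely because they were never part of the representation.

```python
# Illustrative sketch of YS's point; class and method names are assumptions.
class Cache:
    def __init__(self):
        self.entries = {}      # exported state: the member cache-entry resources
        self.frequency = {}    # internal attribute: not in the representation

    def get(self, key):
        # Track popularity so hot keys could be moved to faster media.
        self.frequency[key] = self.frequency.get(key, 0) + 1
        return self.entries.get(key)

    def put_representation(self, entries):
        """PUT /cache: replace the exported state with the given entries.
        An empty dict flushes the cache; self.frequency is untouched."""
        self.entries = dict(entries)
```

Flushing via `put_representation({})` empties the entries but leaves the frequency table intact, whereas deleting and recreating the resource would discard it.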
António Mota wrote: > Also, if you have in the same application a DELETE that deletes a > resource and not the URI and other DELETE that deletes both, then DELETE > isn't uniform any more. > I disagree with that, because I don't think it's actually violating the DELETE specification. If you have self-descriptive messages and are leveraging HATEOAS there should be no issue mixing both approaches in one system.
I did not miss that. I'm just saying that I don't believe a DELETE "deletes" a URI as one could infer from your original statement. 2009/6/26 António Mota <amsmota@...> > I don't know if you've been following all the thread, but the question here > was using > a DELETE in a way that a subsequent GET would return a 200. > > > Ebenezer Ikonne wrote: > >> >> >> > I'm not trying to argue in favour of one interpretation or the other, >> > but for me feels "strange" a DELETE that deletes the resource and not >> > it's URI. What is worth a URI without representation? What does it >> > identifies? >> >> It may feel "strange" like pineapple slices as a pizza topping does to me, >> but it's not wrong. >> >> The URI is the "pointer" to the resource, not the resource itself. Issuing >> a DELETE should remove the resource (technically not the URI) and a GET to >> the resource using that URI should return a 404 (although I could see a use >> case for 410). >> >> Having said all this, I wouldn't use DELETE. >> >> Eb >> >> >> > >
Noah Slater wrote: > > > > On Fri, Jun 26, 2009 at 01:31:14AM +0100, Bill de hOra wrote: > > It's like atom:id - you must have one in the format, but how to > create one is > > undefined. Atom's format only considered the read/syndication > usecase. That > > was awkward when it came to specifying AtomPub. LINK is similar - how > a LINK > > relationship is created/managed/destroyed is undefined. > > I don't understand this analogy. It's about managing the state of resources - with Atom you have to have an id for the entry before the entry resource is created (and that was a bootstrap problem for the people who worked on AtomPub). With LINK you have to have the relationship created, and it'll have to be done out of band. > I have only lightly followed this thread, so I apologise if I am > covering old > ground. I don't understand why this ID should cover the manipulation of the > header value any more than any of the other common HTTP headers have > values that > can be manipulated with HTTP itself. What is the problem with saying > that any > LINK headers should be managed at the server's discretion? Because the thread is about how to manage them without hypertext, as per the title. > I can understand why others might > want to do > it over HTTP, but the server is free to expose some hypertext system for > that. That's the point - I'm saying it'll have to. Bill
Well, what I was saying is that the spec says "to delete the resource *or move it to an inaccessible location* ", so the resource is not necessarily deleted. Now I think a URI doesn't exist "per se" (or existing "per se" it points to nowhere, identifies nothing, for me is the same as "don't exist" in the sense it doesn't have a representation) so when I said "delete the URI" I should have said "disassociate the URI from the resource" to be accurate. Ebenezer Ikonne wrote: > > > I did not miss that. I'm just saying that I don't believe a DELETE > "deletes" a URI as one could infer from your original statement. > > > 2009/6/26 António Mota <amsmota@... <mailto:amsmota@...>> > > I don't know if you've been following all the thread, but the > question here was using > a DELETE in a way that a subsequent GET would return a 200. > > > Ebenezer Ikonne wrote: > > > > > I'm not trying to argue in favour of one interpretation or > the other, > > but for me feels "strange" a DELETE that deletes the > resource and not > > it's URI. What is worth a URI without representation? What > does it > > identifies? > > It may feel "strange" like pineapple slices as a pizza topping > does to me, but it's not wrong. > > The URI is the "pointer" to the resource, not the resource > itself. Issuing a DELETE should remove the resource > (technically not the URI) and a GET to the resource using that > URI should return a 404 (although I could see a use case for 410). > > Having said all this, I wouldn't use DELETE. > > Eb > > > > >
Yep. So let's all concur that DELETE removes the resource (whatever that resource is), leaves the URI alone and subsequent GETs to that resource should "fail" i.e. return 404/410 status code because the resource should be not accessible i.e. a 2xx status code should not be returned? With this assumption, if you DELETE the /cache a new /cache would need to be created (via PUT) before it could be populated (via subsequent PUTs). (There are other ways to handle this also). I'm probably asking for too much here :), but this really doesn't seem that complicated. I may be missing something very obvious though. 2009/6/26 António Mota <amsmota@...> > Well, what I was saying is that the spec says "to delete the resource *or > move it to an inaccessible location* ", so the resource is not necessarily > deleted. Now I think a URI doesn't exist "per se" (or existing "per se" it > points to nowhere, identifies nothing, for me is the same as "don't exist" > in the sense it doesn't have a representation) so when I said "delete the > URI" I should have said "disassociate the URI from the resource" to be > accurate. > > Ebenezer Ikonne wrote: > >> >> >> I did not miss that. I'm just saying that I don't believe a DELETE >> "deletes" a URI as one could infer from your original statement. >> >> >> 2009/6/26 António Mota <amsmota@gmail.com <mailto:amsmota@...>> >> >> I don't know if you've been following all the thread, but the >> question here was using >> a DELETE in a way that a subsequent GET would return a 200. >> >> >> Ebenezer Ikonne wrote: >> >> >> >> > I'm not trying to argue in favour of one interpretation or >> the other, >> > but for me feels "strange" a DELETE that deletes the >> resource and not >> > it's URI. What is worth a URI without representation? What >> does it >> > identifies? >> >> It may feel "strange" like pineapple slices as a pizza topping >> does to me, but it's not wrong. >> >> The URI is the "pointer" to the resource, not the resource >> itself. 
Issuing a DELETE should remove the resource >> (technically not the URI) and a GET to the resource using that >> URI should return a 404 (although I could see a use case for 410). >> >> Having said all this, I wouldn't use DELETE. >> >> Eb >> >> >> >> >> >> > >
Let me rephrase: Having in the same app a situation where one DELETE causes a subsequent GET to return a 200 and another DELETE causes a subsequent GET to return a 404 breaks the uniform-interface constraint. Because we are talking about two different operations here, the first DELETE isn't really deleting a resource, it's modifying its state, so it's more of a hack. But hacks happen, I use them lots of times... Mike Kelly wrote: > António Mota wrote: > >> Also, if you have in the same application a DELETE that deletes a >> resource and not the URI and other DELETE that deletes both, then >> DELETE isn't uniform any more. >> > > I disagree with that, because I don't think it's actually violating > the DELETE specification. > > If you have self-descriptive messages and are leveraging HATEOAS there > should be no issue mixing both approaches in one system. > >
Bill, As a matter of interest, are the individual entries available (e.g. at /cache/123 or /cache/data/123)? I don't really grok/like this /cache/data idea - /cache is the resource and cache entries should be subordinates (which you can retrieve, delete, etc. individually). Say this "cache" is a queue (for a real world example of a similar problem), and you can create as many queues as you want (e.g. /queue/123)... there is now a need to be able to permanently delete the queue such that attempting to submit messages fails. If we've already used DELETE for flushing then we're in trouble. Plus, if there's parameters (e.g. capacity) then recreation after a DELETE should reset those. In that case "curl -X POST http://example.com/queue/123/_flush" seems appropriate to me. FWIW I like the way CouchDB handles compaction<http://wiki.apache.org/couchdb/Compaction>: http://example.com/my_db/_compact. In fact I like the "_" syntax which keeps the verbs out of the main namespace without having to bundle them up in a directory (Sun Cloud API<http://kenai.com/projects/suncloudapis/pages/HelloCloud>puts its verbs in under "ops" for example). Often you also need to supply one or more parameters too (for example, is that "shutdown" a clean OS shutdown, an ACPI off or a cable pull and that disk "resize" is how many Gb?) - in that case it's easy enough to pass them in HTML form encoded. So how's all that grab you? Sam (who still doesn't like empty PUTs, unless the cache is a single blob) 2009/6/26 Bill Burke <bburke@...> > > > Well, I need /cache to be the representation of the cache's > configuration, so any operation would have to be on /cache/data to do a > flush. > > Since these caches represent things like RDMS ORM caches (Hibernate), > HTTP Session state (Java objects) it doesn't make a lot of sense to use > PUT as you'd never be able to PUT a non-empty body. 
> > So I think I'd prefer > > GET/PUT on /cache to modify cache configuration > DELETE /cache/data to flush the cache, returning 204 when the cache is > empty > GET /cache/data could return a picture of the cache where it made sense. > > Finally, DELETE is much more intuitive and simple. I'd much prefer a > simple interface that's easy to describe over something that is "pure". > Since everybody's definition of "pure" seems to be different, who's > to say mine isn't... > > FYI, this was an awesome discussion :) I really appreciate the thought > exercise and hearing everybody's opinion. I hope to get some blog or > article together describing a few exercises we went through creating our > management interface. > > > António Mota wrote: > > > > > > > > Just out of curiosity, what was your conclusion? What method/uri did you > > finally use? > > > > Bill Burke wrote: > > > > > > > > > FYI this was a management interface for a JBoss specific cache set up > by > > > the user, not an HTTP cache. 
> > > > > > Bill de hOra wrote: > > > > Hey Bill, > > > > squid supports PURGE > > > > <http://wiki.squid-cache.org/SquidFaq/SquidLogs#head-b68908c93520751aedc2311c245694476978681a> > > > > Bill > > > > Bill Burke wrote: > > > > > Yesterday, in a meeting at JBoss, I was evangelizing REST to a few > > > of my > > > > > colleagues. An interesting question came up: > > > > > Let's say you have a distributed cache you want to manage through a > > > > > RESTful interface. One operation on the cache is clearing or flushing > > > > > it. The interesting thing about flushing is that the act of flushing > > > > > changes the state of the cache, but "flushing" isn't a state of the > > > > > cache itself. It seems to be a pure operation. How do you model > > > > > something like this in REST? 
Is it correct to do: > > > > > > > > > > PUT /cache/flusher (PUT because flushing is idempotent) > > > > > > > > > > Or maybe even better: > > > > > > > > > > GET /cache > > > > > > > > > > returns a document like > > > > > > > > > > <cache> > > > > > <link rel="FLUSH" href="/cache/flusher"/> > > > > > </cache> > > > > > > > > > > Or maybe this is better: > > > > > > > > > > DELETE /cache/data > > > > > > > > > > Maybe I just answered my own question :) > > > > > > > > > > Thanks for listening, > > > > > > > > > > Bill > > > > > -- > > > > > Bill Burke > > > > > JBoss, a division of Red Hat > > > > > http://bill.burkecentral.com > > > -- > > > Bill Burke > > > JBoss, a division of Red Hat > > > http://bill.burkecentral.com > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > >
Sam Johnston wrote: > > > > Bill, > > As a matter of interest, are the individual entries available (e.g. at > /cache/123 or /cache/data/123)? I don't really grok/like this > /cache/data idea - /cache is the resource and cache entries should be > subordinates (which you can retrieve, delete, etc. individually). > > Say this "cache" is a queue (for a real world example of a similar > problem), and you can create as many queues as you want (e.g. > /queue/123)... there is now a need to be able to permanently delete the > queue such that attempting to submit messages fails. If we've already > used DELETE for flushing then we're in trouble. Plus, if there's > parameters (e.g. capacity) then recreation after a DELETE should reset > those. In that case "curl -X POST http://example.com/queue/123/_flush > <http://example.com/queue/123/_flush>" seems appropriate to me. > > <http://wiki.apache.org/couchdb/Compaction>: > http://example.com/my_db/_compact <http://example.com/my_db/_compact>. I see what you're saying. There's a possibility of needing to overload DELETE, so don't use DELETE. So would it be? PUT /cache/mycache/root/_flush PUT /cache/mycache/root/123/_flush PUT /cache/mycache/root/123/456/_flush (I prefer PUT over POST when the operation is idempotent :) ) Another operation might be evict, which is different from flush (which is a clear), so PUT /cache/mycache/root/_evict PUT /cache/mycache/root/123/_evict DELETE should still probably be allowed for individual leaf entries off of root, as DELETE removes the entry while _flush leaves the entry but just clears it. Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Fri, Jun 26, 2009 at 8:11 PM, Bill Burke <bburke@...> wrote: > > I see what you're saying. There's a possibility of having the need to > overload DELETE, so don't use DELETE. > Exactly (which is a shift from my earlier position after considering the problem more carefully and generically, knowing that OCCI will likely end up looking at such things before long). > So would it be? > > PUT /cache/mycache/root/_flush > PUT /cache/mycache/root/123/_flush > PUT /cache/mycache/root/123/456/_flush > > (I prefer PUT over POST when the operation is idempotent :) ) > I still don't see that PUT is appropriate here, idempotent or not. Quoting the RFC: "*The PUT method requests that the enclosed entity be stored under the supplied Request-URI.*" This (and verbs in general) sounds more like a "data-handling process" to me, so POST. Note also that some of your verbs are going to require parameters - perhaps you'll want something like scrub = none|zero|random for example. Another operation might be evict, which is different than flush, which is a > clear so > > PUT /cache/mycache/root/_evict > PUT /cache/mycache/root/123/_evict > Right, this approach allows for multiple/many verbs (though perhaps parametrising/overloading the "flush" verb makes more sense depending on the semantics of "evict" - passing in the cache replacement policy for example). > DELETE should still probably be allowed for individual leaf entries off of > root as DELETE removes the entry while _flush leaves the entry but just > clears it. > Exactly. Interested to hear what others think about the underscore syntax for verbs ala CouchDB... seems simple and elegant to me, provided your keys don't start with "_" :) Sam -- Sam Johnston http://samj.net/
On Fri, Jun 26, 2009 at 10:34 AM, Sam Johnston <samj@...> wrote: > > FWIW I like the way CouchDB handles compaction: http://example.com/my_db/_compact. In fact I like the "_" syntax which > keeps the verbs out of the main namespace without having to bundle them up in a directory (Sun Cloud API puts its verbs in > under "ops" for example). Often you also need to supply one or more parameters too (for example, is that "shutdown" a > clean OS shutdown, an ACPI off or a cable pull and that disk "resize" is how many Gb?) - in that case it's easy enough to > pass them in HTML form encoded. > In our current implementation of the Sun Cloud API, the verbs do creep back into the URIs on stuff like this, but we happen to use a request parameter for it :-). Of course, the client is never supposed to be examining these URIs anyway, so it's just an implementation detail. Regarding parameters to an operation, we support an open-ended application/json hash in the request body with conventions around some field names (a "note" field should be a message intended for a log file, if it is present). If we were built around HTML forms, that would certainly work just as well, but we're all JSON as a matter of expedience. Regarding the verb, we were originally using PUT for these operations, but got some fairly strong feedback (including some from this group) that POST was more appropriate because of the semantics of what the operations did ... they really did start some process on the server, and the impacts on the representation of what we POST to are *side effects* that have no immediate relationship to the content of the request body. So, POST feels a lot better. If this were my cache, I'd probably make the same sort of design decision here, and use a POST for a non-CRUD operation like "clear the cache", plus let it have some options for things like "clear all entries older than xxx minutes" or "clear all entries with the following ids". 
Using a PUT with an empty body feels like trying to turn a screw into a nail because I happen to have a hammer handy. Craig
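The open-ended parameter hash Craig describes might look like the following sketch. The "note" field convention is from his post; the other field name is purely an assumption for illustration.

```python
import json

# Hypothetical request body for a parametrised, non-CRUD POST such as
# "clear the cache": an open-ended JSON hash with a conventional "note"
# field. The "older_than_minutes" name is an assumption, not a real API.
body = json.dumps({
    "note": "nightly maintenance flush",  # message intended for a log file
    "older_than_minutes": 60,             # "clear all entries older than xxx minutes"
})
parsed = json.loads(body)
```

The client would POST this body to the operation URI it discovered from a link, leaving the server free to ignore fields it does not understand.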
> If this were my cache, I'd probably make the same sort of design > decision here, and use a POST for a non-CRUD operation like "clear the > cache", plus let it have some options for things like "clear all > entries older than xxx minutes" or "clear all entries with the > following ids". Using a PUT with an empty body feels like trying to > turn a screw into a nail because I happen to have a hammer handy. > > Craig > I don't necessarily agree with the assessment on using PUT, but I agree with the notion that this ultimately should be driven by the potential changes (and side effects) that could happen to the resource(s). If it boils down to simply adding/removing entries then PUT could handle this quite nicely (I think), as we'd simply be modifying the resource via representations. If there are more creative things that can occur, then POST may be required and make more sense. I just don't see DELETE as much of an option unless the intent is to get rid of the cache (and not only its contents) entirely.
I was thinking about implementing a RESTful topic (as opposed to queue).
In Java JMS land a subscriber can make himself persistent with a
topic. This means that the topic keeps a placeholder for the subscriber
so that when the subscriber asks for a message, it gets the next message
after the last one it read. The problem with this of course is that the
server must maintain a session for each subscriber.
So, to flip the problem around, how about a subscriber doing the
book-keeping itself? The topic would remember all of its messages and
the order in which they were published. The subscriber would just tell
the topic the index it wants.
Which brings me to my thought. What about using conditional gets to
implement this?
GET /topic would return one posted message:
200
Content-Type: ...
ETag: 34234234
Last-Modified: Fri, 26 Jun 2009 ...
<message-body>
The ETag and/or Last-Modified header would be the index into the topic.
A non-conditional GET would return the latest posted message. A
conditional GET would return a message that was posted *right after* the
ETAG/Last-Modified combo. If there are no new messages in the topic,
the conditional GET would return 304, NOT MODIFIED.
An interesting side effect of this is that a Cache-Control could
control (suggest) how often the subscribers should poll for new messages.
Like if somebody conditionally gets an older index, the server would
return no-cache as there are newer messages in the topic. A server
could keep an average rate of publishing and use that average time in a
max-age value.
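A minimal model of this index-as-ETag lookup (function and variable names are illustrative, not any real API): a plain GET returns the latest posted message, while a conditional GET returns the first message published right after the given index, or 304 Not Modified if nothing newer exists.

```python
# Sketch of the server-side lookup for the conditional-GET topic scheme.
def next_message(messages, etag=None):
    """messages: list of (index, body) pairs in publish order.
    Returns an (http_status, payload) pair."""
    if etag is None:                     # non-conditional GET: latest message
        return (200, messages[-1]) if messages else (304, None)
    for index, body in messages:
        if index > etag:                 # posted *right after* the ETag index
            return (200, (index, body))
    return (304, None)                   # no newer messages: 304 Not Modified
```

A real server would also emit Cache-Control based on the publish rate, as described above, so subscribers polling an old index get no-cache while up-to-date ones get a max-age hint.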
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Jun 26, 2009, at 5:16 PM, Ebenezer Ikonne wrote:
<snip/>
>
> I'm probably asking for too much here :), but this really doesn't
> seem that complicated. I may be missing something very obvious
> though.
>
>
Not to say that you are in fact missing something, but what is important is not the
agreement on the method to use to flush the cache but agreement on the
link semantics that tell the client where to PUT/POST/DELETE for the
cache to be flushed.
Along these lines:
GET /cache
200 Ok
Content-Type: application/cacheinfo+xml
<cacheinfo>
<software>Squid 3.0</software>
<cache>
<total>262</total>
<stale>3</stale>
<completelist href="/cache/all"/>
<top10>
<!-- links to top10 requests to cache -->
</top10>
</cache>
</cacheinfo>
Now, if the specification of application/cacheinfo+xml defines that a
DELETE on the href attribute of the <completelist> element will flush
the cache then one can write a true hypermedia driven client that can
flush the cache when given a cacheinfo document. If the media type
spec prefers to define that a PUT <empty/> does the flush, then that
is equally fine.
Jan
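Jan's hypermedia-driven client could be sketched as follows: it knows only the (hypothetical) application/cacheinfo+xml media type, which here is assumed to define that a DELETE on the href of the <completelist> element flushes the cache. The URI is discovered from the document, never hard-coded.

```python
import xml.etree.ElementTree as ET

# Sample cacheinfo document, taken from Jan's example above.
CACHEINFO = """<cacheinfo>
  <software>Squid 3.0</software>
  <cache>
    <total>262</total>
    <stale>3</stale>
    <completelist href="/cache/all"/>
  </cache>
</cacheinfo>"""

def flush_request(doc):
    """Return the (method, uri) pair the client would issue to flush,
    per the assumed media-type rule: DELETE on <completelist>'s href."""
    href = ET.fromstring(doc).find("./cache/completelist").attrib["href"]
    return ("DELETE", href)
```

If the media-type spec instead said PUT of <empty/> does the flush, only this one function would change; the client's knowledge stays in the media type, not in the URI structure.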
Sorry, somehow some words did not make it. The first para should have read: > Not to say that you are in fact missing something, but what is > important is not the > agreement on the method to use to flush the cache but agreement on the > link semantics that tell the client where to PUT/POST/DELETE for the > cache to be flushed. Jan
On Fri, Jun 26, 2009 at 8:32 PM, Craig McClanahan <craigmcc@...>wrote: In our current implementation of the Sun Cloud API, the verbs do creep > back in to the URIs on stuff like this, but we happen to use a request > parameter for it :-). Of course, the client is never supposed to be > examining these URIs anyway, so it's just an implementation detail. > Agreed, there should be no rules and URLs should be opaque. I use link relations to indicate what the link does and the title attribute for a human readable version, something like this: <link href="http://example.com/compute/123/_restart" title="Restart" rel=" http://purl.org/occi/state#restart" /> > Regarding parameters to an operation, we support an open ended > application/json hash in the request body with conventions around some > field names (a "note" field should be a message intended for a log > file, if it is present). If we were built around HTML forms, that > would certainly work just as well, but we're all JSON as a matter of > expedience. > The beauty of using HTML forms is that clients have built in support for it. The sort of thing we want to be able to do is have sysadmin scripts, cron jobs and tyre kickers interacting with the API ala: curl -F type=cold http://example.com/compute/123/_reboot > Regarding the verb, we were originally using PUT for these operations, > but got some fairly strong feedback (including some from this group) > that POST was more appropriate because of the semantics of what the > operations did ... they really did start some process on the server, > and the impacts on the representation of what we POST to are *side > effects* that have no immediate relationship to the content of the > request body. So, POST feels a lot better. 
> +1 > If this were my cache, I'd probably make the same sort of design > decision here, and use a POST for a non-CRUD operation like "clear the > cache", plus let it have some options for things like "clear all > entries older than xxx minutes" or "clear all entries with the > following ids". Using a PUT with an empty body feels like trying to > turn a screw into a nail because I happen to have a hammer handy. > +1. The HTTP specs are very clear on what PUT is to be used for - one heavily overloaded verb (POST) is more than enough. Sam
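Sam's rel-based linking keeps URLs opaque: the client selects a link by its relation type and never parses the href. A minimal sketch (the restart link is from his example; the stop link is invented here for illustration):

```python
# Links as a client might extract them from a representation.
LINKS = [
    {"href": "http://example.com/compute/123/_restart",
     "title": "Restart",
     "rel": "http://purl.org/occi/state#restart"},
    # hypothetical second link, not from the original post
    {"href": "http://example.com/compute/123/_stop",
     "title": "Stop",
     "rel": "http://purl.org/occi/state#stop"},
]

def link_for(links, rel):
    """Select a link by relation type; the href stays opaque."""
    return next((l["href"] for l in links if l["rel"] == rel), None)

print(link_for(LINKS, "http://purl.org/occi/state#restart"))
```

The `_restart` suffix in the URL is then just an implementation detail, exactly as Craig describes.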
Fair point. I believe however we were dealing with "/cache". "/cache/all" obviously models the problem space a little differently and using DELETE and/or PUT can end up flushing the cache (same application end result) but alter the resource differently i.e. in one case a GET returns 4xx while the other returns a 200 with an empty body. My thoughts anyway. Good discussion. On Fri, Jun 26, 2009 at 7:19 PM, Jan Algermissen <algermissen1971@mac.com>wrote: > > On Jun 26, 2009, at 5:16 PM, Ebenezer Ikonne wrote: > > <snip/> > > >> I'm probably asking for too much here :), but this really doesn't seem >> that complicated. I may be missing something very obvious though. >> >> >> > Not to say that you are in fact missing something, but what is not the > agreement on the method to use to flush the cache but agreement on the link > semantics that tell the client where to PUT/POST/DELETE for the cache to be > flushed. > > Along these lines: > > GET /cache > > 200 Ok > Content-Type: application/cacheinfo+xml > > <cacheinfo> > <software>Squid 3.0</software> > <cache> > <total>262</total> > <stale>3</stale> > <completelist href="/cache/all"/> > <top10> > <!-- links to top10 requests to cache --> > </top10> > </cache> > </cacheinfo> > > Now, if the specification of application/cacheinfo+xml defines that a > DELETE on the href attribute of the <completelist> element will flush the > cache then one can write a true hypermedia driven client that can flush the > cache when given a cacheinfo document. If the media type spec prefers to > define that a PUT <empty/> does the flush, then that is equally fine. > > > Jan > > > > > > > > 2009/6/26 António Mota <amsmota@...> >> Well, what I was saying is that the spec says "to delete the resource *or >> move it to an inaccessible location* ", so the resource is not necessarily >> deleted. 
Now I think a URI doesn't exist "per se" (or existing "per se" it >> points to nowhere, identifies nothing, for me is the same as "don't exist" >> in the sense it doesn't have a representation) so when I said "delete the >> URI" I should have said "disassociate the URI from the resource" to be >> accurate. >> >> Ebenezer Ikonne wrote: >> >> >> I did not miss that. I'm just saying that I don't believe a DELETE >> "deletes" a URI as one could infer from your original statement. >> >> >> 2009/6/26 António Mota <amsmota@... <mailto:amsmota@...>> >> >> >> I don't know if you've been following all the thread, but the >> question here was using >> a DELETE in a way that a subsequent GET would return a 200. >> >> >> Ebenezer Ikonne wrote: >> >> >> >> > I'm not trying to argue in favour of one interpretation or >> the other, >> > but for me feels "strange" a DELETE that deletes the >> resource and not >> > it's URI. What is worth a URI without representation? What >> does it >> > identifies? >> >> It may feel "strange" like pineapple slices as a pizza topping >> does to me, but it's not wrong. >> >> The URI is the "pointer" to the resource, not the resource >> itself. Issuing a DELETE should remove the resource >> (technically not the URI) and a GET to the resource using that >> URI should return a 404 (although I could see a use case for 410). >> >> Having said all this, I wouldn't use DELETE. >> >> Eb >> >> >> >> >> >> >> >> >> >> >> > >
Bill, This is certainly something that should be tracked on the client side, though I'd be very wary of trying to overload the existing caching mechanisms because their behaviour is both well defined and well known. The first thing that came to my mind when I saw this was GData queries <http://code.google.com/apis/gdata/docs/2.0/reference.html#Queries> ala: http://www.example.com/feeds/jo?q=Darcy&updated-min=2005-04-19T15:30:00Z Of course you could pick up this timestamp from the Last-Modified header, but don't use If-Modified-Since (which returns the entire resource if it's been modified, or nothing). The problem with timestamps is that regardless of the precision there's always the chance that you'll have multiple events at the same "instant" and will end up missing some (which is not great when they're things like payments). Enter ETags which are in fact deterministic. Of course for them to make sense (given they're "random" identifiers rather than sequences - which are notoriously difficult to maintain in scalable cloud architectures) you need to track the order that messages are committed. When a client asks for "messages since C0QBRXcycSp7ImA9WxRVFUk" (perhaps last-etag=xyz in the query string?) you need to be able to replay the intermediary messages and return the ETag of the latest one with the resultset. Also "mean time between updates" is a clever idea but it sounds like an attribute on the topic object itself. Objects in queues are generally immutable so it doesn't make sense to prematurely expire them by overloading the caching directives. Sam On Fri, Jun 26, 2009 at 9:09 PM, Bill Burke <bburke@redhat.com> wrote: > > > I was thinking about implementing a RESTful topic (as opposed to queue). > In Java JMS land a subscriber can make himself persistent with a > topic. This means that the topic keeps a placeholder for the subscriber > so that when the subscriber asks for a message, it gets the next message > after the last one it read. 
The problem with this of course is that the > server must maintain a session for each subscriber. > > So, to flip the problem around, how about a subscriber doing the > book-keeping itself? The topic would remember all of its messages and > the order in which they were published. The subscriber would just tell > the topic the index it wants. > > Which brings me to my thought. What about using conditional gets to > implement this? > > GET /topic would return one posted message: > > 200 > Content-Type: ... > ETag: 34234234 > Last-Modified: /6/26/2009 ... > > <message-body> > > The ETag and/or Last-Modified header would be the index into the topic. > A non-conditional GET would return the latest posted message. A > conditional GET would return a message that was posted *right after* the > ETAG/Last-Modified combo. If there are no new messages in the topic, > the conditional GET would return 304, NOT MODIFIED. > > An interesting side effect of this is that a Cache-Control could > control(suggest) how often the subscribers should poll for new messages. > Like if somebody conditionally gets an older index, the server would > return no-cache as there are newer messages in the topic. A server > could keep an average rate of publishing and use that average time in a > max-age value. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
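Bill's proposal can be sketched as a toy in-memory topic (the message list and ETag values are assumptions for illustration, not any real JMS mapping). Note how it repurposes 304: here it means "no newer message", not "your cached copy is still fresh", which is part of why overloading the caching machinery is contentious:

```python
# Messages in publication order; each carries a server-assigned ETag.
TOPIC = [("e1", "first"), ("e2", "second"), ("e3", "third")]

def conditional_get(topic, if_none_match=None):
    """Return (status, etag, body). A non-conditional GET serves the
    latest message; a conditional GET serves the message posted right
    after the given ETag, or 304 if the client is already at the head."""
    if if_none_match is None:
        etag, body = topic[-1]
        return (200, etag, body)
    for i, (etag, _) in enumerate(topic):
        if etag == if_none_match:
            if i + 1 < len(topic):
                nxt, body = topic[i + 1]
                return (200, nxt, body)
            return (304, etag, None)
    # unknown validator: fall back to the latest message
    etag, body = topic[-1]
    return (200, etag, body)
```

A subscriber just replays the ETag from each response into the next request's If-None-Match, with no envelope or custom header needed.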
Sam Johnston wrote: > Bill, > > This is certainly something that should be tracked on the client side, > though I'd be very wary of trying to overload the existing caching > mechanisms because their behaviour is both well defined and well known. > > The first thing that came to my mind when I saw this was GData queries > <http://code.google.com/apis/gdata/docs/2.0/reference.html#Queries> ala: > > http://www.example.com/feeds/jo?q=Darcy&updated-min=2005-04-19T15:30:00Z > Yeah, using a query parameter would work better. Using ETag/last-modified as the index wouldn't work in a proxy scenario if the client had an older index than the proxy. But you would still need a mechanism to update the client's index. An envelope format or custom header would need to be introduced. This is why I wanted to use this idea in the first place: to avoid defining an envelope or new response header or a Link header NEXT relation. With the idea, the client subscriber can be totally oblivious to what is going on. For example, I could skim through all messages in the topic just by refreshing the URL in my browser. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
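The query-parameter variant amounts to the client carrying its own index in the URL rather than in a validator header. A sketch, using the hypothetical `last-etag` parameter name floated earlier in the thread:

```python
from urllib.parse import urlencode

def next_page(topic_uri, last_etag=None):
    """Build the URI for the next poll; without an index, fetch the
    latest message, otherwise ask for messages after `last_etag`."""
    if last_etag is None:
        return topic_uri
    sep = "&" if "?" in topic_uri else "?"
    return topic_uri + sep + urlencode({"last-etag": last_etag})

print(next_page("http://example.com/topic", "C0QBRXcycSp7ImA9WxRVFUk"))
```

Because each index yields a distinct URI, intermediary caches see distinct resources and the proxy-staleness problem Bill describes goes away, at the cost of needing some way (envelope, header, or link) to hand the client its next index.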
For all you JSON guys, how do you describe services that exchange JSON? There is no schema or object notation for JSON. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
well, there's this idea: http://json-schema.org/ right now, i use out-of-band documentation to let folks know what's allowed in POST/PUT and what is returned in GET. most of the time, JSON requests for services i create are negotiated via the Accept header and that usually means folks can also negotiate some XML flavor of the same representation in order to get an idea of what is avail/expected in a form that may be a bit easier to grok/validate, etc. mca http://amundsen.com/blog/ On Sun, Jun 28, 2009 at 20:02, Bill Burke <bburke@...> wrote: > For all you JSON guys, how do you describe services that exchange JSON? > There is no schema or object notation for JSON. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
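To make the trade-off concrete, here is a toy validator in the spirit of json-schema.org. This is not the actual JSON Schema draft, just a hand-rolled illustration of what a machine-readable description of a JSON representation buys you; the person document mirrors the example from earlier in the thread:

```python
import json

# Toy schema: not the json-schema.org draft syntax, just an illustration.
PERSON_SCHEMA = {
    "type": "object",
    "properties": {
        "firstName": {"type": "string", "required": True},
        "lastName":  {"type": "string", "required": True},
        "account":   {"type": "string", "required": False},
    },
}

def validate(instance, schema):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if schema["type"] == "object" and not isinstance(instance, dict):
        return ["expected an object"]
    for name, rules in schema["properties"].items():
        if name not in instance:
            if rules.get("required"):
                errors.append("missing required property: " + name)
            continue
        if rules["type"] == "string" and not isinstance(instance[name], str):
            errors.append(name + " must be a string")
    return errors

doc = json.loads('{"firstName": "TONINHO", "lastName": "METRALHA"}')
print(validate(doc, PERSON_SCHEMA))  # []
```

Out-of-band documentation works too, as Mike says; a schema simply lets clients and test suites check conformance mechanically.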
There is a schema format for JSON (Mike pointed to a link, although it
is not hard to google for info about it). You can also follow its
development at: http://groups.google.com/group/json-schema.
For RPC services, the service mapping description (SMD) format is
available for describing JSON services (RPC oriented). For REST
services (presumably you are more interested in that if you are
emailing rest-discuss), there has been ongoing discussions of
development of conventions at the restful-json google group[1] (which
is more focused on REST applied to JSON, with a better SNR over there). In
particular, based on the discussions of this thread [2], I proposed an
approach for leveraging JSON schema for describing restful JSON
services (based on some patterns used in a variety of RESTful JSON
systems). I am working on putting together a more comprehensive/formal
proposal for this. Suggestions/feedback welcome.
[1] http://groups.google.com/group/restful-json
[2]
http://groups.google.com/group/restful-json/browse_thread/thread/cf4b0bd444f5fd83
I hope that helps,
Kris
Bill Burke wrote:
>
>
> For all you JSON guys, how do you describe services that exchange JSON?
> There is no schema or object notation for JSON.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com <http://bill.burkecentral.com>
>
>
--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Awesome stuff thanks.
Kris Zyp wrote:
>
> There is a schema format for JSON (Mike pointed to a link, although it
> is not hard to google for info about it). You can also see that
> development of that at: http://groups.google.com/group/json-schema.
>
> For RPC services, the service mapping description (SMD) format is
> available for describing JSON services (RPC oriented). For REST
> services (presumably you are more interested in that if you are
> emailing rest-discuss), there has been ongoing discussions of
> development of conventions at the restful-json google group[1] (which
> is more focused on REST applied to JSON, with a better SNR over there). In
> particular, based on the discussions of this thread [2], I proposed an
> approach for leveraging JSON schema for describing restful JSON
> services (based on some patterns used in a variety of RESTful JSON
> systems). I am working on putting together a more comprehensive/formal
> proposal for this. Suggestions/feedback welcome.
>
> [1] http://groups.google.com/group/restful-json
> [2]
> http://groups.google.com/group/restful-json/browse_thread/thread/cf4b0bd444f5fd83
>
> I hope that helps,
> Kris
>
> Bill Burke wrote:
>>
>> For all you JSON guys, how do you describe services that exchange JSON?
>> There is no schema or object notation for JSON.
>>
>> --
>> Bill Burke
>> JBoss, a division of Red Hat
>> http://bill.burkecentral.com <http://bill.burkecentral.com>
>>
>>
>
> --
> Kris Zyp
> SitePen
> (503) 806-1841
> http://sitepen.com
>
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
The uniformity of DELETE is essentially that it is non-safe and idempotent. From the spec: "The client cannot be guaranteed that the operation has been carried out, even if the status code returned from the origin server indicates that the action has been completed successfully". The spec mentions 202 (Accepted) as an acceptable response, which would result in a situation where a 'successful' (2xx) DELETE will not necessarily result in a subsequent GET returning 404. With your proposed 'uniform' DELETE pattern there is also a real possibility that some other agent somewhere will 'reinstate' the resource in between the time it takes you to receive the DELETE response and reissue the subsequent GET - which, from the client's perspective, results in the exact same behavior as the 'hack' anyway. This is the reason I don't think that interpretation of DELETE gains you anything. - Mike António Mota wrote: > Let me rephrase: > > Having in the same app a situation where a DELETE causes a subsequent > GET to return a 200 and a DELETE causing a subsequent GET returning a > 404 breaks the uniform constraint. Because we are talking about two > different operations here, the first DELETE isn't really deleting a > resource, it's modifying its state, so it's more of a hack. But hacks > happen, I use them lots of times... > > Mike Kelly wrote: >> António Mota wrote: >> >>> Also, if you have in the same application a DELETE that deletes a >>> resource and not the URI and other DELETE that deletes both, then >>> DELETE isn't uniform any more. >>> >> >> I disagree with that, because I don't think it's actually violating >> the DELETE specification. >> >> If you have self-descriptive messages and are leveraging HATEOAS >> there should be no issue mixing both approaches in one system. >> >> >
Mike Kelly wrote: > The uniformity of DELETE is essentially that it is non-safe and > idempotent > > From the spec: "The client cannot be guaranteed that the operation has > been carried out, even if the status code returned from the origin > server indicates that the action has been completed successfully". The > spec mentions 202 (Accepted) as an acceptable response; which would > result in a situation where a 'successful' (2xx) DELETE will not > necessarily result in a subsequent GET returning 404. > > With your proposed 'uniform' DELETE pattern there is also a real > possibility that some other agent somewhere will 'reinstate' the > resource in between the time it takes you to receive the DELETE > response and reissue the subsequent GET - which, from the client's > perspective, results in the exact same behavior as the 'hack' anyway. > This is the reason I don't think that interpretation of DELETE gains > you anything. > > - Mike Bill and Sam were discussing what I was trying to say about the DELETE, maybe I couldn't make myself clear, probably because I'm far from being a good English speaker. Bill Burke wrote: > Sam Johnston wrote: >> Bill, >> >> As a matter of interest, are the individual entries available (e.g. >> at /cache/123 or /cache/data/123)? I don't really grok/like this >> /cache/data idea - /cache is the resource and cache entries should be >> subordinates (which you can retrieve, delete, etc. individually). >> >> Say this "cache" is a queue (for a real world example of a similar >> problem), and you can create as many queues as you want (e.g. >> /queue/123)... there is now a need to be able to permanently delete >> the queue such that attempting to submit messages fails. If we've >> already used DELETE for flushing then we're in trouble. Plus, if >> there's parameters (e.g. capacity) then recreation after a DELETE >> should reset those. 
In that case "curl -X POST >> http://example.com/queue/123/_flush >> <http://example.com/queue/123/_flush>" seems appropriate to me. >> >> <http://wiki.apache.org/couchdb/Compaction>: >> http://example.com/my_db/_compact <http://example.com/my_db/_compact>. > > I see what you're saying. There's a possibility of having the need to > overload DELETE, so don't use DELETE. >
It seems to me that there is a "language" factor surrounding the different
combinations of operations and URIs. A lot of what was submitted is
functionally correct. Taking a lot of what I have read from the previous
contributions, I wonder if this representation meets both functional
correctness AND intuitiveness ...
Think of a /cache as a collection of caches that cannot be removed (deleted)
then
/cache/1 could represent a cache
and
/cache/1/entry/1 could represent a cache entry
Then the following could be used to model the cache management ...
Entry level :
DELETE /cache/1/entry/1 -> removes a cache entry
POST /cache/1/entry/1 -> creates a cache entry.
PUT /cache/1/entry/1 -> updates a cache entry
Cache level
DELETE /cache/1 -> blow away the whole cache
POST /cache/1 -> create a new cache.
PUT /cache/1 -> replace the cache with a new cache
Cache Collection level
DELETE /cache -> is FORBIDDEN. Cannot destroy the collection of caches.
I think this is both functionally correct (from a REST standpoint) as well
as intuitive (from a Developer standpoint).
Thoughts?
On Fri, Jun 26, 2009 at 7:21 AM, Mike Kelly <mike@...> wrote:
>
>
> Agreed, I can see how a transition to a flush state could be modeled as
> idempotent;
>
> PUT /cache { .... 'flushed': 'true' ...... }
>
> Having said that, if I was to model that transition as idempotent I'd
> probably prefer DELETE /cache because it's a more descriptive message
> and cleaner to implement on the server side... but my understanding was
> that DELETE didn't necessarily require the URI to be removed?
>
> Also - I think a good argument can be made for treating it as
> non-idempotent and creating separate flush resources for each request
>
> e.g. removing specific entities:
>
> POST /flushes { "targets": [ "/cache/A324234FE87", "/cache/D546F092123",
> ... ] } => 202 Accepted ; Location: /flushes/4123
>
> ..or clearing the whole cache:
>
> POST /flushes { "targets": [ "/cache" ] } => 202 Accepted ; Location:
> /flushes/4124
>
> On reflection, I think I prefer the POST solution
>
> - Mike
>
>
> Ebenezer Ikonne wrote:
> >> It should be, because PUT is idempotent, while POST is not.
> >>
> >
> > What am I missing here? PUT (should) guarantee idempotency but does that
> mean a POST cannot be idempotent? Additionally, whether "other" resources are
> modified as a result of PUT/POST is not what qualifies the usage of
> PUT/POST. It's what happens to the resource directly being manipulated that is
> of interest here.
> >
> > If using PUT, I would GET the representation of the cache (because it had
> to have existed) and return an empty representation. I personally don't
> get the "flusher" resource, but that's probably not important.
> >
> > Many ways to address this however.
> >
> > Eb
> >
> >
> >
> >
> >
> >
> >
>
>
>
--
Bediako George
Partner - Lucid Technics, LLC
Think Clearly, Think Lucid
(p) 202.683.7486 (f) 703.563.6279
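Bediako's table maps naturally onto a routing layer. A sketch (a hypothetical dispatcher, not any real framework; it also applies Stefan's follow-up correction that creation via POST belongs on the collection, with PUT used to create or replace at a known URI):

```python
import re

# (method, URI pattern) -> cache-management action, following Bediako's
# layout. POST-to-collection / PUT-to-known-URI is per Stefan's correction.
ROUTES = [
    ("DELETE", r"^/cache/(\d+)/entry/(\d+)$", "remove a cache entry"),
    ("PUT",    r"^/cache/(\d+)/entry/(\d+)$", "create or update a cache entry"),
    ("DELETE", r"^/cache/(\d+)$",             "blow away the whole cache"),
    ("PUT",    r"^/cache/(\d+)$",             "replace the cache with a new cache"),
    ("POST",   r"^/cache$",                   "create a new cache (server picks the URI)"),
    ("DELETE", r"^/cache$",                   None),  # forbidden: can't destroy the collection
]

def dispatch(method, path):
    """Return (status, action); 405 for the forbidden collection DELETE."""
    for m, pattern, action in ROUTES:
        if m == method and re.match(pattern, path):
            return (405, None) if action is None else (200, action)
    return (404, None)
```

Refusing the collection-level DELETE with 405 Method Not Allowed (403 Forbidden would also be defensible) keeps the "cannot destroy the collection of caches" rule explicit in the interface.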
I agree with Sam in that the purpose of DELETE depends on the resource in question. But as I understand - it doesn't actually matter whether all your resources use one implementation or the other, or a mixture of both, if the application is leveraging HATEOAS + self-descriptive messages. For this example I think DELETE /cache is acceptable as a RESTful and intuitive solution. Not the only solution, though. - Mike António Mota wrote: > Mike Kelly wrote: > >> The uniformity of DELETE is essentially that it is non-safe and >> idempotent >> >> From the spec: "The client cannot be guaranteed that the operation has >> been carried out, even if the status code returned from the origin >> server indicates that the action has been completed successfully". The >> spec mentions 202 (Accepted) as an acceptable response; which would >> result in a situation where a 'successful' (2xx) DELETE will not >> necessarily result in a subsequent GET returning 404. >> >> With your proposed 'uniform' DELETE pattern there is also a real >> possibility that some other agent somewhere will 'reinstate' the >> resource in between the time it takes you to receive the DELETE >> response and reissue the subsequent GET - which, from the client's >> perspective, results in the exact same behavior as the 'hack' anyway. >> This is the reason I don't think that interpretation of DELETE gains >> you anything. >> >> - Mike >> > > Bill and Sam were discussing what I was trying to say about the DELETE, > maybe I couldn't make myself clear, probably because I'm far from being a > good English speaker. > > Bill Burke wrote: > >> Sam Johnston wrote: >> >>> Bill, >>> >>> As a matter of interest, are the individual entries available (e.g. >>> at /cache/123 or /cache/data/123)? I don't really grok/like this >>> /cache/data idea - /cache is the resource and cache entries should be >>> subordinates (which you can retrieve, delete, etc. individually). 
>>> >>> Say this "cache" is a queue (for a real world example of a similar >>> problem), and you can create as many queues as you want (e.g. >>> /queue/123)... there is now a need to be able to permanently delete >>> the queue such that attempting to submit messages fails. If we've >>> already used DELETE for flushing then we're in trouble. Plus, if >>> there's parameters (e.g. capacity) then recreation after a DELETE >>> should reset those. In that case "curl -X POST >>> http://example.com/queue/123/_flush >>> <http://example.com/queue/123/_flush>" seems appropriate to me. >>> >>> <http://wiki.apache.org/couchdb/Compaction>: >>> http://example.com/my_db/_compact <http://example.com/my_db/_compact>. >>> >> I see what you're saying. There's a possibility of having the need to >> overload DELETE, so don't use DELETE. >> >>
On Jun 29, 2009, at 3:31 PM, Bediako George wrote: > POST /cache/1 -> create a new cache. This one would either have to be POST to /cache, returning Location: / cache/1, or a PUT to /cache/1. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Mike Kelly wrote: > I agree with Sam in that the purpose of DELETE depends on the resource > in question. > > But as I understand - it doesn't actually matter whether all your > resources use one implementation or the other, or a mixture of both, > if the application is leveraging HATEOAS + self-descriptive messages. > > For this example I think DELETE /cache is acceptable as a RESTful and > intuitive solution. Not the only solution, though. > > - Mike > Well, I think my English is worse than I thought. I can't read from what I quoted from Sam that "DELETE depends on the resource in question". I thought that the situation here was: If, to empty a cache, you use DELETE /cache what do you use if you want to nuke, destroy, remove, kill that same cache?
Mike Kelly wrote:
> I think a good argument can be made for treating it as
> non-idempotent and creating separate flush resources for each request
>
> e.g. removing specific entities:
>
> POST /flushes { "targets": [ "/cache/A324234FE87", "/cache/D546F092123",
> ... ] } => 202 Accepted ; Location: /flushes/4123
>
> ...or clearing the whole cache:
>
> POST /flushes { "targets": [ "/cache" ] } => 202 Accepted ; Location:
> /flushes/4124
>
>
... the server can then take these flush jobs and perform a DELETE
request to each of the URIs provided.
Using DELETE to flush the cache means that the /cache URI doesn't
require different treatment than a /cache/{entry}.
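Mike's flush-job pattern can be sketched in a few lines. This is a minimal illustration, not any implementation discussed in the thread: the in-memory stores, the starting job id, and the cache contents are all assumptions made up for the example.

```python
# Sketch of the flush-job pattern: POSTing a list of target URIs creates
# a new "flush" resource (202 Accepted + Location), and the server later
# works through the job, issuing the equivalent of a DELETE against each
# target. The dicts and the /flushes/{id} layout are illustrative only.
import itertools

_flush_ids = itertools.count(4123)   # arbitrary starting id, echoing the example
flush_jobs = {}                      # job URI -> list of target URIs
cache = {"/cache/A324234FE87": "v1", "/cache/D546F092123": "v2"}

def post_flushes(targets):
    """POST /flushes -- create a flush job; returns (status, Location)."""
    job_uri = "/flushes/%d" % next(_flush_ids)
    flush_jobs[job_uri] = list(targets)
    return 202, job_uri

def run_flush_job(job_uri):
    """Worker: perform a DELETE-equivalent against each target in the job."""
    for target in flush_jobs[job_uri]:
        if target == "/cache":           # whole-cache flush
            cache.clear()
        else:
            cache.pop(target, None)      # individual entry

status, location = post_flushes(["/cache/A324234FE87"])
run_flush_job(location)
```

Note how, as Mike says, /cache itself needs no special treatment here: it is just one more target URI a flush job can name.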
2009/6/29 António Mota <amsmota@...> > Mike Kelly wrote: > > I agree with Sam in that the purpose of DELETE depends on the resource > > in question. > > > > But as I understand - it doesn't actually matter whether all your > > resources use one implementation or the other, or a mixture of both, > > if the application is leveraging HATEOAS + self-descriptive messages. > > > > For this example I think DELETE /cache is acceptable as a RESTful and > > intuitive solution. Not the only solution, though. > > > > - Mike > > > Well, I think my English is worse than I thought. I can't read from what > I quoted from Sam that "DELETE depends on the resource in question". > > I thought that the situation here was: If, to empty a cache, you use > > DELETE /cache > > what do you use if you want to nuke, destroy, remove, kill that same cache? I think there needs to be some common sense exercised here - if you are sure your application will only ever have one cache at that location, and that cache is permanent (in that it "springs back" when deleted or when a new entry comes along) then the simplest option is to use DELETE for both individual entries and flushing the cache itself. If, however, your system is more complicated - for example, implementing multiple queues - then you will want to reserve DELETE for actually destroying the resource and devise some other mechanism for flushing it. Sam
António Mota wrote: > Mike Kelly wrote: > >> I agree with Sam in that the purpose of DELETE depends on the resource >> in question. >> >> But as I understand - it doesn't actually matter whether all your >> resources use one implementation or the other, or a mixture of both, >> if the application is leveraging HATEOAS + self-descriptive messages. >> >> For this example I think DELETE /cache is acceptable as a RESTful and >> intuitive solution. Not the only solution, though. >> >> - Mike >> >> > Well, I think my English is worse than I thought. I can't read from what > I quoted from Sam that "DELETE depends on the resource in question". > > I thought that the situation here was: If, to empty a cache, you use > > DELETE /cache > > what do you use if you want to nuke, destroy, remove, kill that same cache? > > There's always "shutdown -h now" :-D It's a trade-off - my point was that it is a RESTful solution, not 'the best' in every context though.
Interesting. Isn't the client allowed to stipulate what the identifier of a new resource should be? If this is true then POST /cache/1 should be allowed, with the appropriate error returned if it already exists. To your point, it could also be modeled with the server creating the identifier, which would add the expectation that the resource identifier is returned in the response. Regards, Bediako On Mon, Jun 29, 2009 at 10:01 AM, Stefan Tilkov <stefan.tilkov@...> wrote: > > > On Jun 29, 2009, at 3:31 PM, Bediako George wrote: > > POST /cache/1 -> create a new cache. > > > This one would either have to be POST to /cache, returning Location: > /cache/1, or a PUT to /cache/1. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid (p) 202.683.7486 (f) 703.563.6279
On Mon, Jun 29, 2009 at 4:47 PM, Bediako George <bediakogeorge@...> wrote: > Interesting. Isn't the client allowed to stipulate what the identifier of > a new resource should be? If this is true then POST /cache/1 should be > allowed, with the appropriate error returned if it already exists. > > To your point, it could also be modeled with the server creating the > identifier, which would add the expectation that the resource identifier is > returned in the response. > Generally the server sets the identifier (URL) for POSTs and the client sets it for PUTs - it took me a while to work this out at the start but now it's clear and I wouldn't have it any other way. It makes sense when you consider that POSTs typically add subordinate resources (like comments on a blog entry) in which case you don't know what your identifier will be. You can also use hints like the Slug: header <http://tools.ietf.org/html/rfc5023#section-9.7> or in-band data if you want some amount of control over this, but otherwise just use PUT. Sam
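The convention Sam describes can be sketched with a toy dict-backed resource store. This is a minimal illustration of the two identifier-assignment styles, not anyone's actual server; the URIs and the store are assumptions made up for the example.

```python
# Sketch: POST to a collection lets the server mint the identifier
# (returned via a Location header), while PUT targets a client-chosen
# URI directly and is an idempotent create-or-replace.
import itertools

_next_id = itertools.count(1)
store = {}  # URI -> representation

def post(collection_uri, body):
    """Server assigns the identifier; 201 Created + Location."""
    uri = "%s/%d" % (collection_uri.rstrip("/"), next(_next_id))
    store[uri] = body
    return 201, uri

def put(uri, body):
    """Client chooses the identifier; repeating the request is harmless."""
    created = uri not in store
    store[uri] = body
    return (201 if created else 200), uri

post("/cache", {"size": 100})        # server picks /cache/1
put("/cache/mine", {"size": 50})     # client picked the URI itself
```

Repeating the PUT simply replaces the same resource (and answers 200 instead of 201), which is why Bill later argues idempotent methods spare the client from worrying about duplicate messages.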
If by "Generally" you mean for a lot of web applications, I wholeheartedly agree with you. As a matter of fact this is the default implementation of my company's open source project Hannibal ( http://code.google.com/p/hannibalcodegenerator/). Having said that, I do believe that having the client stipulate the resource identifier is a completely valid use case. Your comments example is a good one, but you might imagine that the blog entry URI for which those comments were posted was created using the blog title, which is almost invariably client stipulated. In our open source SVNServices project ( http://code.google.com/p/svnservices/), which relies on the aforementioned Hannibal, the web service client uses the SVN project name as the repository identifier. The server just allows the POST to occur iff the SVN resource for that project does not exist. So again, what you described is valid; I just think that having the client stipulate the resource URI is also valid. Regards, Bediako On Mon, Jun 29, 2009 at 10:56 AM, Sam Johnston <samj@...> wrote: > Generally the server sets the identifier (URL) for POSTs and the client > sets it for PUTs - it took me a while to work this out at the start but now > it's clear and I wouldn't have it any other way. > > Sam -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid (p) 202.683.7486 (f) 703.563.6279
> Having said that, I do believe that having the client stipulate the > > resource identifier is a completely valid use case. When using PUT. I would expect the behavior of POST when the URI exists to be completely different. (Mistakenly sent directly to George.) Eb
POST isn't as good in this scenario for creation anyway, as writing to the entry is idempotent. IMO, use idempotent methods if you can; then the user doesn't have to worry about duplicate messages. Bediako George wrote: > Interesting. Isn't the client allowed to stipulate what the identifier > of a new resource should be? If this is true then POST /cache/1 > should be allowed, with the appropriate error returned if it already exists. > > To your point, it could also be modeled with the server creating the > identifier, which would add the expectation that the resource identifier > is returned in the response. > > Regards, > > Bediako -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill: I find POST very handy in cases like this when, in addition to handling the request for work, I also want an audit log of the request along with the final result of the request. In those cases, I define a representation that contains the details of the state request that the client can send to the server. When the work is done, the results are stored as a unique resource in an archive that can be used for searching, reporting, etc. mca http://amundsen.com/blog/ On Mon, Jun 29, 2009 at 13:01, Bill Burke <bburke@...> wrote: > POST isn't as good in this scenario anyways for creation as writing to > the entry is idempotent anyways. IMO, use idempotent methods if you > can, then the user doesn't have to worry about duplicate messages.
On Mon, Jun 29, 2009 at 7:16 PM, mike amundsen <mamund@...> wrote: > I find POST is very handy in cases like this when, in addition to handling > the request for work, I also want an audit log of the request along with the > final result of the request. In those cases, I define a representation that > contains the details of the state request that the client can send to the > server. When the work is done, the results are stored as a unique resource > in an archive that can be used for searching, reporting, etc. Well PUT shouldn't really be doing any (well, much) work anyway so there's no need for logs and so on. Returning multiple resources (e.g. the byproduct(s), logs, etc.) when there's only one Location: header is an interesting problem, but one that could likely be solved with HTTP Link: headers. Sam
Sam: I think I didn't explain myself properly. In cases like this (clearing a cache, queue, work-list, etc.), when I would like the server to not only clear the list, but also create an auditable record of the action (something beyond standard HTTP server logs), I tend to use POST, as this results in the work being performed (clearing the list) and the creation of a new resource (the auditable record). POST also helps when I expect the client to be able to send some state information that would affect the task at hand (filter for items to remove from the list, etc.). The POST returns 201 Created w/ a Location header as expected. To go a bit further along this line, in cases where I don't need an auditable record, but still want to support sending state, I usually use a PUT against a known resource. This may result in a GET-able resource that shows details on the last action committed, too. Finally, when I don't need a record of the action and I don't expect the client to send any state along at all, DELETE against the worklist resource works for me. mca http://amundsen.com/blog/ On Mon, Jun 29, 2009 at 14:52, Sam Johnston <samj@...> wrote: > Well PUT shouldn't really be doing any (well, much) work anyway so there's > no need for logs and so on. > > Returning multiple resources (e.g. the byproduct(s), logs, etc.) when > there's only one Location: header is an interesting problem, but one that > could likely be solved with HTTP Link: headers. > > Sam
>>> LINK suffers from a problem - it magically pops into existence as a >>> header, but without a means to manage the implied relationship. >> >> I'm not understanding what you mean by this. > > It's like atom:id - you must have one in the format, but how to create > one is undefined. Atom's format only considered the read/syndication > use case. That was awkward when it came to specifying AtomPub. LINK is > similar - how a LINK relationship is created/managed/destroyed is > undefined. Why isn't that up to the server(s) managing the resources? Links are for servers to describe relations between resources, and not for clients to manage such relationships. Subbu
Hi Subbu, On Tue, Jun 30, 2009 at 5:45 AM, Subbu Allamaraju <subbu@...> wrote: > > LINK is similar - how a LINK relationship is created/managed/destroyed is > undefined. > > Why isn't that up to the server(s) managing the resources? Links are > for servers to describe relations between resources, and not for > clients to manage such relationships. Why so? This use case requires that clients be able to manage links: virtual infrastructure is modeled as compute, storage and network resources, and clients create, delete and link them as they see fit. The server can too (for example, implicitly creating a storage resource and linking it when you create a compute resource), but the point of OCCI <http://www.occi-wg.org/> is to allow for client manipulation. We're not the only ones who see a need either... the original authors of the HTTP spec (RFC 2068) included LINK and UNLINK verbs <http://tools.ietf.org/html/rfc2068#section-19.6.1.2> for this, around the same time as this I-D <http://ftp.ics.uci.edu/pub/ietf/http/draft-pritchard-http-links-00.txt> specifying the same in more detail. This is what Mark Nottingham (author of the Link: header I-D among other things, copied) had to say this morning on apps-discuss: *- First and foremost, in the absence of the LINK and UNLINK verbs originally defined in RFC 2068[2] but specifically omitted from RFC 2616[3], what is the preferred mechanism for manipulating these links via HTTP? It appears that this header is intended for GET requests only, but presumably specifying it in POST and PUT requests would be one option that avoids the creation of [not so] "new" verbs (bearing in mind that short of accepting Link: headers from empty POST/PUT requests, it would be necessary to GET and then PUT the entire payload to update links - twice if they were reciprocal). While there was an attempt a dozen years ago to better define the relevant HTTP verbs[4], it strikes me as more sensible to follow the example of the Set-Cookie: header for this rather than WebDAV's example of creating new verbs (even if we've seen them before) but you guys are the experts.* Undefined, but I imagine in a PUT/POST body does indeed make the most sense. Using the Link header in a request doesn't have well-defined semantics. I wonder then whether it's not sensible to define these semantics in an[other] Internet Draft (à la Set-Cookie) rather than having everyone running off and inventing their own in-band solutions... doing so would make for some really clever RESTful interfaces. Sam
Sam, I don't disagree that there are use cases, but I am not sure if letting clients manage relations is the right way to implement distributed systems. The approach you describe below is similar to a client trying to set up foreign key relations between different database entities. This model leaks abstractions and is not ideal for writing large systems. For instance, take a simple shopping cart application. The server may have decided to use links to associate products to a cart, but that does not mean that clients should be able to create/edit/delete those links. Instead, links come into being when the client "adds products to a cart" and they go away when the client "removes a product from the cart". That is the right level of abstraction for the client. IMO, links are for servers to provide navigability between resources, and to let clients make state transitions via links. Subbu On Jun 30, 2009, at 3:44 AM, Sam Johnston wrote: > Why so? This use case requires that clients be able to manage links: > virtual infrastructure is modeled as compute, storage and network resources > and clients create, delete and link them as they see fit. The server can > too (for example, implicitly creating a storage resource and linking it > when you create a compute resource) but the point of OCCI > <http://www.occi-wg.org/> is to allow for client manipulation. > > We're not the only ones who see a need either... > > Sam
Hi Subbu- (sorry for top-responding; FF 3.5 is having trouble w/ gmail format). Adding products to a cart is an excellent example. How, though, does one effect that modification by way of an API? I can think of: 1. post the product to the cart "collection" 2. add a link to a product pointing to the cart 3. add a link to a cart resource pointing to the product 4. create a new resource (presumably by POSTing to a known endpoint) that is essentially a "cart-product instance" that has a link to each. I am not sure that abstraction solves the problem in actual implementation. Our atom/atompub based system uses a local atom extension that adds an "edit-href" attribute to link elements that can be modified or deleted. I do think, absent a standard PATCH, it would be nice to have a standard way to manipulate just specific link elements in Atom documents. (I realize the case being discussed here is broader than Atom, and includes link headers.) --peter keane On Tue, Jun 30, 2009 at 11:59 AM, Subbu Allamaraju <subbu@...> wrote: > For instance, take a simple shopping cart application. The server may > have decided to use links to associate products to a cart, but that > does not mean that clients should be able to create/edit/delete those > links. Instead, links come into being when the client "adds products > to a cart" and they go away when the client "removes a product from > the cart". That is the right level of abstraction for the client. > > IMO, links are for servers to provide navigability between resources, > and to let clients make state transitions via links. > > Subbu
Hi Peter, > 1. post the product to the cart "collection" > 2. add a link to a product pointing to the cart > 3. add a link to a cart resource pointing to the product > 4. create a new resource (presumably by POSTing to a known endpoint) that > is essentially a "cart-product instance" that has a link to each This still leaks many server-side details to the client. Here is an alternative. 1. The server has a cart resource, and product resources. 2. Each product resource found in a search will have a link <link rel="http://shop.org/rels/buy" href="http://shop.org/subbu/cart"/> The definition of rel says that the client should use POST to add the product to the cart. 3. Client adds the product to the cart POST /subbu/cart Content-Type: application/xml id=1234 4. Server redirects back to the updated cart 303 See Other Location: http://shop.org/subbu/cart This is just a generalized version of a web based shopping cart, and provides a simplified interface to the client. As I said before, expecting the client to manage links is akin to clients posting SQL statements to servers. Subbu
Please read the POST as > POST /subbu/cart > Content-Type: application/x-www-form-urlencoded > id=1234 Subbu On Jun 30, 2009, at 12:41 PM, Subbu Allamaraju wrote: > Hi Peter, > >> 1. post the product to the cart "collection" >> 2. add a link to a product pointing to the cart >> 3. add a link to a cart resource pointing to the product >> 4. create a new resource (presumably by POSTing to a known >> endpoint) that >> is essentially a "cart-product instance" that has a link to each > > This still leaks many server-side details to the client. Here is an > alternative. > > 1. The server has a cart resource, and product resources. > > 2. Each product resource found in a search will have a link > > <link rel="http://shop.org/rels/buy" href="http://shop.org/subbu/ > cart"/> > > The definition of rel says that the client should use POST to add > the product to the cart. > > 3. Client adds the product to the cart > > POST /subbu/cart > Content-Type: application/xml > > id=1234 > > 4. Server redirects back to the updated cart > > 303 See Other > Location: http://shop.org/subbu/cart > > This is just generalized version of a web based shopping cart, and > provides a simplified interface to the client. As I said before, > expecting the client to manage links is akin to clients posting SQL > statements to servers. > > Subbu > > > >> On Tue, Jun 30, 2009 at 11:59 AM, Subbu Allamaraju >> <subbu@...> wrote: >> >>> >>> >>> Sam, >>> >>> I don't disagree that there are use cases, but I am not sure if >>> letting clients manage relations is the right way to implement >>> distributed systems. The approach you describe below is similar to a >>> client trying to setup foreign key relations between different >>> database entities. This model leaks abstractions and is not ideal >>> for >>> writing large systems. >>> >>> For instance, take a simple shopping cart application. 
>>> The server may have decided to use links to associate products to a
>>> cart, but that does not mean that clients should be able to
>>> create/edit/delete those links. Instead, links come into being when the
>>> client "adds products to a cart" and they go away when the client
>>> "removes a product from the cart". That is the right level of
>>> abstraction for the client.
>>>
>>> IMO, links are for servers to provide navigability between resources,
>>> and to let clients make state transitions via links.
>>>
>>> Subbu
>>>
>>> On Jun 30, 2009, at 3:44 AM, Sam Johnston wrote:
>>>
>>>> Hi Subbu,
>>>>
>>>> On Tue, Jun 30, 2009 at 5:45 AM, Subbu Allamaraju <subbu@...> wrote:
>>>>
>>>>>> LINK is similar - how a LINK relationship is
>>>>>> created/managed/destroyed is undefined.
>>>>>
>>>>> Why isn't that up to the server(s) managing the resources? Links are
>>>>> for servers to describe relations between resources, and not for
>>>>> clients to manage such relationships.
>>>>
>>>> Why so? This use case requires that clients be able to manage links:
>>>> virtual infrastructure is modeled as compute, storage and network
>>>> resources and clients create, delete and link them as they see fit.
>>>> The server can too (for example, implicitly creating a storage
>>>> resource and linking it when you create a compute resource) but the
>>>> point of OCCI <http://www.occi-wg.org/> is to allow for client
>>>> manipulation.
>>>>
>>>> We're not the only ones who see a need either... the original authors
>>>> of the HTTP spec (RFC 2068) included LINK and UNLINK verbs
>>>> <http://tools.ietf.org/html/rfc2068#section-19.6.1.2> for this, around
>>>> the same time as this I-D
>>>> <http://ftp.ics.uci.edu/pub/ietf/http/draft-pritchard-http-links-00.txt>
>>>> specifying the same in more detail.
>>>> This is what Mark Nottingham (author of the Link: header I-D among
>>>> other things, copied) had to say this morning on apps-discuss:
>>>>
>>>> *- First and foremost, in the absence of the LINK and UNLINK verbs
>>>> originally defined in RFC 2068[2] but specifically omitted from RFC
>>>> 2616[3], what is the preferred mechanism for manipulating these links
>>>> via HTTP? It appears that this header is intended for GET requests
>>>> only, but presumably specifying it in POST and PUT requests would be
>>>> one option that avoids the creation of [not so] "new" verbs (bearing
>>>> in mind that short of accepting Link: headers from empty POST/PUT
>>>> requests, it would be necessary to GET and then PUT the entire payload
>>>> to update links - twice if they were reciprocal). While there was an
>>>> attempt a dozen years ago to better define the relevant HTTP verbs[4],
>>>> it strikes me as more sensible to follow the example of the
>>>> Set-Cookie: header for this rather than WebDAV's example of creating
>>>> new verbs (even if we've seen them before) but you guys are the
>>>> experts.*
>>>>
>>>> Undefined, but I imagine in a PUT/POST body does indeed make the most
>>>> sense. Using the Link header in a request doesn't have well-defined
>>>> semantics.
>>>>
>>>> I wonder then whether it's not sensible to define these semantics in
>>>> an[other] Internet Draft (ala Set-Cookie) rather than having everyone
>>>> running off and inventing their own in-band solutions... doing so
>>>> would make for some really clever RESTful interfaces.
>>>>
>>>> Sam
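[Editor's note: the cart flow Subbu describes can be sketched client-side. The point of the rel="http://shop.org/rels/buy" link is that the client discovers the POST target from the representation instead of hard-coding the cart URI. A minimal sketch in Python, following the XML shape from Subbu's example (the product document below is illustrative, not a real shop API):]

```python
import xml.etree.ElementTree as ET

def find_link(representation, rel):
    """Return the href of the first <link> whose rel matches, or None."""
    root = ET.fromstring(representation)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

product = """
<product>
  <id>1234</id>
  <link rel="http://shop.org/rels/buy" href="http://shop.org/subbu/cart"/>
</product>
"""

# The rel's definition tells the client to POST the product id here;
# the client never needs to know how carts are laid out on the server.
cart_uri = find_link(product, "http://shop.org/rels/buy")
print(cart_uri)  # http://shop.org/subbu/cart
```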
Subbu,

This is a fairly large deviation from HTTP as the "universal interface" and the details would need to be specified for each implementation. HTTP was designed to create a web of opaque resources; only the linking requirement was (until now) well satisfied by another standard developed by another SSO (that is, HTML). The clients specify the links today so it makes sense that they continue to be able to create the links tomorrow, does it not? If the server doesn't like the proposed link it doesn't have to accept it, and it can always specify links of its own (which is the way it works with hypertext today - consider "manual" links in blog comments vs "automatic" links to stylesheets, feeds, etc.)

Consider some of the things I need to be able to do:

- Mount a storage resource on a compute resource
- Connect a compute resource to a network (or a network to a network etc.)
- Associate arbitrary resources which may be hosted elsewhere (for
  example, PDF build documentation for a server)

Why would I want to create what is essentially an RPC-style interface (e.g. "mount", "attach", "associate", etc.) for this functionality? Granted, if that's what I wanted to do then the method you propose below is clean (except that the ID should perhaps be the URL), but is there not another way?

Sam

On Tue, Jun 30, 2009 at 9:57 PM, Subbu Allamaraju <subbu@...> wrote:

> Please read the POST as
>
> POST /subbu/cart
> Content-Type: application/x-www-form-urlencoded
>
> id=1234
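[Editor's note: the Set-Cookie-style alternative Sam and Mark discuss - declaring links on a PUT/POST rather than minting LINK/UNLINK verbs - would mean serializing links in the Link: header syntax on the request side. The semantics of a request-side Link header are, as Mark says, undefined; the sketch below only shows the serialization, using the syntax from the Link: header I-D:]

```python
def format_link(target, rel, **params):
    """Serialize one link in Link: header syntax, e.g.
    <http://example.com/x>; rel="created-by"; title="parent".
    Extension parameters are sorted for a deterministic result."""
    parts = ['<%s>' % target, 'rel="%s"' % rel]
    parts += ['%s="%s"' % (k, v) for k, v in sorted(params.items())]
    return "; ".join(parts)

# A client updating a resource could then declare a link in-band:
#   PUT /building/123 HTTP/1.1
#   Link: <http://example.com/architect/123>; rel="created-by"
header = format_link("http://example.com/architect/123", "created-by")
print("Link: " + header)
```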
Hi Subbu-

> 1. The server has a cart resource, and product resources.
>
> 2. Each product resource found in a search will have a link
>
> <link rel="http://shop.org/rels/buy" href="http://shop.org/subbu/cart"/>
>
> The definition of rel says that the client should use POST to add the
> product to the cart.
>
> 3. Client adds the product to the cart
>
> POST /subbu/cart
> Content-Type: application/xml
>
> id=1234
>
> 4. Server redirects back to the updated cart
>
> 303 See Other
> Location: http://shop.org/subbu/cart

This looks just right for a one-many link (as in my #1 option -- post to a cart "collection" resource) in which the "one" is obvious and easily discoverable (like a cart). But a common case I run into is one in which a resource must be linked to any of a large number of possible related resources. (In my case it is a digital image library, in which we regularly need to create links between items, e.g., a link from a photo of an architect to an image of a building that she created.) We have needed to devise as general an operation as possible to *relate* two resources. In the case of a cart and a product, it is obvious what the relationship will be once it is created. We need to create links between resources that may have any of a large number of relations (e.g., "created-by"). I wish to stay in the realm of Atom (avoiding the complexities of RDF). I am reminded somewhat here of the work going on in activity streams.... Anyway, I agree with Sam that it is currently an unsolved problem with wide applicability.

--peter
Peter,

On Tue, Jun 30, 2009 at 10:28 PM, Peter Keane <pkeane@...> wrote:

> We need to create links between resources that may have any of a large
> number of relations (e.g., "created-by").

This is a great use case - thanks a lot.

> I wish to stay in the realm of Atom (avoiding the complexities of RDF).
> I am reminded somewhat here of the work going on in activity streams....
> Anyway, I agree with Sam that it is currently an unsolved problem with
> wide applicability.

I'm going one step further in eliminating Atom for individual resources. That would allow you simply to PUT a new photo and set a Link: header in a single, atomic action. Note that as it's raw HTTP there's no encoding necessary, so fewer cycles and less bandwidth needlessly burnt, plus less room for error and *significantly* less complexity in the clients:

curl -T building.jpg -H 'Link: <http://example.com/architect/123>; rel="created-by"' http://example.com/building/123

Note that link relations are extensible in that you can specify attributes like "quantity=2" (for the shopping cart), "role=surveyor" (for the buildings) or "interface=eth0" (for cloud infrastructure). The devil's in the detail though - mostly around partial updates and deletion of links (that is, there should be a way to "delete" a link - perhaps another header like Delete-Link: or an attribute like "expire=now"). Set-Cookie: works because it has expiry, but this doesn't make much sense for a link.

Sam
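[Editor's note: a server accepting such a header would pull it apart into the target URI plus its parameters, extension attributes included. A rough sketch - the parsing is deliberately naive (no handling of semicolons inside quoted strings), and the "role" attribute is just Sam's example, not anything standardized:]

```python
def parse_link(value):
    """Parse '<uri>; rel="x"; role="y"' into (uri, params dict).
    Naive: assumes no embedded semicolons in parameter values."""
    segments = [s.strip() for s in value.split(";")]
    target = segments[0].strip("<>")
    params = {}
    for seg in segments[1:]:
        key, _, val = seg.partition("=")
        params[key.strip()] = val.strip().strip('"')
    return target, params

target, params = parse_link(
    '<http://example.com/architect/123>; rel="created-by"; role="surveyor"')
print(target)         # http://example.com/architect/123
print(params["rel"])  # created-by
print(params["role"]) # surveyor
```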
Could you explain why this is a "large deviation from HTTP"? Perhaps some reference to an RFC or spec would help. I am also not sure what you mean by "HTTP was designed to create a web of opaque resources". Opaque to the protocol operations, or the client, or the server?

Even in HTML, clients don't specify links. They just follow them.

> Consider some of the things I need to be able to do:
>
> - Mount a storage resource on a compute resource
> - Connect a compute resource to a network (or a network to a network
>   etc.)
> - Associate arbitrary resources which may be hosted elsewhere (for
>   example, PDF build documentation for a server)
>
> Why would I want to create what is essentially an RPC-style interface
> (e.g. "mount", "attach", "associate", etc.) for this functionality?
> Granted, if that's what I wanted to do then the method you propose below
> is clean (except that the ID should perhaps be the URL), but is there
> not another way?

As far as this use case is concerned, with the approach you are suggesting, the client will end up implementing a lot of code that really belongs on the server. For instance, it will have to know what it means to mount a storage resource. I am not an expert in the particular domain of allocating computing/storage devices, but my hunch is that the interface you are describing is not abstract enough for clients.

HTTP is not SQL. It is not a data manipulation API. Whether you call "attach", "mount", etc. RPC or resources, such functionality belongs to the server. In the book we're currently writing (http://www.restful-webservices-cookbook.org/), we explicitly encourage using what we call "sidekick" and "controller" resources to provide a meaningful abstraction to clients. In the absence of such notions, servers will end up providing leaky abstractions to clients, which is certainly not the intent of REST.
Subbu
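[Editor's note: the "controller resource" alternative Subbu advocates keeps the link lifecycle on the server: the client POSTs an intent ("mount this storage") to a controller, and the server materializes the link itself, redirecting back as in the cart flow. A toy in-memory sketch - the /mount controller, the URIs and the "mounted-storage" relation name are all invented for illustration:]

```python
# Server-side state: links are owned by the server, never written directly
# by clients.
links = {}  # compute URI -> list of (rel, target) pairs

def mount_controller(compute_uri, storage_uri):
    """Handle 'POST {compute_uri}/mount' carrying a storage URI.
    The server decides what mounting means and creates the link itself,
    then redirects back to the updated compute resource (as in the 303
    cart example). Returns (status, location)."""
    if not storage_uri.startswith("http://"):
        return 400, None  # server validates; bogus links never exist
    links.setdefault(compute_uri, []).append(("mounted-storage", storage_uri))
    return 303, compute_uri

status, location = mount_controller(
    "http://cloud.example/compute/42", "http://cloud.example/storage/7")
print(status, location)  # 303 http://cloud.example/compute/42
```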
> We have needed to devise as general an operation as possible to *relate*
> two resources. In the case of a cart and a product, it is obvious what
> the relationship will be once it is created. We need to create links
> between resources that may have any of a large number of relations
> (e.g., "created-by"). ... Anyway, I agree with Sam that it is currently
> an unsolved problem with wide applicability.

So, when you let clients establish links between resources, who controls whether a link is valid or bogus?

What is the advantage of the client owning this problem?

Forgetting about HTTP and REST for a bit, would you still take the same approach if you were building this application using a different style? What I am getting at is: is the "link management" problem real, or is it a manifestation of an implementation choice?

Subbu
On Tue, Jun 30, 2009 at 5:17 PM, Subbu Allamaraju <subbu@...> wrote:

> So, when you let clients establish links between resources, who controls
> whether a link is valid or bogus?
>
> What is the advantage of the client owning this problem?
>
> Forgetting about HTTP and REST for a bit, would you still take the same
> approach if you were building this application using a different style?
> What I am getting at is: is the "link management" problem real, or is it
> a manifestation of an implementation choice?

Good questions. Certainly, it'll be the server that controls what's a valid "linkage", but the range of possibilities might be (nearly) infinite. And I agree there are few cases in which the problem should be owned by the client. (I can't help but think about all of the sticky issues around RDF here -- by whose authority is a particular assertion made, given that the power to make an assertion is "granted" to any "client"?)

I can decompose the problem such that the UI might be drag-and-drop, an "add-to-cart" button, etc. More commonly, in my digital library case, I'll have an embedded search feature where I allow the user to do an open search over the collection and select any found items to be "related" to the current item (and probably provide a pull-down selection of relation types). Just under the hood, though, a link is being created between these two resources.

Since the range of possible links is so huge, I'd find it hard to offer a @rel=buy link as in your example. I suppose a reasonable extrapolation of that would be to provide a "linker" global resource that would allow me, say, to POST form-encoded:

id_one=123&id_two=224&relation=created-by

(?). Or perhaps each item has its own

<link rel="http://example.org/linker" href="http://example.org/item/linker/123"/>

that I could POST an id and relationship type to (hmmm -- need to provide a way for the client to know the possible relation types -- possibly an atom:link for each relation type that I can POST to?). Anyway, it gets unwieldy quickly. It's unavoidable (I think) that in a case where you might simply add a foreign key or a bridge table when thinking relationally, you need to do lots of contortions to do it RESTfully.

When all is said and done, link headers seem reasonably elegant. I agree with you, though, that they potentially give the client too much (unconstrained, or at least not constrained in a discoverable way) ownership of the problem.

--peter
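[Editor's note: Peter's hypothetical "linker" resource would just decode that form body and record a typed edge, with the server vetting the relation type - one answer to Subbu's "who controls whether a link is valid or bogus?" A sketch; the field names follow Peter's example, and the relation vocabulary is invented for illustration:]

```python
from urllib.parse import parse_qs

# Server-controlled vocabulary: the linker rejects relation types it
# doesn't recognize, so clients can't assert arbitrary links.
KNOWN_RELATIONS = {"created-by", "depicts", "part-of"}

def handle_linker_post(body):
    """Decode 'id_one=123&id_two=224&relation=created-by' and return the
    resulting (subject, relation, object) triple, or None if the relation
    type isn't in the server's vocabulary."""
    form = parse_qs(body)
    relation = form["relation"][0]
    if relation not in KNOWN_RELATIONS:
        return None
    return (form["id_one"][0], relation, form["id_two"][0])

print(handle_linker_post("id_one=123&id_two=224&relation=created-by"))
# ('123', 'created-by', '224')
```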
FWIW, Link *Headers* make sense to me when the links are metadata about the resource. However, when the links are to be treated as first-class resources themselves, I think Link Headers are not the right choice. For example, I assert that links appearing in the <head /> section of typical HTML documents are metadata. Links that appear in the <body /> section are not. I am not privy to the details of your particular use of links, but I get the impression that they are more than metadata. If true, I would consider placing the links either in the body of the resource or, when the resource does not support body links easily (certain binary files, etc.), I would add a single link header (Link: <http://www.example.org/resource123/links>; rel=related) that points to the related resource that holds all the links. One possible advantage of this approach is that the related links are now exposed in a way that easily allows searching and filtering using well-established mechanisms. Also, this sentence from your last post caught my eye: <snip> It's unavoidable (I think) that a case in which you might simply add a foreign key or a bridge table when thinking relationally, you need to do lots of contortions to do RESTfully. </snip> This is very true. For me, that's an excellent reminder that the relational approach is not the same as a RESTful approach. mca http://amundsen.com/blog/
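[Editor's note: a rough sketch of the single-Link-header approach mca describes, where the resource itself carries only one Link header pointing at a dedicated links resource, and that links resource supports simple filtering. All names, URLs and data below are illustrative, not from the thread.]

```python
# Illustrative sketch: the resource exposes exactly one Link header
# (Link: <.../resource123/links>; rel=related); the links resource it
# points to holds all the actual links and can be searched/filtered.
RESOURCE_LINKS = {
    "resource123": [
        {"href": "http://example.org/item/224", "rel": "created-by"},
        {"href": "http://example.org/item/300", "rel": "depicts"},
    ],
}

def head_resource(resource_id):
    """Response headers for the resource itself: just one Link header."""
    return {"Link": "<http://www.example.org/%s/links>; rel=related" % resource_id}

def get_links(resource_id, rel=None):
    """GET the links resource, optionally filtered (e.g. ?rel=created-by)."""
    links = RESOURCE_LINKS.get(resource_id, [])
    return [l for l in links if rel is None or l["rel"] == rel]
```

The design choice here is that the binary resource (an image, say) never needs to be modified; everything link-related lives behind the one pointer.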
I agree with Mike.
IMHO, this is a meta-level problem: the application point of view
vs. the protocol point of view.
Taking the building/architect example, the architect of a building is
metadata for the application (information about the building). But as
long as the UA needs to manage this information (read/edit), it should
be a resource in itself, i.e. data at the protocol level. The fact that
this metadata can't be embedded in one of the representations of the
building resource (e.g. an image) just implies the need for another
representation (conneg is your friend), not that this information
belongs in HTTP headers. RDF was designed for exactly this use case:
providing external metadata about resources that can't embed it. Turtle
or N3 can be used by those reticent about XML, or just a custom
key:value format (but don't reinvent the wheel :)
Since the metadata is a resource, it can be easily edited (using POST,
PUT, PATCH, or lower-granularity (sub)resource manipulation).
Example (Turtle, not tested):
==============
GET /buildings/1234
Accept: image/jpeg
----------
200 OK
...
(jpeg representation of the building)
==============
GET /buildings/1234
Accept: text/turtle
---------
200 OK
...
@prefix rel: <http://www.example.com/relations/ns#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix ent: <http://www.example.com/entities/ns#> .
<http://www.example.com/buildings/1234>
a ent:building ;
dc:title "A nice building";
rel:architects <http://www.example.com/buildings/1234/architects> .
==============
GET /buildings/1234/architects
Accept: text/turtle
----
200 OK
@prefix ent: <http://www.example.com/entities/ns#> .
<http://www.example.com/buildings/1234>
ent:Architect <http://www.example.com/architects/spamegg> .
==============
POST /buildings/1234/architects
...
http://www.example.com/architects/foobar
-------
201 Created
...
@prefix ent: <http://www.example.com/entities/ns#> .
<http://www.example.com/buildings/1234>
ent:Architect <http://www.example.com/architects/foobar>,
<http://www.example.com/architects/spamegg> .
===============
GET /architects/foobar
Accept: text/turtle
------
200 OK
@prefix ent: <http://www.example.com/entities/ns#> .
@prefix rel: <http://www.example.com/relations/ns#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://www.example.com/architects/foobar>
a foaf:Person ;
foaf:name "Foo Bar";
rel:hasBuilt <http://www.example.com/buildings/1234> ,
<http://www.example.com/buildings/5678> .
If the list of buildings for an architect needs to be manipulated, the
same pattern can be used, creating a /architects/foobar/buildings
resource, using HATEOAS...
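[Editor's note: the subresource pattern Yannick describes (GET the architects of a building as Turtle, POST a bare architect URI to add one) can be sketched as below. The function names, the in-memory store, and the minimal Turtle rendering are all invented for illustration.]

```python
# Rough sketch of GET/POST on /buildings/{id}/architects (illustrative only).
ARCHITECTS = {"1234": {"http://www.example.com/architects/spamegg"}}

def get_architects(building_id):
    """Render the architects of a building as (minimal, unvalidated) Turtle."""
    uris = sorted(ARCHITECTS.get(building_id, ()))
    objects = " ,\n  ".join("<%s>" % uri for uri in uris)
    return (
        "@prefix ent: <http://www.example.com/entities/ns#> .\n"
        "<http://www.example.com/buildings/%s>\n"
        "  ent:Architect %s ." % (building_id, objects)
    )

def post_architect(building_id, architect_uri):
    """POST body is a bare architect URI; returns an HTTP status code."""
    ARCHITECTS.setdefault(building_id, set()).add(architect_uri)
    return 201  # Created, as in the exchange above
```

Editing the link set is then just ordinary resource manipulation, which is the point of the pattern: the links live in a representation, not in protocol headers.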
--
Yannick Loiseau, PhD
Laboratoire LIMOS - UMR 6158
Université Blaise Pascal
Subbu Allamaraju wrote: > Sam, > > I don't disagree that there are use cases, but I am not sure if > letting clients manage relations is the right way to implement > distributed systems. The approach you describe below is similar to a > client trying to set up foreign key relations between different > database entities. This model leaks abstractions and is not ideal for > writing large systems. > > For instance, take a simple shopping cart application. The server may > have decided to use links to associate products to a cart, but that > does not mean that clients should be able to create/edit/delete those > links. Instead, links come into being when the client "adds products > to a cart" and they go away when the client "removes a product from > the cart". That is the right level of abstraction for the client. > > IMO, links are for servers to provide navigability between resources, > and to let clients make state transitions via links. > Let's leave aside the issue of whether to introduce LINK and UNLINK. What about the value of Link headers themselves? I really like the idea of propagating additional metadata about the resource without polluting the resource. Sometimes you just can't modify the resource (image). Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
What is "polluting" about links? I suspect you are saying that adding links to an object serialized into XML pollutes the representation. But is that polluting, or making the representation contextual and more useful? In any case, link headers and links in a representation body are not always equivalent. For images etc., link headers are certainly useful. But in XML cases, you can define more context around link elements. Subbu
On Tue, Jun 30, 2009 at 11:52 PM, Subbu Allamaraju <subbu@...> wrote: > Could you explain why this is a "large deviation from HTTP"? Perhaps some > reference to an RFC or spec would help. > You have designed your own interface with rules such as using your own identifiers, how to find the id field (which presumably lives in the URL and/or entity-body), how and where to submit it, what to expect in return, etc. These rules would have to be documented, understood and (faithfully) implemented by clients wishing to consume your service. The task at hand, however, simply involves linking two resources (identified by URLs) together ala section 19.6.1.2 of RFC2068<http://tools.ietf.org/html/rfc2068#section-19.6.1.2>. The question then is, knowing that introducing [not so] new HTTP verbs is difficult bordering on impossible, how can it be done using only headers ala Set-Cookie. I am also not sure what you mean by "HTTP was designed to create a web of > opaque resources". Opaque to the protocol operations, or the client, or the > server? > HTTP servers typically don't care what's inside the entity-body - it's up to the client to interpret the contents. Now we're talking about using non-HyperText resources over HTTP, in which case we need somewhere for metadata including links - either in-band courtesy of wrapper formats like Atom, out-of-band with HTTP headers, or separately altogether with things like RDF. I know what I think is the simplest & most elegant option... Even in HTML, clients don't specify links. They just follow them. > Yes they do - both directly (think links in rich text areas) and indirectly (as the result of some action). This only works because HTML is HyperText... for anything else it's a non-starter. > As far as this use case is concerned, with the approach you are suggesting, > the client will end up implementing a lot of code that really belongs to the > server. For instance, it will have to know what it means to mount a storage > resource. 
I am no expert in the particular domain of allocating > computing/storage devices, but my hunch is that the interface you are > describing is not abstract enough for clients. > The client just needs to be able to tell the server it wants two things linked - the server works out what that means and how it translates to operations on underlying infrastructure. Whether the client has to pull a lever (e.g. your controller suggestion above) or suggest the link directly doesn't make a great deal of difference, and certainly doesn't translate to "implementing a lot of code that really belongs to the server". > HTTP is not SQL. It is not a data manipulation API. Whether you call > "attach", "mount" etc RPC or resources, such functionality belongs to the > server. > Obviously the actual functionality *is* on the server - it's just a case of exposing it to the clients in an intuitive way. Having to define rules isn't intuitive. > In the book we're currently writing ( > http://www.restful-webservices-cookbook.org/), we explicitly encourage > using what we call "sidekick" and "controller" resources to provide a > meaningful abstraction to clients. In the absence of such notions, servers > will end up providing leaky abstractions to clients, which is certainly not > the intent of REST. > I'm not sure that it's necessary to channel linking requests through "controllers" - after all, there's no such mechanism for the WWW, and as a result links are used in all manner of weird and wonderful ways. Perhaps it makes sense to link storage devices together, for example (e.g. a logical volume pointing back to the physical SAN on which it resides) - I'd rather have clients "just know" how to make such associations than have to implement controllers for them on both sides of the conversation. 
Sam > On Jun 30, 2009, at 1:09 PM, Sam Johnston wrote: > > Subbu, >> >> This is a fairly large deviation from HTTP as the "universal interface" >> and >> the details would need to be specified for each implementation. HTTP was >> designed to create a web of opaque resources, only the linking requirement >> was (until now) well satisfied by another standard developed by another >> SSO >> (that is, HTML). The clients specify the links today so it makes sense >> that >> they continue to be able to create the links tomorrow, does it not? If the >> server doesn't like the proposed link it doesn't have to accept it, and it >> can always specify links of its own (which is the way it works with >> hypertext today - consider "manual" links in blog comments vs "automatic" >> links to stylesheets, feeds, etc.) >> >> Consider some of the things I need to be able to do: >> >> - Mount a storage resource on a compute resource >> - Connect a compute resource to a network (or a network to a network >> etc.) >> - Associate arbitrary resources which may be hosted elsewhere (for >> example, PDF build documentation for a server) >> >> Why would I want to create what is essentially an RPC-style interface >> (e.g. >> "mount", "attach", "associate", etc.) for this functionality? Granted if >> that's what I wanted to do then the method you propose below is clean >> (except that the ID should perhaps be the URL) but is there not another >> way? >> >> Sam >> >> On Tue, Jun 30, 2009 at 9:57 PM, Subbu Allamaraju <subbu@...> >> wrote: >> >> Please read the POST as >>> >>> POST /subbu/cart >>> >>>> Content-Type: application/x-www-form-urlencoded >>>> >>>> >>> id=1234 >>> >>>> >>>> >>> Subbu >>> >>> >>> On Jun 30, 2009, at 12:41 PM, Subbu Allamaraju wrote: >>> >>> Hi Peter, >>> >>>> >>>> 1. post the product to the cart "collection" >>>> >>>>> 2. add a link to a product pointing to the cart >>>>> 3. add a link to a cart resource pointing to the product >>>>> 4. 
create a new resource (presumably by POSTing to a known endpoint) >>>>> that >>>>> is essentially a "cart-product instance" that has a link to each >>>>> >>>>> >>>> This still leaks many server-side details to the client. Here is an >>>> alternative. >>>> >>>> 1. The server has a cart resource, and product resources. >>>> >>>> 2. Each product resource found in a search will have a link >>>> >>>> <link rel="http://shop.org/rels/buy" href="http://shop.org/subbu/cart >>>> "/> >>>> >>>> The definition of rel says that the client should use POST to add the >>>> product to the cart. >>>> >>>> 3. Client adds the product to the cart >>>> >>>> POST /subbu/cart >>>> Content-Type: application/xml >>>> >>>> id=1234 >>>> >>>> 4. Server redirects back to the updated cart >>>> >>>> 303 See Other >>>> Location: http://shop.org/subbu/cart >>>> >>>> This is just generalized version of a web based shopping cart, and >>>> provides a simplified interface to the client. As I said before, >>>> expecting >>>> the client to manage links is akin to clients posting SQL statements to >>>> servers. >>>> >>>> Subbu >>>> >>>> >>>> >>>> On Tue, Jun 30, 2009 at 11:59 AM, Subbu Allamaraju <subbu@...> >>>> >>>>> wrote: >>>>> >>>>> >>>>> >>>>>> Sam, >>>>>> >>>>>> I don't disagree that there are use cases, but I am not sure if >>>>>> letting clients manage relations is the right way to implement >>>>>> distributed systems. The approach you describe below is similar to a >>>>>> client trying to setup foreign key relations between different >>>>>> database entities. This model leaks abstractions and is not ideal for >>>>>> writing large systems. >>>>>> >>>>>> For instance, take a simple shopping cart application. The server may >>>>>> have decided to use links to associate products to a cart, but that >>>>>> does not mean that, clients should be able to create/edit/delete those >>>>>> links. 
Instead, links come into being when the client "adds products >>>>>> to a cart" and they go away when the client "removes a product from >>>>>> the cart". That is the right level of abstraction for the client. >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> >>>>> >>>>>> IMO, links are for servers to provide navigability between resources, >>>>>> and to let clients make state transitions via links. >>>>>> >>>>>> Subbu >>>>>> >>>>>> >>>>>> On Jun 30, 2009, at 3:44 AM, Sam Johnston wrote: >>>>>> >>>>>> Hi Subbu, >>>>>> >>>>>>> >>>>>>> On Tue, Jun 30, 2009 at 5:45 AM, Subbu Allamaraju <subbu@... >>>>>>> <subbu%40subbu.org>> >>>>>>> >>>>>>> >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> LINK is similar - how a LINK relationship is created/managed/ >>>>>>> >>>>>>>> destroyed is >>>>>>>>> >>>>>>>>> undefined. >>>>>>>> >>>>>>>> Why isn't that up to the server(s) managing the resources? Links are >>>>>>>> for servers to describe relations between resources, and not for >>>>>>>> clients to manage such relationships. >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> Why so? This use case requires that clients be able to manage links: >>>>>>> virtual >>>>>>> infrastructure is modeled as compute, storage and network resources >>>>>>> and >>>>>>> clients create, delete and link them as they see fit. The server can >>>>>>> too >>>>>>> (for example, implicitly creating a storage resource and linking it >>>>>>> when you >>>>>>> create a compute resource) but the point of OCCI >>>>>>> <http://www.occi-wg.org/>is to allow for client manipulation. >>>>>>> >>>>>>> We're not the only ones who see a need either... the original >>>>>>> authors of the >>>>>>> HTTP spec (RFC 2068) including LINK and >>>>>>> UNLINK<http://tools.ietf.org/html/rfc2068#section-19.6.1.2>verbs for >>>>>>> this around the same as this >>>>>>> I-D < >>>>>>> >>>>>>> >>>>>> http://ftp.ics.uci.edu/pub/ietf/http/draft-pritchard-http-links-00.txt >>>>>> >>>>>> specifying >>>>>>> >>>>>>>> >>>>>>>> same in more detail. 
This is what Mark Nottingham (author of the >>>>>>> Link: header I-D among other things, copied) had to say this morning >>>>>>> on >>>>>>> apps-discuss: >>>>>>> >>>>>>> *- First and foremost, in the absence of the LINK and UNLINK verbs >>>>>>> originally defined in RFC 2068[2] but specifically omitted from RFC >>>>>>> 2616[3], >>>>>>> what is the preferred mechanism for manipulating these links via >>>>>>> HTTP? It >>>>>>> appears that this header is intended for GET requests only, but >>>>>>> presumably >>>>>>> specifying it in POST and PUT requests would be one option that >>>>>>> avoids the >>>>>>> creation of [not so] "new" verbs (bearing in mind that short of >>>>>>> accepting >>>>>>> Link: headers from empty POST/PUT requests, it would be necessary to >>>>>>> GET and >>>>>>> then PUT the entire payload to update links - twice if they were >>>>>>> reciprocal). While there was an attempt a dozen years ago to better >>>>>>> define >>>>>>> the relevant HTTP verbs[4], it strikes me as more sensible to follow >>>>>>> the >>>>>>> example of the Set-Cookie: header for this rather than WebDAV's >>>>>>> example of >>>>>>> creating new verbs (even if we've seen them before) but you guys are >>>>>>> the >>>>>>> experts.* >>>>>>> >>>>>>> Undefined, but I imagine in a PUT/POST body does indeed make the >>>>>>> most sense. >>>>>>> Using the Link header in a request doesn't have well-defined >>>>>>> semantics. >>>>>>> >>>>>>> I wonder then whether it's not sensible to define these semantics in >>>>>>> an[other] Internet Draft (ala Set-Cookie) rather than having everyone >>>>>>> running off and inventing their own in-band solutions... doing so >>>>>>> would make >>>>>>> for some really clever RESTful interfaces. >>>>>>> >>>>>>> Sam >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>> >>> >
Subbu, I'm thinking more of the case where you have more generic services that need to know about certain relationships but don't understand the data format. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Yannick, Does this not strike you as far more complicated than necessary? Under your proposal I need to do a bunch of requests and understand a new language just to work out that two resources are related. The Link: header accomplishes the same job in a far simpler and more performant manner; it's just that until now no mechanism (beyond the HTTP LINK and UNLINK verbs) has been specified for creating such links. Therein lies the question... what is the most intuitive way to associate resources? Given that the relationships are exposed via HTTP headers, one would assume that the same mechanism could/should be used to manage them. Sam
Exactly - it's about making non-hypertext resources first class citizens on the Internet in an intuitive, RESTful manner.

Sure, I could base64-encode them and embed them in Atom (good luck with multi-terabyte virtual hard drives, though!), or reference them from Atom and put up with having to parse XML, understand it and make multiple requests, or even have separate documents dedicated solely to metadata, but that all sounds a bit masochistic.

Incidentally, the Link: headers are extensible, so "you can define more context" (such as the quantity on a relationship between a product and a shopping cart) for them too.

Sam

On Wed, Jul 1, 2009 at 3:40 PM, Bill Burke <bburke@...> wrote:

> Subbu, I'm thinking more of the case where you have more generic services
> that need to know about certain relationships but don't understand the data
> format.
>
> Subbu Allamaraju wrote:
>
>> What is "polluting" about links? I suspect that you are talking about
>> adding links to an object serialized into XML as polluting the
>> representation. But is that polluting, or making the representation
>> contextual and more useful?
>>
>> In any case, link headers and links in a representation body are not
>> always equivalent. For images etc., certainly link headers are useful. But
>> in XML cases, you can define more context around link elements.
>>
>> Subbu
>>
>> On Jul 1, 2009, at 5:23 AM, Bill Burke wrote:
>>
>>> Subbu Allamaraju wrote:
>>>
>>>> Sam,
>>>> I don't disagree that there are use cases, but I am not sure if
>>>> letting clients manage relations is the right way to implement
>>>> distributed systems. The approach you describe below is similar to a
>>>> client trying to set up foreign key relations between different
>>>> database entities. This model leaks abstractions and is not ideal for
>>>> writing large systems.
>>>> For instance, take a simple shopping cart application. The server may
>>>> have decided to use links to associate products to a cart, but that
>>>> does not mean that clients should be able to create/edit/delete those
>>>> links. Instead, links come into being when the client "adds products
>>>> to a cart" and they go away when the client "removes a product from
>>>> the cart". That is the right level of abstraction for the client.
>>>> IMO, links are for servers to provide navigability between resources,
>>>> and to let clients make state transitions via links.
>>>
>>> Let's leave aside the issue of whether to introduce LINK and UNLINK. What
>>> about the value of Link headers themselves? I really like the idea of
>>> propagating additional metadata about the resource without polluting the
>>> resource. Sometimes you just can't modify the resource (image).
>>>
>>> Bill Burke
>>> JBoss, a division of Red Hat
>>> http://bill.burkecentral.com
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
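Sam's point about extensible Link: headers can be sketched as follows; the "quantity" link-param is his hypothetical example rather than a registered attribute, and `format_link` is an illustrative helper, not an existing API:

```python
# Sketch: serialising a Link header value with an extension parameter.
# The "quantity" attribute is the hypothetical example from the post --
# link-params beyond rel/type/title are permitted by Web Linking.

def format_link(uri: str, rel: str, **params: str) -> str:
    """Build one link-value: <uri>; rel="..."; extra link-params."""
    parts = [f"<{uri}>", f'rel="{rel}"']
    parts += [f'{name}="{value}"' for name, value in params.items()]
    return "; ".join(parts)

header = format_link("/products/534", "item", quantity="2")
print("Link: " + header)
# -> Link: </products/534>; rel="item"; quantity="2"
```

A client could set such a header on the request that adds the product, leaving the entity-body untouched.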
I'm thinking even for hypertext-based resources you'd want link headers. Headers seem much simpler than requiring an envelope format like Atom.

Sam Johnston wrote:
> Exactly - it's about making non-hypertext resources first class citizens
> on the Internet in an intuitive, RESTful manner.
>
> Sure I could base64 encode them and embed them in Atom (good luck for
> multi-terabyte virtual hard drives though!), or reference them from Atom
> and put up with having to parse XML, understand it and make multiple
> requests, or even have separate documents dedicated solely to metadata,
> but that all sounds a bit masochistic.
>
> Incidentally the Link: headers are extensible so "you can define more
> context" (such as the quantity on a relationship between a product and a
> shopping cart) for them too.
>
> Sam
>
> On Wed, Jul 1, 2009 at 3:40 PM, Bill Burke <bburke@...> wrote:
>
> [...]

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Peter Keane wrote:
>
> 3. add a link to a cart resource pointing to the product
>
Do you mean like this?
GET /cart
"cart" : {... , "products": [], .... }
adding products:
PUT /cart
"cart": {..., "products": [ "/products/143", "/products/534" ], .... }
removing product 534:
PUT /cart
"cart": {..., "products": [ "/products/143" ], .... }
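A minimal sketch of the GET/edit/PUT cycle above, with plain dicts standing in for the HTTP exchanges (the helper names are illustrative, not an existing API):

```python
# Sketch of the read-modify-write cycle for cart links. The cart
# representation mirrors the JSON above; a plain dict stands in for
# the resource so the logic is self-contained.

def add_product(cart: dict, product_uri: str) -> dict:
    """Representation the client would PUT back after adding a product."""
    products = list(cart.get("products", []))
    if product_uri not in products:
        products.append(product_uri)
    return {**cart, "products": products}

def remove_product(cart: dict, product_uri: str) -> dict:
    """Representation the client would PUT back after removing a product."""
    products = [p for p in cart.get("products", []) if p != product_uri]
    return {**cart, "products": products}

cart = {"products": []}                       # GET /cart
cart = add_product(cart, "/products/143")     # PUT /cart
cart = add_product(cart, "/products/534")     # PUT /cart
cart = remove_product(cart, "/products/534")  # PUT /cart
print(cart)
# -> {'products': ['/products/143']}
```

Note the client only ever edits its own representation of the cart and PUTs the whole thing back; the server remains free to store the relationship however it likes.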
Ah. That makes sense.

(To fully take advantage of these headers, and entity headers in general, client APIs will need to start including headers as part of their representation objects. Most nice-looking object abstractions leave out headers.)

On Wed, Jul 1, 2009 at 6:40 AM, Bill Burke <bburke@...> wrote:

> Subbu, I'm thinking more of the case where you have more generic services
> that need to know about certain relationships but don't understand the data
> format.
>
> [...]
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
Hi Sam,

> On Tue, Jun 30, 2009 at 11:52 PM, Subbu Allamaraju <subbu@...> wrote:
>
>> Could you explain why this is a "large deviation from HTTP"? Perhaps some
>> reference to an RFC or spec would help.
>
> You have designed your own interface with rules such as using your own
> identifiers, how to find the id field (which presumably lives in the URL
> and/or entity-body), how and where to submit it, what to expect in return,
> etc. These rules would have to be documented, understood and (faithfully)
> implemented by clients wishing to consume your service.

You still have not provided a reference on what the deviation is. All the points you make are about application-level choices, and almost all HTTP-based apps and sites make these choices all the time. Certainly you are not calling all of them badly written!

>> I am also not sure what you mean by "HTTP was designed to create a web of
>> opaque resources". Opaque to the protocol operations, or the client, or
>> the server?
>
> HTTP servers typically don't care what's inside the entity-body - it's up
> to the client to interpret the contents. Now we're talking about using
> non-HyperText resources over HTTP, in which case we need somewhere for
> metadata including links - either in-band courtesy of wrapper formats like
> Atom, out-of-band with HTTP headers, or separately altogether with things
> like RDF. I know what I think is the most simple & elegant option...

If you strongly think that client-driven link management is the best and most elegant solution to your problem, why not circulate a proposal to ietf-http-wg@...?

>> Even in HTML, clients don't specify links. They just follow them.
>
> Yes they do - both directly (think links in rich text areas) and indirectly
> (as the result of some action). This only works because HTML is
> HyperText... for anything else it's a non-starter.

I am sorry, but you are misinterpreting. HTML clients NEVER create or establish links between resources. They USE the URIs provided in links (link, a, img etc.) and forms to navigate across resources.

Subbu
Subbu,

On Wed, Jul 1, 2009 at 4:44 PM, Subbu Allamaraju <subbu@...> wrote:

> You still have not provided a reference on what the deviation is. All the
> points you make are about application-level choices, and almost all HTTP
> based apps and sites make these choices all the time. Certainly, you are
> not calling all of them badly written!

We only ever make "application-level choices" when we have to... basic functionality like creating a resource (PUT), and I would argue creating links, should be done natively.

> If you strongly think that client-driven link management is the best and
> most elegant solution to your problem, why not circulate a proposal to
> ietf-http-wg@...?

Sure, that's probably a good next step.

> I am sorry but you are misinterpreting. HTML Clients NEVER create or
> establish links between resources. They USE the URIs provided in links
> (link, a, img etc) and forms to navigate across resources.

What would you say is happening when I tweet a link, then? So far as I can tell, both servers and clients create links all the time.

Sam
Sam Johnston a écrit :
> Does this not strike you as far more complicated than necessary? Under
> your proposal I need to do a bunch of requests and understand a new
> language just to work out that two resources are related.

Indeed, there is a little overhead, since here I modeled a one-to-many (or many-to-many, as suggested before) relationship and added more semantics than necessary, for example by implying the use of RDF, which is IMO the better approach to represent metadata for, and links between, resources. A simpler approach could use text/uri-list [RFC2483] as the media type for the representation of the resource containing the relations (i.e. a simple URI...), as in

===============
GET /buildings/1234
Accept: image/jpeg
----------
200 OK
...
(jpeg representation of the building)
==============
GET /buildings/1234
Accept: text/uri-list
----------
200 OK
...
http://www.example.com/architects/spamegg
==============

granted that it is a one-to-one relation and there is no ambiguity in the link semantics (as with the Link header). The main point is that the link is itself a (sub)resource (or an alternate representation) of the initial resource and can therefore be manipulated directly using existing methods:

=============
PUT /buildings/1234
Content-Type: text/uri-list

http://www.example.com/architects/foobar
=============

> The Link: header accomplishes the same job in a far simpler and more
> performant manner, only until now no mechanism (beyond the HTTP LINK and
> UNLINK verbs) has been specified for creating them.
>
> Therein lies the question... what is the most intuitive way to associate
> resources? Given the relationships are exposed via HTTP headers one
> would assume that the same mechanism could/should be used to manage them.

As stated in Roy's thesis (if I'm right), every piece of information that can or should be manipulated by the UA must be a resource in itself. Moreover, this approach allows many-to-many relations (not talking about several kinds of semantic relations) to be represented and manipulated in the same homogeneous way as simple, non-ambiguous, unique links.

> On Wed, Jul 1, 2009 at 11:48 AM, Yannick Loiseau <yloiseau@...> wrote:
>
> I agree with mike.
> IMHO, it seems it is here a meta-level problem w.r.t. the application
> vs. protocol POV.
> Taking the building/architect example, the architect of a building is
> metadata for the application (information about the building). But as
> long as the UA needs to manage this information (read/edit), it should
> be a resource by itself, i.e. data at the protocol level. The fact that
> this metadata can't be embedded in one of the representations of the
> building resource (e.g. an image) just implies the need for another
> representation (conneg is your friend), not that this information
> pertains to HTTP headers. RDF was designed just for this use case:
> provide external metadata about resources that can't embed them. Turtle
> or N3 can be used for the XML-reticent, or just a custom key:value
> format (but don't reinvent the wheel :)
> Metadata being a resource, it can be easily edited (using POST, PUT,
> PATCH, or lower-granularity (sub)resource manipulation).
>
> example (turtle not tested)
> ==============
> GET /buildings/1234
> Accept: image/jpeg
> ----------
> 200 OK
> ...
> (jpeg representation of the building)
> ==============
> GET /buildings/1234
> Accept: text/turtle
> ---------
> 200 OK
> ...
>
> @prefix rel: <http://www.example.com/relations/ns#> .
> @prefix dc: <http://purl.org/dc/elements/1.1/> .
> @prefix ent: <http://www.example.com/entities/ns#> .
>
> <http://www.example.com/buildings/1234>
>     a ent:building ;
>     dc:title "A nice building" ;
>     rel:architects <http://www.example.com/buildings/1234/architects> .
> ==============
> GET /buildings/1234/architects
> Accept: text/turtle
> ----
> 200 OK
>
> @prefix ent: <http://www.example.com/entities/ns#> .
>
> <http://www.example.com/buildings/1234>
>     ent:Architect <http://www.example.com/architects/spamegg> .
>
> ==============
> POST /buildings/1234/architects
> ...
>
> http://www.example.com/architects/foobar
> -------
> 201 Created
> ...
>
> @prefix ent: <http://www.example.com/entities/ns#> .
>
> <http://www.example.com/buildings/1234>
>     ent:Architect <http://www.example.com/architects/foobar>,
>         <http://www.example.com/architects/spamegg> .
> ===============
> GET /architects/foobar
> Accept: text/turtle
> ------
> 200 OK
>
> @prefix ent: <http://www.example.com/entities/ns#> .
> @prefix rel: <http://www.example.com/relations/ns#> .
> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>
> <http://www.example.com/architects/foobar>
>     a foaf:Person ;
>     foaf:name "Foo Bar" ;
>     rel:hasBuilt
>         <http://www.example.com/buildings/1234>,
>         <http://www.example.com/buildings/5678> .
>
> If the list of buildings for an architect needs to be manipulated, the
> same pattern can be used, creating a /architects/foobar/buildings
> resource, using HATEOAS...
>
> > FWIW, Link *Headers* make sense to me when the links are metadata about
> > the resource. However, when the links are to be treated as first-class
> > resources themselves, I think Link Headers is not the right choice. For
> > example, I assert that links appearing in the <head /> section of
> > typical HTML documents are metadata. Links that appear in the <body />
> > section are not.
> >
> > [...]
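Yannick's text/uri-list exchanges above can be exercised with a small sketch. Per RFC 2483, each line carries one URI, lines starting with "#" are comments, and CRLF line endings are used on the wire; the helper names here are illustrative:

```python
# Sketch: producing and consuming a text/uri-list entity-body [RFC2483].

def dump_uri_list(uris) -> str:
    """Serialise URIs, one per CRLF-terminated line."""
    return "".join(uri + "\r\n" for uri in uris)

def parse_uri_list(body: str):
    """Return the URIs, skipping blank lines and '#' comments."""
    return [line.strip() for line in body.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]

body = dump_uri_list(["http://www.example.com/architects/foobar"])
assert parse_uri_list(body) == ["http://www.example.com/architects/foobar"]
```

The PUT in the example would simply send `dump_uri_list(...)` as the entity-body with `Content-Type: text/uri-list`.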
Subbu Allamaraju wrote:
>
> I am sorry but you are misinterpreting. HTML Clients NEVER create or
> establish links between resources. They USE the URIs provided in links
> (link, a, img etc) and forms to navigate across resources.
>
> Subbu

Those aren't mutually exclusive behaviours, are they?

I don't follow the logic here.
> Subbu Allamaraju wrote:
>>
>> I am sorry but you are misinterpreting. HTML Clients NEVER create
>> or establish links between resources. They USE the URIs provided
>> in links (link, a, img etc) and forms to navigate across resources.
>>
>> Subbu
>
> Those aren't mutually exclusive behaviours, are they?
>
> I don't follow the logic here

Sorry, but I don't follow the question. Sam's comment was about HTML clients creating links, which they don't. Servers create links, sometimes using information provided by clients.

Subbu
Subbu Allamaraju wrote:
> Servers create links, sometimes using information provided by clients.

Why does this mean clients cannot create links?
On Wed, Jul 1, 2009 at 9:24 AM, Mike Kelly <mike@...> wrote:
> Peter Keane wrote:
>
>>
>> 3. add a link to a cart resource pointing to the product
>>
>> Do you mean like this?
>
> GET /cart
> "cart" : {... , "products": [], .... }
>
> adding products:
>
> PUT /cart
> "cart": {..., "products": [ "/products/143", "/products/534" ], .... }
>
> removing product 534:
>
> PUT /cart
> "cart": {..., "products": [ "/products/143" ], .... }
>
Mike-
Yes, that's basically what I was proposing. (I was thinking specifically in
terms of Atom, but the GET/EDIT/PUT pattern is exactly the approach I was
suggesting). That exact operation might well be implemented by
drag-and-drop (for example) in the UI, but the underlying API operation
would be much as you describe.
--peter
Mike Kelly wrote:
> Subbu Allamaraju wrote:
>> Servers create links, sometimes using information provided by clients.
>
> Why does this mean clients cannot create links?

Why would they? Seems like a corner case.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Wed, Jul 1, 2009 at 6:16 PM, Subbu Allamaraju <subbu@...> wrote:

>>> I am sorry but you are misinterpreting. HTML Clients NEVER create or
>>> establish links between resources. They USE the URIs provided in links
>>> (link, a, img etc) and forms to navigate across resources.
>>>
>>> Subbu
>>
>> Those aren't mutually exclusive behaviours, are they?
>>
>> I don't follow the logic here
>
> Sorry, but I don't follow the question.

The point is you don't have to be able to create links to consume them, and vice versa. Both servers and clients create links, and there seems little point in denying, discouraging or indeed even discussing it (or at least, I don't see the point).

> Sam's comment was about HTML clients creating links, which they don't.
> Servers create links, sometimes using information provided by clients.

Actually, in the example(s) I gave the servers don't even [need to] know that there are links - they just accept 1's and 0's from one client and blat them out to another [1]. Remembering that pretty much everything was static when HTTP was created, one could argue that *all* the links are (or at least were) created by clients :)

Sam

1. Technically Twitter detects URLs and wraps them in A elements, but that's neither here nor there - often this function is done on the client side ala Blogger, Gmail, etc.
On Jul 1, 2009, at 9:24 AM, Mike Kelly wrote:

> Subbu Allamaraju wrote:
>
>> Servers create links, sometimes using information provided by clients.
>
> Why does this mean clients cannot create links?

They (browsers) do not. But that is beside the point for this thread. The key question being debated is whether link management is a client concern or a consequence of some protocol operation performed by a client.

Subbu
On Wed, Jul 1, 2009 at 6:37 PM, Subbu Allamaraju <subbu@...> wrote:
> The key question being debated is whether link management is a client
> concern or it is a consequence of some protocol operation performed by a
> client.
>
Ok, so here's a separate but similar issue. I've just released an
Internet-Draft (draft-johnston-http-category-header,
http://tools.ietf.org/search/draft-johnston-http-category-header-00)
which allows web resources to be categorised, following Atom's example.
Basically, between this and the Link: header you can "unwrap" individual
resources, which leaves a more performant, intuitive "universal" interface
(while still using Atom for collections).
Again, both clients and servers will want to set categories, but I would
say this should certainly be done directly via the headers rather than
via some "controller" with associated rules.
Sam
Internet Engineering Task Force S. Johnston
Internet-Draft Australian Online Solutions
Intended status: Experimental July 1, 2009
Expires: January 2, 2010
Web Categories
draft-johnston-http-category-header-00
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on January 2, 2010.
Copyright Notice
Copyright (c) 2009 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents in effect on the date of
publication of this document (http://trustee.ietf.org/license-info).
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document.
Abstract
This document specifies the Category header-field for HyperText
Transfer Protocol (HTTP), which enables the sending of taxonomy
information in HTTP headers.
Johnston Expires January 2, 2010 [Page 1]
Internet-Draft Abbreviated Title July 2009
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1. Requirements Language . . . . . . . . . . . . . . . . . . . 3
2. Categories . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3. The Category Header Field . . . . . . . . . . . . . . . . . . . 4
3.1. Examples . . . . . . . . . . . . . . . . . . . . . . . . . 4
4. IANA Considerations . . . . . . . . . . . . . . . . . . . . . . 5
4.1. Category Header Registration . . . . . . . . . . . . . . . 5
5. Security Considerations . . . . . . . . . . . . . . . . . . . . 5
6. Internationalisation Considerations . . . . . . . . . . . . . . 5
7. References . . . . . . . . . . . . . . . . . . . . . . . . . . 6
7.1. Normative References . . . . . . . . . . . . . . . . . . . 6
7.2. Informative References . . . . . . . . . . . . . . . . . . 6
Appendix A. Notes on use with HTML . . . . . . . . . . . . . . . . 7
Appendix B. Notes on use with Atom . . . . . . . . . . . . . . . . 7
Appendix C. Acknowledgements . . . . . . . . . . . . . . . . . . . 8
Appendix D. Document History . . . . . . . . . . . . . . . . . . . 8
Appendix E. Outstanding Issues . . . . . . . . . . . . . . . . . . 8
Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 9
1. Introduction
A means of indicating categories for resources on the web has been
defined by Atom [RFC4287]. This document defines a framework for
exposing category information in the same format via HTTP headers.
The atom:category element conveys information about a category
associated with an entry or feed. A given atom:feed or atom:entry
element MAY have zero or more categories which MUST have a "term"
attribute (a string that identifies the category to which the entry
or feed belongs) and MAY also have a scheme attribute (an IRI that
identifies a categorization scheme) and/or a label attribute (a
human-readable label for display in end-user applications).
Similarly a web resource may be associated with zero or more
categories as indicated in the Category header-field(s). These
categories may be divided into separate vocabularies or "schemes"
and/or accompanied with human-friendly labels.
[[ Feedback is welcome on the ietf-http-wg@... mailing list,
although this is NOT a work item of the HTTPBIS WG. ]]
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, [RFC2119], as
scoped to those conformance targets.
This document uses the Augmented Backus-Naur Form (ABNF) notation of
[RFC2616], and explicitly includes the following rules from it:
quoted-string, token. Additionally, the following rules are included
from [RFC3986]: URI.
2. Categories
In this specification, a category is a grouping of resources by
'term', from a vocabulary ('scheme') identified by an IRI [RFC3987].
It is comprised of:
o A "term" which is a string that identifies the category to which
the resource belongs.
o A "scheme" which is an IRI that identifies a categorization scheme
(optional).
o A "label" which is a human-readable label for display in end-user
applications (optional).
A category can be viewed as a statement of the form "resource is from
the {term} category of {scheme}, to be displayed as {label}", for
example "'Loewchen' is from the 'dog' category of 'animals', to be
displayed as 'Canine'".
3. The Category Header Field
The Category entity-header provides a means for serialising one or
more categories in HTTP headers. It is semantically equivalent to
the atom:category element in Atom [RFC4287].
Category = "Category" ":" #category-value
category-value = term *( ";" category-param )
category-param = ( ( "scheme" "=" <"> scheme <"> )
| ( "label" "=" quoted-string )
| ( "label*" "=" enc2231-string )
| ( category-extension ) )
category-extension = token [ "=" ( token | quoted-string ) ]
enc2231-string = <extended-value, see [RFC2231], Section 7>
term = token
scheme = URI
Each category-value conveys exactly one category but there may be
multiple category-values for each header-field and/or multiple
header-fields per [RFC2616].
Note that schemes are REQUIRED to be absolute URLs in Category
headers, and MUST be quoted if they contain a semicolon (";") or
comma (",") as these characters are used to separate category-params
and category-values respectively.
The "label" parameter is used to label the category such that it can
be used as a human-readable identifier (e.g. a menu entry).
Alternately, the "label*" parameter MAY be used encode this label in
a different character set, and/or contain language information as per
[RFC2231]. When using the enc2231-string syntax, producers MUST NOT
use a charset value other than 'ISO-8859-1' or 'UTF-8'.
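As an illustration (not part of the draft text), one category-value matching the ABNF above could be serialised like this; `format_category` is a hypothetical helper, and quoting the scheme unconditionally also satisfies the ";"/"," quoting requirement:

```python
# Sketch: serialising one category-value per the ABNF in Section 3.
# The scheme is always quoted, which covers the requirement to quote
# schemes containing ";" or ",".

def format_category(term: str, scheme: str = None, label: str = None) -> str:
    value = term
    if scheme is not None:
        value += f'; scheme="{scheme}"'
    if label is not None:
        value += f'; label="{label}"'
    return value

value = format_category("dog", scheme="http://purl.org/net/animals",
                        label="Canine")
print("Category: " + value)
# -> Category: dog; scheme="http://purl.org/net/animals"; label="Canine"
```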
3.1. Examples
NOTE: Non-ASCII characters used in prose for examples are encoded
using the format "Backslash-U with Delimiters", defined in Section
5.1 of [RFC5137].
For example:
Category: dog
indicates that the resource is in the "dog" category.
Category: dog; label="Canine"; scheme="http://purl.org/net/animals"
indicates that the resource is in the "dog" category, from the
"http://purl.org/net/animals" scheme, and should be displayed as
"Canine".
The example below shows an instance of the Category header encoding
multiple categories, and also the use of [RFC2231] encoding to
represent both non-ASCII characters and language information.
Category: dog; label="Canine"; scheme="http://purl.org/net/animals",
lowchen; label*=UTF-8'de'L%c3%b6wchen;
scheme="http://purl.org/net/animals/dogs"
Here, the second category has a label encoded in UTF-8, uses the
German language ("de"), and contains the Unicode code point \u'00F6'
("LATIN SMALL LETTER O WITH DIAERESIS").
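A simplified parser for the examples above (again illustrative, not part of the draft): it splits naively on ";", so it assumes no ";" inside quoted strings and does not decode RFC 2231 "label*" extended values:

```python
# Simplified sketch: parsing a single category-value from Section 3.1.
# Assumes no ";" inside quoted strings; "label*" values are left encoded.

def parse_category(value: str) -> dict:
    parts = [part.strip() for part in value.split(";")]
    category = {"term": parts[0]}
    for param in parts[1:]:
        name, _, raw = param.partition("=")
        category[name.strip()] = raw.strip().strip('"')
    return category

cat = parse_category('dog; label="Canine"; scheme="http://purl.org/net/animals"')
print(cat)
# -> {'term': 'dog', 'label': 'Canine', 'scheme': 'http://purl.org/net/animals'}
```

A full implementation would additionally split multiple category-values on unquoted commas and decode enc2231-strings per [RFC2231].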
4. IANA Considerations
4.1. Category Header Registration
This specification adds an entry for "Category" in HTTP to the
Message Header Registry [RFC3864] referring to this document:
Header Field Name: Category
Protocol: http
Status: standard
Author/change controller:
IETF (iesg@...)
Internet Engineering Task Force
Specification document(s):
[ this document ]
5. Security Considerations
The content of the Category header-field is not secure, private or
integrity-guaranteed, and due caution should be exercised when using
it.
6. Internationalisation Considerations
Category header-fields may be localised depending on the Accept-Language
header-field, as defined in section 14.4 of [RFC2616].
Scheme IRIs in atom:category elements may need to be converted to
URIs in order to express them in serialisations that do not support
IRIs, as defined in section 3.1 of [RFC3987]. This includes the
Category header-field.
7. References
7.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2231] Freed, N. and K. Moore, "MIME Parameter Value and Encoded
Word Extensions: Character Sets, Languages, and
Continuations", RFC 2231, November 1997.
[RFC2616] Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.
[RFC3864] Klyne, G., Nottingham, M., and J. Mogul, "Registration
Procedures for Message Header Fields", BCP 90, RFC 3864,
September 2004.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
Resource Identifier (URI): Generic Syntax", STD 66,
RFC 3986, January 2005.
[RFC3987] Duerst, M. and M. Suignard, "Internationalized Resource
Identifiers (IRIs)", RFC 3987, January 2005.
[RFC4287] Nottingham, M. and R. Sayre, "The Atom Syndication
Format", RFC 4287, December 2005.
[RFC5137] Klensin, J., "ASCII Escaping of Unicode Characters",
RFC 5137, February 2008.
7.2. Informative References
[OCCI] Open Grid Forum (OGF), Edmonds, A., Metsch, T., Johnston,
S., and A. Richardson, "Open Cloud Computing Interface
(OCCI)", <http://purl.org/occi>.
[RFC2068] Fielding, R., Gettys, J., Mogul, J., Nielsen, H., and T.
Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1",
RFC 2068, January 1997.
[W3C.REC-html401-19991224]
Raggett, D., Hors, A., and I. Jacobs, "HTML 4.01
Specification",
<http://www.w3.org/TR/1999/REC-html401-19991224>.
[W3C.WD-html5-20090423]
Hyatt, D. and I. Hickson, "HTML 5", April 2009,
<http://www.w3.org/TR/2009/WD-html5-20090423>.
[draft-nottingham-http-link-header]
Nottingham, M., "Web Linking",
draft-nottingham-http-link-header-05 (work in progress),
April 2009.
[rel-tag-microformat]
Celik, T., Marks, K., and D. Powazek, "rel="tag"
Microformat", <http://microformats.org/wiki/rel-tag>.
Appendix A. Notes on use with HTML
In the absence of a dedicated category element in HTML 4
[W3C.REC-html401-19991224] and HTML 5 [W3C.WD-html5-20090423],
category information (including user-supplied folksonomy
classifications) MAY be exposed using HTML A and/or LINK elements by
concatenating the scheme and term:
category-link = scheme term
scheme = URI
term = token
These category-links MAY form a resolvable "tag space" in which case
they SHOULD use the "tag" relation-type per [rel-tag-microformat].
Alternatively META elements MAY be used:
o where the "name" attribute is "keywords" and the "content"
attribute is a comma-separated list of term(s)
o where the "http-equiv" attribute is "Category" and the "content"
attribute is a comma-separated list of category-value(s)
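To make the concatenation concrete, here is a minimal Python sketch of the category-link construction above; the scheme URI and term are hypothetical examples, not values defined by this draft.

```python
# Sketch of Appendix A's category-link construction; the scheme URI and
# term below are hypothetical examples.
scheme = "http://example.com/tags/"  # scheme = URI
term = "cloud"                       # term   = token

# category-link = scheme term (simple concatenation)
category_link = scheme + term

# Exposed via an HTML A element, using the "tag" relation-type when the
# links form a resolvable "tag space".
a_element = '<a rel="tag" href="%s">%s</a>' % (category_link, term)

# Alternatively, a META element with a comma-separated list of terms.
meta_element = '<meta name="keywords" content="cloud,infrastructure">'

print(a_element)
```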
Appendix B. Notes on use with Atom
Where the cardinality is known to be one (for example, when
retrieving an individual resource) it MAY be preferable to render the
resource natively over HTTP without Atom structures. In this case
the contents of the atom:content element SHOULD be returned as the
HTTP entity-body and metadata including the type attribute and atom:
category element(s) via HTTP header-field(s).
This approach SHOULD NOT be used where the cardinality is not
guaranteed to be one (for example, search results, which MAY happen to
return a single result).
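As a rough illustration of this appendix, the following Python sketch maps a single Atom entry onto a plain HTTP response, with the content as the entity-body and the category exposed as a header-field; the entry layout and Category formatting here are informal illustrations, not the draft's formal syntax.

```python
# Sketch: render a single Atom entry "natively" over HTTP. The content
# becomes the entity-body; the type attribute and atom:category data move
# into header-fields. The dict layout and Category header formatting are
# hypothetical illustrations.
def render_natively(entry):
    headers = {
        "Content-Type": entry["type"],  # from atom:content's type attribute
        "Category": ", ".join(
            '%s; scheme="%s"' % (c["term"], c["scheme"])
            for c in entry["categories"]
        ),
    }
    return headers, entry["content"]

headers, body = render_natively({
    "type": "text/plain",
    "content": "hello world",
    "categories": [{"term": "note", "scheme": "http://example.com/cat/"}],
})
print(headers["Category"])
```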
Appendix C. Acknowledgements
The author would like to thank Mark Nottingham for his work on Web
Linking [draft-nottingham-http-link-header] (on which this document
was based) and the authors of [RFC2068] for the specification of the
Link: header-field on which it builds.
The author would like to thank members of the OGF's Open Cloud
Computing Interface [OCCI] working group for their contributions and
others who commented upon, encouraged and gave feedback to this
draft.
Appendix D. Document History
[[ to be removed by the RFC editor should the document proceed to
publication as an RFC. ]]
-00
* Initial draft based on draft-nottingham-http-link-header-05
Appendix E. Outstanding Issues
[[ to be removed by the RFC editor should the document proceed to
publication as an RFC. ]]
The following issues are outstanding and should be addressed:
1. Is extensibility of Category headers necessary as is the case for
Link: headers? If so, what are the use cases?
2. Is supporting multi-lingual representations of the same
category(s) necessary? If so, what are the risks of doing so?
3. Is a mechanism for maintaining Category header-fields required?
If so, should it use the headers themselves or some other
mechanism?
4. Does this proposal conflict with others in the same space? If
so, is it an improvement on what exists?
Author's Address
Sam Johnston
Australian Online Solutions
GPO Box 296
Sydney, NSW 2001
Email: samj@...
URI: http://samj.net/
Hi, everyone,

I'm a newbie here (though not to REST in general), and the list archives have been a great help in clarifying my understanding of a lot of REST concepts and suggesting good design elements. I have one part of my design right now where I'm unsure what a good RESTful approach would be.

I have resources that support GET and PUT, but contain some parts that clients are not allowed to modify. (This doesn't seem like an uncommon case; I would think that navigation links, for example, would typically not be modifiable in a PUT.) So is it better to:

1. Require clients to submit all the read-only parts unmodified in a PUT, and respond with an error code if they are absent or altered?
2. Take advantage of the leniency allowed in a server's implementation of PUT to ignore the read-only elements (or their absence)?
3. Separate read-only elements into a sub-resource that only supports GET? (This may not be feasible for resources which must be created as a whole.)

or something else?

Second, there are some elements that are modifiable or not depending on the privileges held by the (authenticated) user. I would think this would be expressed by a difference in the representation returned to the client, but what should that difference be? (My representations are XML documents, if there isn't a more general solution.)

And in a broader sense, I'd like the client to know which elements of the resource the user can modify, for presentation purposes. Is there a generally accepted way to do this, perhaps with form templates or XForms?

I'd be interested in any comments or alternative approaches, if I'm just looking at it from the wrong angle.

Thanks,

-- Jim
Jim,

Typically you would express the overall writability of a resource via OPTIONS (e.g. if you can only GET it, it's read-only), but if you've got, say, a template-driven website and you only want the body to be updated then that's something different.

I would almost certainly NOT be using PUT for this, rather accepting POSTs of just the modifiable data (perhaps in HTML forms or some XML-based format). If you were to use XML then a GET (with the appropriate Accept: header) could return just the parts which are modifiable by the client. Optionally you could add information to the URL about whether the client wants just the writable elements or the whole lot, or even mark up the elements as writable (or not).

Hope that helps,

Sam

On Wed, Jul 1, 2009 at 9:42 PM, Jim Edwards-Hewitt <jimeh@...> wrote:
> [...]
Jim:

I am addressing the security portion of your post. Hopefully this will give you some ideas.

<snip> there are some elements that are modifiable or not depending on the privileges held by the (authenticated) user. </snip>

First, I favor managing access rights using a combination of URI + HTTP_Method + Auth'ed_User. For example:

- user1 has GET,HEAD,OPTIONS for /collection/
- manager1 has GET,HEAD,OPTIONS,POST,PUT for /collection/
- admin1 has GET,HEAD,OPTIONS,POST,PUT,DELETE for /collection/

This means I focus on defining the proper Resources (addressed via URI) when I want to limit what state representations clients and server share. In the case you provide, I would consider different Resources to handle different update privileges for the same stored data. This also clears up any attempts at doing partial updates (and therefore clears up caching issues), since none are needed now that there are different resources to handle the details.

In my example, the ability to pass different state representations is modeled as different resources. These various "secured" resource variations might still all "map" to the same data storage on the server, but that's not interesting to the client anyway, since the data model is *not* the resource model in RESTful implementations.

mca
http://amundsen.com/blog/

On Wed, Jul 1, 2009 at 15:42, Jim Edwards-Hewitt <jimeh@...> wrote:
> [...]
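mca's URI + HTTP_Method + Auth'ed_User model above can be sketched in a few lines of Python; the table simply restates his example, and the function name is my own.

```python
# Sketch of access rights keyed by (user, URI) -> allowed HTTP methods,
# restating the example above; names are illustrative only.
ACL = {
    ("user1",    "/collection/"): {"GET", "HEAD", "OPTIONS"},
    ("manager1", "/collection/"): {"GET", "HEAD", "OPTIONS", "POST", "PUT"},
    ("admin1",   "/collection/"): {"GET", "HEAD", "OPTIONS", "POST", "PUT",
                                   "DELETE"},
}

def allowed(user, method, uri):
    """Return True if the authenticated user may use this method on this URI."""
    return method in ACL.get((user, uri), set())

# A request can then be authorized (or rejected with 403/405) up front:
print(allowed("user1", "DELETE", "/collection/"))  # False
```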
Hi Jim,

can you provide an example representation for the mutable/immutable use case?

Jan

On Jul 1, 2009, at 3:42 PM, Jim Edwards-Hewitt wrote:
> [...]
Hi,
suppose a situation where clients know that creating a lock on some
resource
http://www.example.com/docs/1234
is done by PUTing to
http://www.example.com/properties/1234?lock
I see two ways to address the situation where the lock may already
exist and the request should then fail:
a) PUT /properties/1234?lock
If-None-Match: *
304 Precondition Failed
b) PUT /properties/1234?lock
409 Conflict
The former raises the question of what the server should do when the
client does not make the request conditional (-> 409??), and regarding
the latter I am not sure if the semantics are correct.
Can anybody provide a clue?
Jan
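For what it's worth, the two options reduce to a small server-side decision; a Python sketch follows (note that the correct conditional-failure status is 412 Precondition Failed, as the thread goes on to establish; function and argument names are illustrative).

```python
# Sketch of the server-side choice for "PUT /properties/1234?lock" when a
# lock may already exist. Option (a) uses If-None-Match: *; option (b) is
# unconditional. Names are illustrative only.
def handle_lock_put(lock_exists, if_none_match=None):
    if if_none_match == "*":
        # (a) conditional create: fail iff the lock resource already exists
        return 412 if lock_exists else 201  # 412 Precondition Failed
    # (b) unconditional create: report the existing lock as a conflict
    return 409 if lock_exists else 201      # 409 Conflict
```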
Doh - got things mixed up: 304 should obviously have been 412.

On Jul 3, 2009, at 10:01 PM, Jan Algermissen wrote:
> [...]
>
> a) PUT /properties/1234?lock
> If-None-Match: *
>
> 412 Precondition Failed ***
>
> [...]
If a concurrency header is required but missing, I usually return 412 with additional text reminding of the requirement.

On 2009-07-03, Jan Algermissen <algermissen1971@...> wrote:
> [...]

--
mca
http://amundsen.com/blog/
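That practice of requiring a concurrency header could look roughly like this; the ETag value and body text are made up for illustration.

```python
# Sketch: a PUT handler that *requires* a concurrency header and answers
# 412 with a reminder when it is missing, per the reply above. The ETag
# value and messages are hypothetical.
CURRENT_ETAG = '"abc123"'

def handle_put(if_match=None):
    if if_match is None:
        # Header required but missing: 412 plus a reminder in the body
        return 412, "This resource requires an If-Match header on PUT."
    if if_match != CURRENT_ETAG:
        # Stale precondition: ordinary 412
        return 412, "Precondition failed: stale ETag."
    return 204, ""
```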
Hey Jim,

On Thu, Jul 2, 2009 at 1:12 AM, Jim Edwards-Hewitt <jimeh@...> wrote:
> I have resources that support GET and PUT, but contain some parts that
> clients are not allowed to modify. (This doesn't seem like an uncommon case;
> I would think that navigation links, for example, would typically not be
> modifiable in a PUT.)

IMO, you seem to be conflating the state of the resource and its representation. GET and PUT allow you to retrieve and set the state of the resource. The data format used for transferring that state is the media type. "Navigation links" are specific to the media type and not the state of the resource. I could PUT application/atom+xml or application/x-www-form-urlencoded and GET text/html.

--
Sandeep Shetty
http://sandeep.shetty.in/
Just curious - why would the server want to lock a resource?

Subbu

On Jul 3, 2009, at 7:01 PM, Jan Algermissen wrote:
> [...]
Representations in a request (e.g. PUT or POST) and representations in a response (e.g. GET or a PUT) need not be absolutely the same. HTTP servers are not databases that blindly store the data. Using PUT to update the mutable parts of a resource is perfectly okay. At least, I don't see anything that breaks HTTP.

Subbu

On Jul 2, 2009, at 8:01 AM, Sam Johnston wrote:
> [...]
Just curious - why would the server want to offer such functionality as locking to its clients?

Subbu

On Jul 3, 2009, at 7:01 PM, Jan Algermissen wrote:
> [...]
On Jul 4, 2009, at 7:04 PM, Subbu Allamaraju wrote:
> Just curious - why would the server want to offer such a functionality
> as locking to its clients?

Because it is (currently) a requirement of the owners of the system to use a pessimistic locking strategy, IOW, not HTTP's conditional-write approach with If-Match.

Jan
Leaving aside the question of whether pessimistic locking over the web is a good or bad idea, I would expect a lock at the end of this operation. This, of course, leads to the pattern that the author(s) of RETRO tried.

Subbu

On Jul 4, 2009, at 4:08 PM, Jan Algermissen wrote:
> [...]
Jan Algermissen wrote:
> On Jul 4, 2009, at 7:04 PM, Subbu Allamaraju wrote:
>
> > Just curious - why would the server want to offer such a functionality
> > as locking to its clients?
>
> Because it is (currently) a requirement of the owners of the system to
> use a pessimistic locking strategy, IOW, not HTTPs conditional write
> approach with If-Match.

Use WebDAV LOCK

http://msdn.microsoft.com/en-us/library/aa142897%28EXCHG.65%29.aspx

Bill
Hi,

Just to say Alan Dean will be presenting on REST online tomorrow (Monday 6th July). We use Live Meeting for these sessions and the link is http://snipr.com/virtualaltnet and the timing information is:

In France/Germany/Belgium: 9:00PM
In the UK: 8:00PM
EST in the US: 3:00PM
PST in the US: 12:00PM (midday)

The session should last about an hour and a half (or less) and you can submit questions before or during the session. Also this is the start of 3 REST-related E-VANs we're going to do and we will announce the others shortly.

Ta,

Colin
On Sun, Jul 5, 2009 at 1:04 AM, Subbu Allamaraju <subbu@...> wrote:
> Just curious - why would the server want to offer such a functionality
> as locking to its clients?
>
For my current application (cloud infrastructure API) there are a number of
places where such locking would be useful - for example when dealing with
virtual machine disk images, and it's not hard to conceive of other similar
applications like event ticketing.
In any case my preference is to facilitate users and see what is used (e.g.
draft-johnston-http-category-header<http://tools.ietf.org/html/draft-johnston-http-category-header-00>)
rather than to try to dictate to them what they can and can't do... what the BBC
said today <http://twitter.com/PaulMiller/statuses/2495155769> about content
("We need to stop acting like custodians, and more like facilitators")
arguably applies to the standards community too... else they'll just make
stuff up <http://samj.net/2009/04/revcanonical-considered-harmful.html>.
Sam
On Jul 3, 2009, at 7:01 PM, Jan Algermissen wrote:
> [...]
Bill de hOra wrote:
> Use WebDAV LOCK
>
> http://msdn.microsoft.com/en-us/library/aa142897%28EXCHG.65%29.aspx

Or <http://greenbytes.de/tech/webdav/rfc4918.html#METHOD_LOCK>.

BR, Julian
On Sun, Jul 5, 2009 at 1:03 AM, Subbu Allamaraju <subbu@...> wrote:
> Representations in a request (e.g. PUT or POST) and representations in a
> response (e.g. GET or a PUT) need not be absolutely the same. HTTP servers
> are not databases that blindly store the data. Using PUT to update the
> mutable parts of a resource is perfectly okay. At least, I don't see
> anything that breaks HTTP.

The way I read the RFC is "*The PUT method requests that the enclosed entity be stored [as is] under the supplied Request-URI*", which is obvious for "simple" media types like images where anything else doesn't really make sense. While it does go on to talk about partial updates (mentioning the Content-Range header), PUTting a resource in its entirety knowing that immutable parts will be ignored and/or trigger errors seems neither efficient nor clean to me.

Sam
Well, the "[as is]" isn't actually part of the RFC. The body of the PUT request is simply a *representation* of the state. Consider a resource that could produce alternative representations for GET, say via content negotiation. I could do a PUT with an Atom representation and then still GET a JSON representation afterward. So I don't see that PUT requires you to literally store the exact message body (although, as you mention, that's entirely allowable). In AtomPub [1], for example, this is one reason why a successful PUT returns a 200 OK where the body contains the resulting representation-then the server can apply the PUT to the portions it needs, and the client can see the result. If you like, however, if the server sees that the client has modified an unmodifiable portion of the entity (e.g. some piece of computed metadata, like a last-updated timestamp), the server can reply with a 409 Conflict with additional detail [2]. Jon [1] http://bitworking.org/projects/atom/rfc5023.html#rfc.section.5.4.2 [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.10 ........ Jon Moore Comcast Interactive Media ________________________________ From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Sam Johnston Sent: Monday, July 06, 2009 1:29 PM To: Subbu Allamaraju Cc: Jim Edwards-Hewitt; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Resources with read-only and read-write parts On Sun, Jul 5, 2009 at 1:03 AM, Subbu Allamaraju <subbu@... <mailto:subbu@...> > wrote: Representations in a request (e.g. PUT or POST) and representations in a response (e.g. GET or a PUT) need not be absolutely the same. HTTP servers are not databases that blindly store the data. Using PUT to update the mutable parts of a resource is perfectly okay. At least, I don't see anything that breaks HTTP. 
The way I read the RFC is "The PUT method requests that the enclosed entity be stored [as is] under the supplied Request-URI", which is obvious for "simple" media types like images, where anything else doesn't really make sense. While it does go on to talk about partial updates (mentioning the Content-Range header), PUTting a resource in its entirety knowing that immutable parts will be ignored and/or trigger errors seems neither efficient nor clean to me.

Sam

On Jul 2, 2009, at 8:01 AM, Sam Johnston wrote:

Jim,

Typically you would express the overall writability of a resource via OPTIONS (e.g. if you can only GET it, it's read-only), but if you've got, say, a template-driven website and you only want the body to be updated, then that's something different.

I would almost certainly NOT be using PUT for this, rather accepting POSTs of just the modifiable data (perhaps in HTML forms or some XML-based format). If you were to use XML then a GET (with the appropriate Accept: header) could return just the parts which are modifiable by the client. Optionally you could add information to the URL about whether the client wants just the writable elements or the whole lot, or even mark up the elements as writable (or not).

Hope that helps,

Sam

On Wed, Jul 1, 2009 at 9:42 PM, Jim Edwards-Hewitt <jimeh@...> wrote:

Hi, everyone,

I'm a newbie here (though not to REST in general), and the list archives have been a great help in clarifying my understanding of a lot of REST concepts and suggesting good design elements. I have one part of my design right now where I'm unsure what a good RESTful approach would be.

I have resources that support GET and PUT, but contain some parts that clients are not allowed to modify. (This doesn't seem like an uncommon case; I would think that navigation links, for example, would typically not be modifiable in a PUT.) So is it better to:

1.
Require clients to submit all the read-only parts unmodified in a PUT, and respond with an error code if they are absent or altered? 2. Take advantage of the leniency allowed in a server's implementation of PUT to ignore the read-only elements (or their absence)? 3. Separate read-only elements into a sub-resource that only supports GET? (This may not be feasible for resources which must be created as a whole.) or something else? Second, there are some elements that are modifiable or not depending on the privileges held by the (authenticated) user. I would think this would be expressed by a difference in the representation returned to the client, but what should that difference be? (My representations are XML documents, if there isn't a more general solution.) And in a broader sense, I'd like the client to know which elements of the resource the user can modify, for presentation purposes. Is there a generally accepted way to do this, perhaps with form templates or XForms? I'd be interested in any comments or alternative approaches, if I'm just looking at it from the wrong angle. Thanks, -- Jim
On Mon, Jul 6, 2009 at 7:51 PM, Moore, Jonathan (CIM) <Jonathan_Moore@comcast.com> wrote:

> Well, the "[as is]" isn't actually part of the RFC.

Right, which is why I said "The way I read the RFC is..."

> The body of the PUT request is simply a *representation* of the state.

Which brings me to a question I considered asking but didn't. REST talks about representations of resources, where one resource can have multiple representations.

Let's say I have a person (http://example.com/person/123) and instead of transferring the person over HTTP (which is not yet possible) I make available their portrait, fingerprint(s), a scan of their national ID card and some XML demographics. Where I use distinct content types I can simply PUT a given representation and have the server-side state updated accordingly.

What's the best practice, though, when portrait, fingerprint and scan are all JPEGs? That is, I'm retrieving http://example.com/person/123 with Accept: image/jpeg, but it's impossible to tell whether it's the portrait, fingerprint or scan I'm after. Similarly, what if I want the fingerprint in PNG?

I immediately start thinking about putting the content type and/or link relation into the URL:

http://example.com/person/123;rel=portrait;type=image/jpeg

Then I start to think about cleaning this up a bit:

http://example.com/person/123/portrait.jpg

But this requires routes/rules and doesn't seem as clean/flexible as it should be.

Sam
Sounds like the fingerprint, portrait, and scan could all be subordinate resources. Maybe http://example.com/person/123 returns an HTML or XML document with several links in it, like:
(excuse my not-exactly-Atom XML)...
<entry>
<id>http://example.com/person/123</id>
<link rel="http://example.com/schemas/#portrait"
href="http://example.com/person/123/portrait"
type="image/jpeg"/>
<link rel="http://example.com/schemas/#fingerprint"
href="http://example.com/person/123/fingerprint"
type="image/jpeg"/>
...
</entry>
Etc.
Jon
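Since the thread centers on clients discovering subordinate resources through typed links rather than constructed URIs, here is a rough sketch of the client side: selecting an href by link relation. The XML mirrors Jon's example (the rel URIs are his hypothetical ones); this is purely illustrative, not a prescribed client design.

```python
# Illustrative sketch: pick a subordinate resource's URI by link relation,
# instead of hard-coding URI structure in the client.
import xml.etree.ElementTree as ET

ENTRY = """\
<entry>
  <id>http://example.com/person/123</id>
  <link rel="http://example.com/schemas/#portrait"
        href="http://example.com/person/123/portrait"
        type="image/jpeg"/>
  <link rel="http://example.com/schemas/#fingerprint"
        href="http://example.com/person/123/fingerprint"
        type="image/jpeg"/>
</entry>
"""

def find_link(entry_xml, rel):
    """Return the href of the first <link> whose rel matches, or None."""
    root = ET.fromstring(entry_xml)
    for link in root.findall("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

print(find_link(ENTRY, "http://example.com/schemas/#portrait"))
```

A client written this way keeps working if the server later moves the portrait to a different URI, as long as the link relation stays stable.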
You need to decide if the portrait of a user is a resource or a representation in your system. If it's a resource, it should have a URI. If it's a representation, it should have a media type. Keeping in mind that a proliferation of custom media types limits the usability of a system, I tend to lean on the side of URIs when identifying interesting items. Also, I see the rel="" value as a way to add metadata to links, not a way to tell servers which representation/resource to return.

mca
http://amundsen.com/blog/
I agree with the inefficiency (it is an inconvenience, to be accurate) part. That is why there is no need to require clients to supply the immutable parts. The "supply everything" requirement usually stems from XML-schema-driven applications, which is unnecessary.

Subbu
On Mon, Jul 6, 2009 at 8:38 PM, Moore, Jonathan (CIM) <jonathan_moore@...> wrote:

> Sounds like the fingerprint, portrait, and scan could all be subordinate resources. Maybe http://example.com/person/123 returns an HTML or XML document with several links in it.

Yes, you should certainly be able to enumerate the available resources, but this gets me thinking about a server-side counterpart to the Accept: header (e.g. Offer:). Anyway, I'm liking the idea of subordinate resources, and it fits with the requirement to upload existing multi-file virtual machines.

Another way of achieving the same thing while eliminating the dependency on Atom and XML is to serve up the best representation available along with Link: headers (draft-nottingham-http-link-header, http://tools.ietf.org/html/draft-nottingham-http-link-header-05). Given that "best" often translates to "biggest", you can get just the links in advance using HEAD (which is also compatible with "simple" clients like wget/curl, thus lowering the barriers to entry).

Sam
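For what Sam describes, fetching just the links with HEAD and reading them from Link: headers, the client needs to parse the header value. A deliberately minimal sketch, assuming well-formed input and short rel tokens (real-world Link headers, per the draft he cites and the later RFC 5988/8288, can carry more parameters):

```python
# Minimal parser for a Link header value, mapping relation -> target URI.
# Assumes well-formed input; not a full RFC 8288 implementation.
import re

def parse_link_header(value):
    """Return {rel: uri} for each comma-separated link-value."""
    links = {}
    for part in value.split(","):
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]*)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

hdr = '<http://example.com/person/123/portrait>; rel="portrait", ' \
      '<http://example.com/person/123/fingerprint>; rel="fingerprint"'
print(parse_link_header(hdr)["portrait"])
```

In practice a client would issue a HEAD request, read the Link header(s) from the response, and feed them through something like this before deciding which representation to GET.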
On Jul 6, 2009, at 11:42 AM, mike amundsen wrote: > Keeping in mind that a proliferation of custom media types limits the > usability of a system, I tend to lean on the side of URIs when > identifying > interesting items. I don't blame custom media types for that. Proliferation of custom means of expressing semantics limits the usability of the system. A media type is one of the ways of expressing semantics. However, this is a contradiction in itself, since most non-browser applications have custom/non-standard semantics that do not completely fit standard definitions. My 2 cents Subbu
(I posted this reply privately instead of publicly, so I'm re-posting.) Ah, that does make it more clear. So I might have two (or more) different resources/URIs for one behind-the-scenes object, each of which contains only the elements that the user can modify, which supports GET/HEAD/OPTIONS/PUT? I like that; it seems much cleaner than the direction I was going. I think I'd also want to have a resource which contains all the elements and supports only GET/HEAD/OPTIONS (since, in my case, they're all readable if the user is authorized to see the resource at all), but I think I'd want it to have an edit link to navigate to the appropriate read/write resource. Does that mean that in order to avoid trouble with caching, I'd have to have parallel versions of that resource with different URIs as well, since the edit link would be different depending on the user? -- Jim --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > Jim: > I am addressing the security portion of your post. Hopefully this > will give you some ideas. > <snip> > there are some elements that are modifiable or not depending on the > privileges held by the (authenticated) user. > </snip> > > First, I favor managing access rights using a combination of URI + > HTTP_Method + Auth'ed_User. For example: > - user1 has GET,HEAD,OPTIONS for /collection/ > - manager1 has GET,HEAD,OPTIONS,POST,PUT for /collection/ > - admin1 has GET,HEAD,OPTIONS,POST,PUT,DELETE for /collection/ > > This means I focus on defining the proper Resources (addressed via > URI) when I want to limit what state representations clients and > server share. > > In the case you provide, I would consider different Resources to > handle different update privileges for the same stored data. This > also clears up any attempts at doing partial updates (and therefore > clears up caching issues) since none are needed now that there are > different resources to handle the details. 
> In my example, the ability to pass different state representations
> is modeled as different resources. These various "secured" resource
> variations might still all "map" to the same data storage on the
> server, but that's not interesting to the client anyway, since the
> data model is *not* the resource model in RESTful implementations.
>
> mca
> http://amundsen.com/blog/
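mca's URI + HTTP-method + authenticated-user scheme can be made concrete with something as simple as a lookup table; the sketch below uses the invented users and rules from his example and is illustrative only, not a real authorization layer.

```python
# Sketch of access control keyed on (authenticated user, method, URI),
# following mca's example. Users and rules are the invented ones above.
PERMISSIONS = {
    "user1":    {"/collection/": {"GET", "HEAD", "OPTIONS"}},
    "manager1": {"/collection/": {"GET", "HEAD", "OPTIONS", "POST", "PUT"}},
    "admin1":   {"/collection/": {"GET", "HEAD", "OPTIONS", "POST", "PUT", "DELETE"}},
}

def allowed(user, method, uri):
    """True if this authenticated user may apply this method to this URI."""
    return method in PERMISSIONS.get(user, {}).get(uri, set())

print(allowed("user1", "PUT", "/collection/"))    # False
print(allowed("admin1", "DELETE", "/collection/"))  # True
```

The point of the scheme is that "what can be modified" is answered per resource and per method, so no partial-update or caching special cases are needed.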
The most complicated resource of this type is an Administrator account. The current representation is:
<admin>
<uid>{id}</uid>
<status>{status string}</status>
<roles>
<role>{role-name}</role>
...
</roles>
<vcard>...</vcard>
<link rel="http://...#organization" href={org URL} title="{org name}" />
</admin>
The user may be an ordinary user or a privileged user (leaving aside users with read-only access, since that case is easy.) An ordinary user can modify the contact information in the vcard structure, a privileged user can modify the roles and status (active/disabled/etc.), and the uid is immutable.
(As a further complication, a privileged user can only add or remove roles that their own account possesses.)
I'm already thinking that my roles may be better expressed as a set of <role-name> tags with true/false values rather than by the presence/absence of a <role> tag as in the structure above, to avoid ambiguity on PUTs.
-- Jim
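Jim's idea of explicit true/false role flags can be generated mechanically from a fixed list of known roles. A hypothetical sketch (the role names are invented for illustration; they are not from the thread):

```python
# Sketch: emit each known role as an explicit true/false element rather
# than relying on presence/absence, so a PUT is unambiguous.
import xml.etree.ElementTree as ET

KNOWN_ROLES = ["auditor", "operator", "superuser"]  # hypothetical names

def roles_element(granted):
    """Build a <roles> element with one explicit flag per known role."""
    roles = ET.Element("roles")
    for name in KNOWN_ROLES:
        el = ET.SubElement(roles, name)
        el.text = "true" if name in granted else "false"
    return ET.tostring(roles, encoding="unicode")

print(roles_element({"operator"}))
```

With this shape, a PUT that omits a role element can be treated as an error rather than silently interpreted as a revocation.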
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> Hi Jim,
>
> can you provide an example representation for the mutable/immutable
> use case?
>
> Jan
>
Ah. That makes a lot of sense, but I don't think I've seen it expressed that way before. Are there content-type definitions that make that explicit? (I suppose it mostly applies to XML- or HTML-based formats, where the same kind of language is used to express the state of the resource and the purely content-type elements.)

-- Jim

--- In rest-discuss@yahoogroups.com, Sandeep Shetty <sandeep.shetty@...> wrote:
>
> Hey Jim,
>
> On Thu, Jul 2, 2009 at 1:12 AM, Jim Edwards-Hewitt <jimeh@...> wrote:
> > I have resources that support GET and PUT, but contain some parts that
> > clients are not allowed to modify. (This doesn't seem like an uncommon case;
> > I would think that navigation links, for example, would typically not be
> > modifiable in a PUT.)
>
> IMO, you seem to be confusing the state of the resource with
> its representation. GET and PUT allow you to retrieve and set the
> state of the resource. The data format used for transferring that
> state is the media type. "Navigation links" are specific to the media
> type and not the state of the resource. I could PUT
> application/atom+xml or application/x-www-form-urlencoded and GET
> text/html.
>
> --
> Sandeep Shetty
> http://sandeep.shetty.in/
Hey Jim,

On Tue, Jul 7, 2009 at 3:34 AM, Jim Edwards-Hewitt <jimeh@...> wrote:
> Ah. That makes a lot of sense, but I don't think I've seen it expressed that
> way before. Are there content-type definitions that make that explicit?

A form (POST) that accepts only the values that represent the state of the resource is one way to make it explicit. (Sigh... if only I could say method="PUT" path="/foo" in the form tag.)

Sandeep Shetty
http://sandeep.shetty.in/
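Server-side, Sandeep's "accept only the mutable values" approach amounts to whitelisting fields before applying an update. A hypothetical sketch (the field names loosely echo Jim's admin example but are invented for illustration):

```python
# Sketch: filter an incoming form/PUT body against a whitelist of fields
# this client is allowed to modify; everything else is silently dropped.
# In practice the whitelist would be per-resource and per-user.
MUTABLE_FIELDS = {"vcard", "status"}  # hypothetical

def filter_update(payload):
    """Keep only the keys the client may modify."""
    return {k: v for k, v in payload.items() if k in MUTABLE_FIELDS}

update = {"uid": "123", "status": "disabled", "vcard": "..."}
print(filter_update(update))  # the immutable uid is dropped
```

Whether to drop disallowed fields silently (option 2 in Jim's list) or reject the request with an error (option 1) is exactly the design choice the thread is debating; the filter itself is the same either way.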
I worked on one project where the OPTIONS call returned documentation for that URI. This document detailed the methods, accept-types (GET), content-types (POST and PUT), and any other details.It was a small system, but the OPTIONS screens took a good deal of effort to keep up. I thought some form of automation of responses to OPTIONS would work, but we never got around to doing it. Internally I have used some additional headers on the OPTIONS method to help keep track of content-types: http://www.amundsen.com/blog/archives/716 <http://www.amundsen.com/blog/archives/716> mca http://amundsen.com/blog/ On Tue, Jul 7, 2009 at 15:28, Jim Edwards-Hewitt <jimeh@...> wrote: > > > I'll certainly admit to my thinking being influenced by past schema-driven > projects. > > I suppose I've also been thinking of GET/PUT formats being the same as a > way of communicating the expected format to the client. (Though obviously > that isn't the case when form-encoded input is accepted.) Is there any > standard method or common convention for telling the client what media types > and formats are accepted for a PUT request (other than forms)? The HTTP > standard seems a bit thin on the subject of PUT media types, compared to > GET. > > -- Jim > > > Subbu Allamaraju wrote: > > I agree with the inefficiency (it is an inconvenience, to be accurate) > part. That is why, there is no need to require clients to supply the > immutable parts. The "supply everything" requirement usually stems from > XML-schema driven applications, which is unnecessary. > > Subbu > > On Jul 6, 2009, at 10:28 AM, Sam Johnston wrote: > > On Sun, Jul 5, 2009 at 1:03 AM, Subbu Allamaraju <subbu@...><subbu@...>wrote: > > Representations in a request (e.g. PUT or POST) and representations in a > response (e.g. GET or a PUT) need not be absolutely the same. HTTP servers > are not databases that blindly store the data. Using PUT to update the > mutable parts of a resource is perfectly okay. 
At least, I don't see > anything that breaks HTTP. > > > The way I read the RFC is "*The PUT method requests that the enclosed > entity > be stored [as is] under the supplied Request-URI*", which is obvious for > "simple" media types like images where anything else doesn't really make > sense. While it does go on to talk about partial updates (mentioning the > Content-Range header), PUTting a resource in its entirity knowing that > immutable parts will be ignored and/or trigger errors seems neither > efficient nor clean to me. > > Sam > > On Jul 2, 2009, at 8:01 AM, Sam Johnston wrote: > > > > > Jim, > > Typically you would express the overall writeability of a resource via > OPTIONS (e.g. if you can only GET it's read only), but if you've got, say, > a > template driven website and you only want the body to be updated then > that's > something different. > > I would almost certainly NOT be using PUT for this, rather accepting POSTs > of just the midifiable data (perhaps in HTML forms or some XML-based > format). If you were to use XML then a GET (with the appropriate Accept: > header) could return just the parts which are modifiable by the client. > Optionally you could add information to the URL about whether the client > wants just the writeable elements or the whole lot, or even markup the > elements as writable (or not). > > Hope that helps, > > Sam > > > On Wed, Jul 1, 2009 at 9:42 PM, Jim Edwards-Hewitt <jimeh@... > > wrote: > > > > Hi, everyone, > > I'm a newbie here (though not to REST in general), and the list archives > have been a great help in clarifying my understanding of a lot of REST > concepts and suggesting good design elements. I have one part of my design > right now where I'm unsure what a good RESTful approach would be. > > I have resources that support GET and PUT, but contain some parts that > clients are not allowed to modify. 
(This doesn't seem like an uncommon > case; > I would think that navigation links, for example, would typically not be > modifiable in a PUT.) So is it better to: > > 1. Require clients to submit all the read-only parts unmodified in a PUT, > and respond with an error code if they are absent or altered? > 2. Take advantage of the leniency allowed in a server's implementation of > PUT to ignore the read-only elements (or their absence)? > 3. Separate read-only elements into a sub-resource that only supports GET? > (This may not be feasible for resources which must be created as a whole.) > > or something else? > > Second, there are some elements that are modifiable or not depending on > the privileges held by the (authenticated) user. I would think this would > be > expressed by a difference in the representation returned to the client, but > > what should that difference be? (My representations are XML documents, if > there isn't a more general solution.) > > And in a broader sense, I'd like the client to know which elements of the > resource the user can modify, for presentation purposes. Is there a > generally accepted way to do this, perhaps with form templates or XForms? > > I'd be interested in any comments or alternative approaches, if I'm just > looking at it from the wrong angle. > > Thanks, > > -- Jim
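Option 2 from Jim's list (and Subbu's "no need to require clients to supply the immutable parts" point) can be sketched roughly as a server-side merge on PUT. This is only an illustration; the field names and the dict-based storage are invented for the example, not anything from the thread.

```python
# A hypothetical sketch of option 2: a PUT handler that applies only the
# mutable fields of an incoming representation and silently ignores
# read-only ones, using the leniency HTTP allows servers in handling PUT.
# Field names here are invented for illustration.

MUTABLE_FIELDS = {"name", "email"}       # fields clients may change
READ_ONLY_FIELDS = {"id", "created_at"}  # server-owned fields

def apply_put(stored, incoming):
    """Merge an incoming PUT representation into the stored resource.

    Read-only (and unknown) fields in `incoming` are dropped rather than
    rejected with an error.
    """
    updated = dict(stored)
    for field, value in incoming.items():
        if field in MUTABLE_FIELDS:
            updated[field] = value
        # read-only and unrecognised fields fall through untouched
    return updated

stored = {"id": 42, "created_at": "2009-07-01", "name": "old", "email": "a@b"}
incoming = {"id": 99, "name": "new"}  # client tried to change the id
print(apply_put(stored, incoming))    # the id stays 42; only "name" changes
```

A server taking option 1 instead would raise an error whenever a read-only field in `incoming` differs from the stored value, rather than dropping it.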
I'm curious as to others' experience in wrapping data with hyperlinks.
In my case, I am to write a gateway that provides access to documents
of an XML schema that does not have a facility for linking. Here's an
example document:
<shipping_manifest id="9883">
<destination>
<customer id="82"><name>a name</name><address>an address</address></customer>
</destination>
<shipped_items>
<item id="102"><description>FlashLight FL-100</description></item>
<item id="382"><description>Lantern LA-221</description></item>
</shipped_items>
</shipping_manifest>
I do not want and am not allowed to change the schema because it may
break existing clients.
So I decided to create a new schema that wraps around the existing
one. It will allow links to be specified for any element in the
wrapped document. Example below.
How would you approach this problem?
YS
<hyperlinked_document>
<wrapped_document>
<shipping_manifest id="9883">
<destination>
<customer id="82"><name>a name</name><address>an address</address></customer>
</destination>
<shipped_items>
<item id="102"><description>FlashLight FL-100</description></item>
<item id="382"><description>Lantern LA-221</description></item>
</shipped_items>
</shipping_manifest>
</wrapped_document>
<links xmlns:xlink="http://www.w3.org/1999/xlink">
<link xpath="/hyperlinked_document/wrapped_document/shipping_manifest[@id='9883']"
xlink:title="Current Document"
xlink:type="simple"
xlink:role="http://gateway/linkprops/self"
xlink:href="http://gateway/wrapped/shipping_manifest/9883"/>
<link xpath="/hyperlinked_document/wrapped_document/shipping_manifest[@id='9883']"
xlink:title="Shipping Manifest #9883"
xlink:type="simple"
xlink:role="http://gateway/linkprops/shipping_manifest"
xlink:href="http://gateway/wrapped/shipping_manifest/9883"/>
<link xpath="/hyperlinked_document/wrapped_document/shipping_manifest[@id='9883']/destination/customer[@id='82']"
xlink:title="customer #82"
xlink:type="simple"
xlink:role="http://gateway/linkprops/customer"
xlink:href="http://gateway/wrapped/customer/82"/>
<link xpath="/hyperlinked_document/wrapped_document/shipping_manifest[@id='9883']/shipped_items/item[@id='102']"
xlink:title="item #102"
xlink:type="simple"
xlink:role="http://gateway/linkprops/item"
xlink:href="http://gateway/wrapped/item/102"/>
<link xpath="/hyperlinked_document/wrapped_document/shipping_manifest[@id='9883']/shipped_items/item[@id='382']"
xlink:title="item #382"
xlink:type="simple"
xlink:role="http://gateway/linkprops/item"
xlink:href="http://gateway/wrapped/item/382"/>
</links>
</hyperlinked_document>
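One way to see whether the proposed wrapper is workable is to sketch the client side: walking the `<links>` section and resolving each `xpath` attribute against the wrapped document. The snippet below is only a rough illustration; the document is a trimmed version of the example above, the xlink attributes are omitted for brevity, and `resolve_links` is an invented helper, not part of any proposed gateway.

```python
# Sketch of a client resolving the xpath attributes of the proposed
# <links> wrapper against the wrapped document, using the limited XPath
# subset that xml.etree supports (child paths plus [@attr='value']
# predicates). Simplified: xlink:* attributes are left out.
import xml.etree.ElementTree as ET

DOC = """
<hyperlinked_document>
  <wrapped_document>
    <shipping_manifest id="9883">
      <shipped_items>
        <item id="102"><description>FlashLight FL-100</description></item>
      </shipped_items>
    </shipping_manifest>
  </wrapped_document>
  <links>
    <link xpath="/hyperlinked_document/wrapped_document/shipping_manifest[@id='9883']/shipped_items/item[@id='102']"
          href="http://gateway/wrapped/item/102"/>
  </links>
</hyperlinked_document>
"""

def resolve_links(xml_text):
    root = ET.fromstring(xml_text)
    resolved = {}
    for link in root.find("links"):
        # ElementTree paths are relative to the root element, so strip
        # the leading "/hyperlinked_document/" from the absolute xpath.
        rel_path = link.get("xpath").removeprefix("/hyperlinked_document/")
        target = root.find(rel_path)
        if target is not None:
            resolved[target.get("id")] = link.get("href")
    return resolved

print(resolve_links(DOC))  # maps target element id -> link href
```

A full client would need a real XPath engine (e.g. lxml) once the expressions go beyond this subset, which is one cost of pointing at elements from outside rather than embedding links inline.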
Hi, Recently I came across a presentation (http://www.infoq.com/presentations/robinson-restful-enterprise) by Ian Robinson where he clarifies the role of media types and schema languages with the following statement: "media type for helping tune the hypermedia engine, schema for structure" (http://jim.webber.name/2008/11/23/61766710-3def-4dd9-9e36-b8d3147d14b1.aspx) i.e. the media type description includes the processing model that identifies hypermedia controls and defines what methods are applicable for the resources, while the structure of the representation is the responsibility of schema languages. This means that an application can have a single media type describing the hypermedia controls. However, Atom defines multiple media types. My questions are: Why does Atom require multiple media types? Shouldn't one media type describing the hypermedia controls suffice? When does it make sense to have multiple media types within a single application? Suresh
Hi Suresh It's not the best phrase, I admit... :) A media type value, as advertised in a Content-Type header or atom:content type attribute, is a key into a processing model. An XML namespace declaration is a key into a schema. Media type values are typically encountered by a client prior to its getting into the thick of a resource representation: to get to a namespace declaration, on the other hand, the client has to dig around in the representation. A decent (hyper)media processing model describes: How to identify hypermedia controls - links and forms - in representations belonging to that type Application protocol idioms - e.g. HTTP verbs, headers and status codes - that can be used to manipulate resources belonging to that media type One or more resource representation schemas A schema, as keyed by an XML namespace declaration, simply conveys structural information: it doesn't help the client understand how the server would prefer the representation interpreted and processed. There's a potential one-to-many relationship between a media type and its associated hypermedia controls: a given media type might, for example, define both links and forms. Atom defines only links. More generally, there's a whole bunch of potential many-to-many relationships between media types, specifications, schemas, resources, representation formats and namespaces. Atom defines one media type. AtomPub adds another two, one describing the processing model for category documents, another for service documents. (I started down the hypermedia processing model route some time ago after Mark Baker pointed Jim, Savas and myself at http://www.markbaker.ca/blog/2004/09/why-namespaces-dont-replace-media-types/ It's not Mark's fault if I've just confused the issue) Kind regards ian
This clarifies a lot. Thanks Ian. A decent (hyper) media processing model describes: > How to identify hypermedia controls - links and forms - in representations > belonging to that type Application protocol idioms - e.g. HTTP verbs, headers and status codes - > that can be used to manipulate resources belonging to that media type One or more resource representation schemas Do you know of any media type definition that covers all the above three points? There's a potential one-to-many relationship between a media type and its > associated hypermedia controls: a given media type might, for example, > define both links and forms. Atom defines only links. More generally, > there's a whole bunch of potential many-to-many relationships between media > types, specifications, schemas, resources, representation formats and > namespaces. > Atom defines one media type. AtomPub adds another two, one describing the > processing model for category documents, another for service documents. Why does AtomPub require two media types when there could have been a single media type that did all the three points mentioned above? i.e. a decent hypermedia processing model contained in a single media type. Best Regards, Suresh On Wed, Jul 15, 2009 at 12:38 AM, is_robinson <iansrobinson@...> wrote: > > > Hi Suresh > > It's not the best phrase, I admit... :) > > A media type value, as advertised in a Content-Type header or atom:content > type attribute, is a key into a processing model. An XML namespace > declaration is a key into a schema. Media type values are typically > encountered by a client prior to its getting into the thick of a resource > representation: to get to a namespace declaration, on the other hand, the > client has to dig around in the representation. > > A decent (hyper) media processing model describes: > > How to identify hypermedia controls - links and forms - in representations > belonging to that type > Application protocol idioms - e.g. 
HTTP verbs, headers and status codes - > that can be used to manipulate resources belonging to that media type > One or more resource representation schemas > > A schema, as keyed by an XML namespace declaration, simply conveys > structural information: it doesn't help the client understand how the server > would prefer the representation interpreted and processed. > > There's a potential one-to-many relationship between a media type and its > associated hypermedia controls: a given media type might, for example, > define both links and forms. Atom defines only links. More generally, > there's a whole bunch of potential many-to-many relationships between media > types, specifications, schemas, resources, representation formats and > namespaces. > > Atom defines one media type. AtomPub adds another two, one describing the > processing model for category documents, another for service documents. > > (I started down the hypermedia processing model route some time ago after > Mark Baker pointed Jim, Savas and myself at > http://www.markbaker.ca/blog/2004/09/why-namespaces-dont-replace-media-types/ It's not Mark's fault if I've just confused the issue) > > Kind regards > > ian > > > -- When the facts change, I change my mind. What do you do, sir?
> > A decent (hyper) media processing model describes: > > >> How to identify hypermedia controls - links and forms - in representations >> belonging to that type > > Application protocol idioms - e.g. HTTP verbs, headers and status codes - >> that can be used to manipulate resources belonging to that media type > > One or more resource representation schemas > > > Do you know of any media type definition that covers all the above three > points? > I think Atom/AtomPub does a great job in this regard: Q. What do links look like? A. atom:link Q. What idioms ought a client use to manipulate resources? A. To create a member, POST to a collection, and expect 201 Created in response. To modify a member, PUT or DELETE to its member URI (atom:link with rel value of "edit"), expect 200 OK. Use entity tags and the conditional VERB idiom to protect against the lost update problem. Etc. Q. How ought a client format a representation? A. Atom/AtomPub provides RELAX NG schemas and non-normative examples. > Why does AtomPub require two media types when there could have been a > single media type that did all the three points mentioned above? i.e. a > decent hypermedia processing model contained in a single media type. > I don't know the answer to that question. Anybody? Kind regards ian
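The "conditional VERB idiom" Ian mentions for the lost-update problem can be illustrated with a small sketch. This is not AtomPub code; the integer ETag and dict-shaped resource are simplifications invented for the example.

```python
# Illustrative sketch of conditional PUT: the server honours the request
# only if the client's If-Match entity tag still matches the resource's
# current ETag, so an update based on stale state cannot silently clobber
# someone else's change. Integer ETags are a simplification.

def conditional_put(resource, if_match, new_body):
    """Return (status_code, resource) for a conditional PUT."""
    if if_match != resource["etag"]:
        # someone else updated the resource since this client fetched it
        return 412, resource  # 412 Precondition Failed
    updated = {"body": new_body, "etag": resource["etag"] + 1}
    return 200, updated

resource = {"body": "v1", "etag": 1}

status, resource = conditional_put(resource, if_match=1, new_body="v2")
print(status)  # 200: the etag matched, so the update was applied

status, _ = conditional_put(resource, if_match=1, new_body="v3")
print(status)  # 412: the etag is now stale, so the update is rejected
```

On a 412 the client would GET the resource again, reapply its change to the fresh representation, and retry with the new entity tag.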
On Jul 15, 2009, at 2:58 AM, Ian Robinson wrote: > > Why does AtomPub require two media types when there could have been > a single media type that did all the three points mentioned above? > i.e a decent hyper media processing model contained in a single > media type. > > I don't know the answer to that question. Anybody? Because those are of "different" media "types". Subbu
[about multiple media types being defined in Atom] I don't know the answer to that question. Anybody? I'd assume because it was deemed necessary to do discovery with conneg, maybe? A quick search on the mailing list doesn't trigger any useful result on the matter. That said, the addition of the media type attribute type=item would in effect mean the creation of an additional media type to those you mentioned :) Seb
On Wed, Jul 15, 2009 at 6:29 AM, Subbu Allamaraju<subbu@...> wrote:
>
>
>
> On Jul 15, 2009, at 2:58 AM, Ian Robinson wrote:
>
>>
>> Why does AtomPub require two media types when there could have been
>> a single media type that did all the three points mentioned above?
>> i.e a decent hyper media processing model contained in a single
>> media type.
>>
>> I don't know the answer to that question. Anybody?
>
> Because those are of "different" media "types".
>
Yep.
The original Atom Syndication Format spec (RFC 4287, Section 2) had this to say:
"Both kinds of Atom Documents are specified in terms of the
XML Information set, serialized as XML 1.0 [W3C.REC-xml-20040204]
and identified with the "application/atom+xml" media type."
where "both kinds" refers to the feed and entry representations. By
the time we get to the Atom Publishing Protocol spec (RFC 5023), we
see the situation changing a bit:
* Arbitrary media types are used for Media Resources.
* Category Document media type is "application/atomcat+xml".
* Service Document media type is "application/atomsvc+xml".
* A whole bunch of conneg based on media types (such as the accept element).
* Ability to use "application/atom+xml;type=entry" to distinguish
an entry from a feed.
The last one is interesting ... the way I read the history is that the
Atom folks figured out from real world use that a single media type
doesn't always give you enough to go on when doing content
negotiation.
I tend to take this lesson to heart when I design REST APIs, and use
different media type values for different media types :-).
Craig
> Subbu
>
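Craig's point about `application/atom+xml;type=entry` implies that a client has to split the media type value from its parameters before deciding how to process a response. A small sketch of one way to do that with the standard library's MIME header parsing (the helper name is invented):

```python
# Splitting a Content-Type value into the media type proper and its
# parameters, so a client can tell an Atom entry from a feed by the
# type parameter rather than by sniffing the body.
from email.message import Message

def parse_media_type(value):
    msg = Message()
    msg["Content-Type"] = value
    # get_params() returns [(media_type, ''), (param, value), ...]
    return msg.get_content_type(), dict(msg.get_params()[1:])

media_type, params = parse_media_type("application/atom+xml;type=entry")
print(media_type)          # application/atom+xml
print(params.get("type"))  # entry
```

Without the parameter, a client dispatching purely on `application/atom+xml` cannot tell which kind of document it is about to receive, which is the gap the Atom folks ran into.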
Hi, Just to say Ian/Jim are doing a session on Monday (20th), to follow on from the one with Alan Dean. We use Live Meeting for these sessions and the link is: http://snipr.com/virtualaltnet. The timing information is: In France/Germany/Belgium: 8:00PM In UK is: 7:00PM EST in the US is: 2:00PM PST in the US is: 11:00AM Ta, Colin
Are state transitions in HTTP simply the URI/verb combination, or the entirety of the message? I prefer the latter. Perhaps if browsers took this approach, serving multiple representations from one URI by conneg could work without breaking bookmarks and 'page' refreshes? - Mike
Is a state transition adequately defined by: GET /resource or is it more appropriate to include the entirety of the message? i.e.: GET /resource Accept: application/pdf Accept-Language: en-us .... etc If browsers treated each state as the full HTTP message, bookmarks and page refreshes would not 'break'. As it stands, if a browser refreshed or bookmarked the latter state, the Accept header would revert back to default (text/html, etc..) because the only part of the state transition stored is the URI/Verb combination. - Mike Dhananjay Nene wrote: > Not sure if I am the only one .. but couldn't really understand the > question. Maybe you could describe the question by stating an example > of the choices ?
The problem comes from this: conneg served on the same URI should be used when the variation between the multiple representations is not important or substantial. If it's important enough that the returned representation be constant over time for you to send a link to that specific representation, that representation ought to be promoted to its own URI as a separate resource. So while conneg used for formatting dates may well be a valid use and does not necessitate a separate resource, I'm pretty sure a pdf / html or a jpg / gif will become quite problematic. > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Mike Kelly > Sent: 22 July 2009 14:10 > To: Dhananjay Nene; Rest List > Subject: Re: [rest-discuss] HTTP State Transitions > > Is a state transition adequately defined by: > > GET /resource > > or is it more appropriate to include the entire of the message? i.e.: > > GET /resource > Accept: application/pdf > Accept-Language: en-us > .... etc > > If browsers treated each state as the full HTTP message, bookmarks and > page refreshes would not 'break'. > > As it stands; if a browser refreshed or bookmarked the latter state, > the > Accept header would revert back to default (text/html, etc..) because > the only part of the state transition stored is the URI/Verb > combination. > > - Mike > > > Dhananjay Nene wrote: > > Not sure if I am the only one .. but couldn't really understand the > > question. Maybe you could describe the question by stating an example > > of the choices ? > > > > ------------------------------------ > > Yahoo! Groups Links > > >
But Accept: application/pdf has to do with the "representation" of the state of the resource, not with the state of the resource. Mike Kelly wrote: > > > Is a state transition adequately defined by: > > GET /resource > > or is it more appropriate to include the entire of the message? i.e.: > > GET /resource > Accept: application/pdf > Accept-Language: en-us > .... etc > > If browsers treated each state as the full HTTP message, bookmarks and > page refreshes would not 'break'. > > As it stands; if a browser refreshed or bookmarked the latter state, the > Accept header would revert back to default (text/html, etc..) because > the only part of the state transition stored is the URI/Verb combination. > > - Mike > > Dhananjay Nene wrote: > > Not sure if I am the only one .. but couldn't really understand the > > question. Maybe you could describe the question by stating an example > > of the choices ?
Hmm.. how about html/rss/atom/json then? If you get sent a link to /resource and don't want HTML - don't open it with a browser! :) Those kinds of problems could be solved with something equivalent to an 'Open With..' menu, which could make an OPTIONS request to the URI and list installed HTTP clients that accept any of the available Content-Types listed. The distinction between representations and resources in your suggested approach seems pretty blurred (assuming the ends of URIs are opaque, of course!). - Mike Sebastien Lambla wrote: > The problem comes from this: conneg served on the same URI should be used > when the variation between the multiple representations is not important or > substantial. > > If it's important enough that the returned representation be constant over > time for you to send a link to that specific representation, that > representation ought to be promoted to its own URI as a separate resource. > > So while conneg used for formatting dates may well be a valid use and does > not necessitate a separate resource, I'm pretty sure a pdf / html or a jpg / > gif will become quite problematic. > > >> Is a state transition adequately defined by: >> >> GET /resource >> >> or is it more appropriate to include the entire of the message? i.e.: >> >> GET /resource >> Accept: application/pdf >> Accept-Language: en-us >> .... etc >> >> If browsers treated each state as the full HTTP message, bookmarks and >> page refreshes would not 'break'. 
>> >> - Mike >> >> >> Dhananjay Nene wrote: >> >>> Not sure if I am the only one .. but couldn't really understand the >>> question. Maybe you could describe the question by stating an example >>> of the choices ? >>> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > >
I was referring to application state, rather than resource state. António Mota wrote: > But > > Accept: application/pdf > > has to do with the "representation" of the state of the resource, not to > the state of the resource. > > > > Mike Kelly wrote: > >> >> >> Is a state transition adequately defined by: >> >> GET /resource >> >> or is it more appropriate to include the entire of the message? i.e.: >> >> GET /resource >> Accept: application/pdf >> Accept-Language: en-us >> .... etc >> >> If browsers treated each state as the full HTTP message, bookmarks and >> page refreshes would not 'break'. >> >> As it stands; if a browser refreshed or bookmarked the latter state, the >> Accept header would revert back to default (text/html, etc..) because >> the only part of the state transition stored is the URI/Verb combination. >> >> - Mike >> >> Dhananjay Nene wrote: >> >>> Not sure if I am the only one .. but couldn't really understand the >>> question. Maybe you could describe the question by stating an example >>> of the choices ?
Well, let's put it this way. Provided the xml or json representations of a resource are semantically equivalent (aka contain the same state), I see no reason not to use conneg on those. The expectation of an automated user agent is not the same as the expectation of a user, and as such I don't see much benefit in pushing for separate URIs for different serializations of the same resource state. As for choosing multiple media types that are available, provided you promote the ones that should be linkable independently (and my definition of what is desirably independent is based on user expectation), we already have this, with a 300 response to a simple GET on a generic URI. Agent-driven conneg is perfectly acceptable but requires separate URIs for each notable representation. As for the difference between resource and representation, it can be blurry, as a resource may have multiple representations, and a representation may be a resource itself (aka have its own URI). The awww:resource is anything important enough to have a URI. When dereferencing such a URI, the response may be no representation (in which case it may be an http URI denoting a thing, for example as used in rdf), one representation (there is one representation sent to the client, based on conneg or current state of the resource), the resource could be the representation (in which case it's an IR, the resource is the representation, in the sense that all properties of the resource can be transferred in the representation), or a list of representations through a 300 (and then one cannot decipher the kind of resource we're talking about, but can access individual representations promoted as resources through the URIs linked in the response). It's not blurred as such, it's the current landscape of what resources are and what a URI denotes on the web. Seb > -----Original Message----- > From: Mike Kelly [mailto:mike@...] 
> Sent: 22 July 2009 14:41 > To: Sebastien Lambla > Cc: 'Dhananjay Nene'; 'Rest List' > Subject: Re: [rest-discuss] HTTP State Transitions > > Hmm.. how about html/rss/atom/json then? > > If you get sent a link to /resource and don't want HTML - don't open it > with a browser! :) > > Those kinds of problems could be solved with something equivalent to an > 'Open With..' menu, which could makes an OPTIONS request to the URI and > list installed HTTP clients that accept any of the available > Content-Type's listed. > > The distinction between representations and resources in your suggested > approach seems pretty blurred (assuming the ends of URIs are opaque, of > course!). > > - Mike > > > Sebastien Lambla wrote: > > The problem comes from this: conneg served on the same URI should be > used > > when the variation between the multiple representations is not > important or > > substantial. > > > > If it's important enough that the returned representation be constant > over > > time for you to send a link to that specific representation, that > > representation ought to be promoted to its own URI as a separate > resource. > > > > So while conneg used for formatting dates may well be a valid use and > does > > not necessitate a separate resource, I'm pretty sure a pdf / html or > a jpg / > > gif will become quite problematic. > > > > > >> -----Original Message----- > >> From: rest-discuss@yahoogroups.com [mailto:rest- > >> discuss@yahoogroups.com] On Behalf Of Mike Kelly > >> Sent: 22 July 2009 14:10 > >> To: Dhananjay Nene; Rest List > >> Subject: Re: [rest-discuss] HTTP State Transitions > >> > >> Is a state transition adequately defined by: > >> > >> GET /resource > >> > >> or is it more appropriate to include the entire of the message? > i.e.: > >> > >> GET /resource > >> Accept: application/pdf > >> Accept-Language: en-us > >> .... etc > >> > >> If browsers treated each state as the full HTTP message, bookmarks > and > >> page refreshes would not 'break'. 
> >> > >> As it stands; if a browser refreshed or bookmarked the latter state, > >> the > >> Accept header would revert back to default (text/html, etc..) > because > >> the only part of the state transition stored is the URI/Verb > >> combination. > >> > >> - Mike > >> > >> > >> Dhananjay Nene wrote: > >> > >>> Not sure if I am the only one .. but couldn't really understand the > >>> question. Maybe you could describe the question by stating an > example > >>> of the choices ? > >>> > >> > >> ------------------------------------ > >> > >> Yahoo! Groups Links > >> > >> > >> > > > > > >
I don't understand then. Application state resides on the client, so you want to bookmark that? I thought bookmarks point to resources, not to something that resides on the client. Mike Kelly wrote: > I was referring to application state, rather than resource state > > António Mota wrote: >> But >> >> Accept: application/pdf >> >> has to do with the "representation" of the state of the resource, not >> to the state of the resource. >> >> >> >> Mike Kelly wrote: >> >>> >>> >>> Is a state transition adequately defined by: >>> >>> GET /resource >>> >>> or is it more appropriate to include the entire of the message? i.e.: >>> >>> GET /resource >>> Accept: application/pdf >>> Accept-Language: en-us >>> .... etc >>> >>> If browsers treated each state as the full HTTP message, bookmarks and >>> page refreshes would not 'break'. >>> >>> As it stands; if a browser refreshed or bookmarked the latter state, >>> the >>> Accept header would revert back to default (text/html, etc..) because >>> the only part of the state transition stored is the URI/Verb >>> combination. >>> >>> - Mike >>> >>> Dhananjay Nene wrote: >>> >>>> Not sure if I am the only one .. but couldn't really understand the >>>> question. Maybe you could describe the question by stating an example >>>> of the choices ?
Sebastien Lambla wrote: > Well, let's put it this way. Provided the xml or json representation of a > resource are semantically equivalent (aka contain the same state), I see no > reason not to use conneg on those. The expectation of an automated user > agent is not the same as the expectation of a user, and as such I don't see > much benefit in pushing for separate URIs for different serializations of > the same resource state. > > As for choosing multiple media types that are available, provided you > promote the ones that should be linkable independently (and my definition of > what is desirably independent is based on user expectation), we already have > this, with a 300 response to a simple GET on a generic URI. Agent-driven > conneg is perfectly acceptable but requires separate URIs for each notable > representation. > > As for the difference between resource and representation, it can be blurry, > as a resource may have multiple representations, and a representation may be > a resource itself (aka have its own URI). > > The awww:resource is anything important enough to have a URI. When > dereferencing such URI, the response may be no representation (in which case > it may be an http URI denoting a thing, for example as used in rdf), one > representation (there is one representation sent to the client, based on > Conneg or current state of the resource), the resource could be the > representation (in which case it's an IR, the resource is the > representation, in the sense that all properties of the resource can be > transferred in the representation), or a list of representations through a > 300 (and then one cannot decipher the kind of resource we're talking about, > but can access individual representations promoted as resources through the > uris linked in the response). > > It's not blurred as such, it's the current landscape of what resources are > and what URI denotes on the web. 
We've discussed this already elsewhere - I understand why you might feel the need to give a representation a URI. As you know, I am of the opinion that the costs of taking that approach (complications in cache-invalidation/other intermediary mechanisms, degraded uniformity, convoluted messages with Link headers) outweigh the benefit (being able to exchange a plain-text hyperlink directly to a specific representation). We're missing the point a bit here; the purpose of this thread was to establish whether or not a state transition is defined by the whole message or just the URI/Verb. If the conclusion is the latter, then there is a far stronger case for URI 'conneg'. If it's the former, the case is not necessarily stronger for HTTP conneg - but parties such as browser vendors and WHATWG should be encouraged to provide the mechanisms necessary to pursue this alternative, non-conflicting, approach to content negotiation. - Mike
Hello all,
I'm new to this group. Posted this question on the DDD group but someone pointed me here. Now who would think that there would be a whole group focusing on REST :)
Right!
Every once in a while I take another look at something just to refresh / improve my understanding.
So I had another good look at REST, on Wikipedia.
So now I'm thinking my idea of REST was somewhat wonky. Using the correct way, the following happens:
http://domain/cars --> returns ALL cars
http://domain/cars/abc123 --> returns car details with id abc123
so the 'format' is domain/{collection}/{id}
Now if there are 40 gazillion cars it would be silly returning the whole lot.
So there is no behaviour built in other than the HTTP GET, POST, PUT, DELETE; not much behaviour at all.
When looking at MVC and what can be done with routing it makes more sense (to me, anyway):
http://domain/{aggregate}/{action}/{id}
so:
http://domain/cars/find --> post a search request
http://domain/cars/list --> returns SEARCH result
http://domain/cars/new --> starts a new car registration
http://domain/cars/create ---> post a create request
http://domain/cars/show/abc123 --> returns car details with id abc123
http://domain/cars/declareunroadworthy/abc123 --> car with id abc123 set to unroadworthy
Is this still REST? There may be session state on the server.
Maybe this is Representational Intent or something to that effect.
Ideas?
Regards,
Eben
> We're missing the point a bit here; the purpose of this thread was to > establish whether or not a state transition is defined by the whole > message or just the URI/Verb, if the conclusion is the latter - then > there is a far stronger case for URI 'conneg'. If it's the former, the > case is not necessarily stronger for HTTP conneg - but parties such as > browser vendors and WHATWG should be encouraged to provide the > mechanisms necessary to pursue this alternative, non-conflicting, > approach to content negotiation. As far as my understanding of your question goes, you're trying to define an identifier of a representation as URI of resource, plus verb, plus whatever message headers impact on the selection of the representation. I make the argument that one shouldn't try to define such an identifier. The identifier of a resource is enough, and if it is the case that you want to identify a specific representation, then making that representation a resource ought to be enough. The state transition is the result of dereferencing such identifier, and is dependent on the state of the application. As such, it is of course dependent on the current state the client has, if any, and the current state the server holds. I just don't see what the conneg of an entity body has to do with the state transition, I see them at different levels in the http layers, and I certainly have modelled it that way in my framework. Maybe I just don't get what problem you're trying to solve here. Seb
Sebastien Lambla wrote:
>> We're missing the point a bit here; the purpose of this thread was to
>> establish whether or not a state transition is defined by the whole
>> message or just the URI/Verb, if the conclusion is the latter - then
>> there is a far stronger case for URI 'conneg'. If it's the former, the
>> case is not necessarily stronger for HTTP conneg - but parties such as
>> browser vendors and WHATWG should be encouraged to provide the
>> mechanisms necessary to pursue this alternative, non-conflicting,
>> approach to content negotiation.
>
> As far as my understanding of your question goes, you're trying to define an
> identifier of a representation as URI of resource, plus verb, plus whatever
> message headers impact on the selection of the representation.
>
> I make the argument that one shouldn't try to define such an identifier. The
> identifier of a resource is enough, and if it is the case that you want to
> identify a specific representation, then making that representation a
> resource ought to be enough.
>
> The state transition is the result of dereferencing such identifier, and is
> dependent on the state of the application. As such, it is of course
> dependent on the current state the client has, if any, and the current state
> the server holds.
>
> I just don't see what the conneg of an entity body has to do with the state
> transition, I see them at different levels in the http layers, and I
> certainly have modelled it that way in my framework.
>
> Maybe I just don't get what problem you're trying to solve here.

I could be wrong - but I was under the impression that a hyperlink can be more than just a URI.
On Wed, Jul 22, 2009 at 6:39 PM, Mike Kelly <mike@...> wrote:
> Is a state transition adequately defined by:
>
> GET /resource
>
> or is it more appropriate to include the entirety of the message? i.e.:
>
> GET /resource
> Accept: application/pdf
> Accept-Language: en-us
> .... etc
>
> If browsers treated each state as the full HTTP message, bookmarks and page
> refreshes would not 'break'.
>
> As it stands; if a browser refreshed or bookmarked the latter state, the
> Accept header would revert back to default (text/html, etc..) because the
> only part of the state transition stored is the URI/Verb combination.
>
> - Mike
>
> Dhananjay Nene wrote:
>> Not sure if I am the only one .. but couldn't really understand the
>> question. Maybe you could describe the question by stating an example of
>> the choices ?

Mike,

Can we treat this as two different questions?

a) How is a state adequately defined? A URI such as GET /resource adequately defines the application state. The HTTP headers / metadata have no implication on the application state. Whether you choose to view a document as PDF or HTML or perhaps even as a PNG has no bearing on the application. The application state is always the same for the same URI irrespective of the content type.

b) How is a bookmark adequately defined? This is really a question for the author of a browser to answer. If we had a browser which, say, allowed you to also specify the accept headers before requesting the URI, then such a browser's bookmark perhaps could contain the accept header. However we don't have such browsers (I haven't seen one at least). But it is likely that programmatic clients may want to store such URIs for, say, resuming later. Such a client is again unlikely to 'break' if it is able to accept and parse various content types with equal capability, or works with only one content type which it always specifically requests. However it could break if in a particular situation it has requested a particular resource with non-default accept headers and that is specifically needed to resume further. In such a situation, perhaps the bookmark could store the accept header. But even in this case the accept headers imo are not an attribute of the application state - they are an application of the conversation state (though I am open to being challenged since I myself am not so terribly convinced about it).

Dhananjay

--
--------------------------------------------------------
blog: http://blog.dhananjaynene.com
twitter: http://twitter.com/dnene
> I could be wrong - but I was under the impression that a hyperlink can
> be more than just a URI.

As I feared, you've lost me. I'd like to separate at this stage the bookmark, which is defined as the identifier of a resource (in the case of HTTP, a URI), and the hypermedia control, which is the dereferencing of such a URI using an adequate verb and whatever message headers and client state the user agent sees fit to add.

As such, I think your original question deals with what identifies a bookmark. It's my belief that the state transition depends on the client state, and as such is a different context than the simple dereferencing of the identifier of a resource.

My previous comments still hold true, in both cases.

Seb
Sebastien Lambla wrote:
>> We're missing the point a bit here; the purpose of this thread was to
>> establish whether or not a state transition is defined by the whole
>> message or just the URI/Verb, if the conclusion is the latter - then
>> there is a far stronger case for URI 'conneg'. If it's the former, the
>> case is not necessarily stronger for HTTP conneg - but parties such as
>> browser vendors and WHATWG should be encouraged to provide the
>> mechanisms necessary to pursue this alternative, non-conflicting,
>> approach to content negotiation.
>
> As far as my understanding of your question goes, you're trying to define an
> identifier of a representation as URI of resource, plus verb, plus whatever
> message headers impact on the selection of the representation.

Not an 'identifier', as such; just the full state transition (including control data) required to select a given representation.

> I make the argument that one shouldn't try to define such an identifier. The
> identifier of a resource is enough, and if it is the case that you want to
> identify a specific representation, then making that representation a
> resource ought to be enough.

I understand that a representation can be 'made a resource', although the phrase 'treated as if it were' would be more appropriate. It's slightly confusing when you state that an identifier to a resource is 'enough', and then immediately contradict this position by entertaining 'the case that you want to identify a specific representation'. So it's hard, from that, to make sense of whether or not treating a representation as a resource 'ought to be enough'. This is particularly apparent given, as I mentioned before, that the only significant benefit from doing this is that you get *plain text* hyperlinks to your representations - the value of this is questionable if an identifier to a resource is 'enough' in the first place.
> The state transition is the result of dereferencing such identifier, and is
> dependent on the state of the application. As such, it is of course
> dependent on the current state the client has, if any, and the current state
> the server holds.

If it is possible for a hyperlink to include control data, then a hypermedia-driven state transition is more than simply dereferencing an href URI.

> I just don't see what the conneg of an entity body has to do with the state
> transition, I see them at different levels in the http layers, and I
> certainly have modelled it that way in my framework.

*Representational* State Transfer (?)

- Mike
Hello Eben,
What you describe isn't RESTful and more generally isn't in line with the way the Web works. In particular you should keep in mind that URIs identify things, not actions. You might want to read a good tutorial or book on REST. You'll see that what you describe can be achieved using the HTTP methods (GET, PUT, ...).
Philippe Mougin
--- In rest-discuss@yahoogroups.com, "Eben Roux" <eben.roux@...> wrote:
>
> Hello all,
>
> I'm new to this group. Posted this question on the DDD group but someone pointed me here. Now who would think that there would be a whole group focusing on REST :)
>
> Right!
>
> Every once-in-a-while a take another look at something just to refresh / improve my understanding.
>
> So I had another good look at REST --- on wikipedia.
>
> So now I'm thinking I had my idea of REST somewhat wonky. Done the correct way, the following happens:
>
> http://domain/cars --> returns ALL cars
> http://domain/cars/abc123 --> returns car details with id abc123
>
> so the 'format' is domain/{collection}/{id}
>
> Now if there are 40 gazillion cars it would be silly returning the whole lot.
>
> So there is no behaviour built in other than the HTTP GET, POST, PUT, DELETE; not much behaviour at all.
>
> When looking at MVC and what can be done with routing it makes more sense (to me, anyway):
>
> http://domain/{aggregate}/{action}/{id}
>
> so:
>
> http://domain/cars/find --> post a search request
> http://domain/cars/list --> returns SEARCH result
> http://domain/cars/new --> starts a new car registration
> http://domain/cars/create --> post a create request
> http://domain/cars/show/abc123 --> returns car details with id abc123
> http://domain/cars/declareunroadworthy/abc123 --> car with id abc123 set to unroadworthy
>
> Is this still REST? There may be session state on the server.
>
> Maybe this is Representational Intent or something to that effect.
>
> Ideas?
>
> Regards,
> Eben
>
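Philippe's advice above can be made concrete with a rough sketch: the action-style URIs collapse into noun URIs, and the verb moves out of the URI and into the HTTP method. This is an illustration only; the resource names below (in particular the `roadworthiness` sub-resource) are assumptions, not a prescribed design.

```python
def restful_request(action, car_id=None):
    """Map an action-style operation onto an HTTP method plus a noun URI.

    Illustrative sketch: the point is that URIs identify things, while
    the HTTP method carries the action.
    """
    if action == "list":
        return ("GET", "/cars")                        # the collection itself
    if action == "search":
        return ("GET", "/cars?status=forsale")         # search = GET + query params
    if action == "create":
        return ("POST", "/cars")                       # POST to the collection
    if action == "show":
        return ("GET", "/cars/%s" % car_id)            # one member of the collection
    if action == "declare_unroadworthy":
        # a state change becomes an update of a (hypothetical) sub-resource
        return ("PUT", "/cars/%s/roadworthiness" % car_id)
    raise ValueError("unknown action: %r" % action)
```

Note that nothing verb-like remains in any URI, and pagination of the "40 gazillion cars" case would fall naturally out of query parameters on the collection URI.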
I want to implement a "Wizard" type web form.

My first thought is to use Cookies to track the parts of the form and assure that the second, third, ..nth pages that are submitted match the first.

However, as I think about it more, I think that maybe I should have a web app that uses several RESTful resources but doesn't require them to know anything about each other? So perhaps page one is a PUT and subsequent ones are POSTs? And I'd use the resource created from the first page to update using subsequent pages.

Does that make sense? On the backend, I assume I'd insert the data from the first page into the database and update the records using subsequent pages.

Am I close? Does anyone have any pointers?

--
Greg Akins

http://www.pghcodingdojo.org
http://pittjug.dev.java.net
Greg:

Not sure if this is what you're asking, but I recently worked on a project that had a "Create" process involving multiple tabs on a form, including the ability to upload one or more files and annotate them. All of this needed to be completed in order to "Create" a valid record in the system.

What we decided to do was invent a "work in progress" (WIP) record that could hold all the data gathered from this multi-tab experience. This WIP record had almost no validation rules. It just accepted inputs, and whenever the user changed focus (moved between tabs, uploaded a file, etc.) it stored the data to the server.

Once the user is confident all the data was entered properly, they can press the "Create" button to send the entire state representation to the server to process. The server then does all the needed validation and responds accordingly. If all goes well, a new "official" record is created in the system and the user is notified of success.

Hope this gives you some ideas.

mca
http://amundsen.com/blog/

On Thu, Jul 23, 2009 at 14:55, Greg Akins <angrygreg@...> wrote:
> I want to implement a "Wizard" type web form.
>
> My first thought is to use Cookies to track the parts of the form and
> assure that the second, third, ..nth pages that are submitted match
> the first.
>
> However, as I think about it more, I think that maybe I should have a
> web app that uses several restful resources but doesn't require them
> to know anything about each other?
>
> So perhaps page one is a PUT and subsequent ones are POSTs? And I'd
> use the resource created from the first page to update using
> subsequent pages.
>
> Does that make sense? On the backend, I assume I'd insert the data
> from the first page into the database and update the records using
> subsequent pages.
>
> Am I close? Does anyone have any pointers?
>
> --
> Greg Akins
>
> http://www.pghcodingdojo.org
> http://pittjug.dev.java.net
On Thu, Jul 23, 2009 at 2:46 PM, mike amundsen <mamund@...> wrote:
> What we decided to do was invent "work in progress" (WIP) record that could
> hold all the data gathered from this multi-tab experience. This WIP record
> had almost no validation rules. It just accepted inputs and whenever the
> user changed focus (moved between tabs, uploaded a files, etc.) it stored
> the data to the server.
>
> Once the user is confident all the data was entered properly, they can press
> the "Create" button to send the entire state representation to the server to
> process. The server then does all the needed validation and responds
> accordingly. If all goes well, a new "official" record is created in the
> system and the user is notified of success.

Mike, you just described yet another instance of the provisional-final transaction pattern.
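As a rough illustration of the WIP pattern Mike describes (and of the provisional-final transaction pattern it instantiates), here is a minimal in-memory sketch. The resource names, required fields, and status strings are assumptions for illustration only, not the API from the project above.

```python
import itertools

class WipStore:
    """Sketch: lax /wip records that get validated only on final 'Create'."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.wip = {}        # /wip/{id} -> dict; accepts anything, no validation
        self.records = {}    # /records/{id} -> validated, "official" records

    def post_wip(self):
        """POST /wip -> start an empty work-in-progress record."""
        wip_id = next(self._ids)
        self.wip[wip_id] = {}
        return wip_id

    def put_wip(self, wip_id, fields):
        """PUT /wip/{id} -> store whatever partial data the client sends."""
        self.wip[wip_id].update(fields)

    def post_create(self, wip_id):
        """'Create' button: validate everything, then promote WIP to official."""
        data = self.wip[wip_id]
        missing = [f for f in ("name", "attachment") if f not in data]
        if missing:
            return ("409 Conflict", missing)   # validation happens only here
        rec_id = next(self._ids)
        self.records[rec_id] = self.wip.pop(wip_id)
        return ("201 Created", rec_id)
```

The key property is that each tab change is just a cheap, stateless update to the WIP resource, and the server only enforces business rules at the single "promote" step.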
G'day,

Code on demand is the optional constraint in REST. Roy's thesis has this to say in section 3.5.3:

"In the code-on-demand style [50], a client component has access to a set of resources, but not the know-how on how to process them. It sends a request to a remote server for the code representing that know-how, receives that code, and executes it locally."

Now, applets and in-page javascript fit this bill. They are an important way in which the client supports customization by the server as it navigates from one state to the next. However, I have seen plugins also claimed as being code on demand. My argument for them not being so is that the user explicitly downloads and "deploys" a plugin before it is useful.

I guess my basic argument is that if it has an installed presence within the client outside of the application state when the client is at rest, then it has stepped outside of code-on-demand and into a manual deployment model. However, I thought I would throw the question out to a wider audience:

Does the manual installation step of installing a plugin prevent it from being code-on-demand? Does a requirement to restart the browser prevent it from being code-on-demand? Would a requirement to download a thick client and use that for further access to the site still be an example of code-on-demand? Is it just software that depends on the real machine instead of a common virtual machine across the clients that is stepping outside of code on demand?

Where do the boundaries of this constraint lie?

Benjamin.
Dhananjay Nene wrote: > > > On Wed, Jul 22, 2009 at 6:39 PM, Mike Kelly <mike@... > <mailto:mike@...>> wrote: > > Is a state transition adequately defined by: > > GET /resource > > or is it more appropriate to include the entire of the message? i.e.: > > GET /resource > Accept: application/pdf > Accept-Language: en-us > .... etc > > If browsers treated each state as the full HTTP message, bookmarks > and page refreshes would not 'break'. > > As it stands; if a browser refreshed or bookmarked the latter > state, the Accept header would revert back to default (text/html, > etc..) because the only part of the state transition stored is the > URI/Verb combination. > > - Mike > > > > Dhananjay Nene wrote: > > Not sure if I am the only one .. but couldn't really > understand the question. Maybe you could describe the question > by stating an example of the choices ? > > > Mike, > > Can we treat this as two different questions ? > > a) How is a state adequately defined : A URI such as GET /resource > adequately defines the application state The http headers / metadata > have no implication on the application state. Whether you choose to > view a document in a PDF or HTML or perhaps even as a PNG has no > bearing on the application. The application state is always the same > for the same URI irrespective of the content type. > Hi Dhananjay, Why would application state always be the same for all representations of a resource? > b) How is a bookmark adequately defined ? This is really a question > for the author of a browser to answer. If we had a browser which say > allowed you to also describe the accept headers before making the URI, > then let us for a moment suggest that such a browser the bookmark > perhaps could contain the accept header. However we don't have such > browsers (I haven't seen one at least). But it is likely that > programmatic clients may want to choose to store such URIs for say > resuming later. 
Such a browser is again unlikely to 'break' if it is > able to accept and parse various content types with equal capability > or works with only one content type which it always specifically > requests. However it could break if in a particular situation it has > requested a particular resource with a non default accept headers and > that is specifically needed to resume further. In such a situation, > perhaps the bookmark could store the accept header. That is, essentially, what I was suggesting > But even in this case the accept headers imo are not an attribute of > the application state - they are an application of the conversation > state (though I am open to be challenged since I myself am not so > terribly convinced about it). I don't know if I understand what you mean by 'application of the conversation state' - but there is no room for a state between application and resource, given communication in RESTful systems is (supposed to be) stateless. Cheers, Mike
On Mon, Jul 27, 2009 at 2:29 PM, Mike Kelly <mike@...> wrote: > Dhananjay Nene wrote: > >> >> >> On Wed, Jul 22, 2009 at 6:39 PM, Mike Kelly <mike@... <mailto: >> mike@...>> wrote: >> >> Is a state transition adequately defined by: >> >> GET /resource >> >> or is it more appropriate to include the entire of the message? i.e.: >> >> GET /resource >> Accept: application/pdf >> Accept-Language: en-us >> .... etc >> >> If browsers treated each state as the full HTTP message, bookmarks >> and page refreshes would not 'break'. >> >> As it stands; if a browser refreshed or bookmarked the latter >> state, the Accept header would revert back to default (text/html, >> etc..) because the only part of the state transition stored is the >> URI/Verb combination. >> >> - Mike >> >> >> >> Dhananjay Nene wrote: >> >> Not sure if I am the only one .. but couldn't really >> understand the question. Maybe you could describe the question >> by stating an example of the choices ? >> >> >> Mike, >> >> Can we treat this as two different questions ? >> >> a) How is a state adequately defined : A URI such as GET /resource >> adequately defines the application state The http headers / metadata have >> no implication on the application state. Whether you choose to view a >> document in a PDF or HTML or perhaps even as a PNG has no bearing on the >> application. The application state is always the same for the same URI >> irrespective of the content type. >> >> > Hi Dhananjay, > > Why would application state always be the same for all representations of a > resource? An application state reflects where the user is in the overall workflow supported / managed by the application. It is therefore not dependent upon the representation. eg. if I am being shown an itinerary for my approval prior to a final booking, I am in the state of reviewing the itinerary - irrespective of whether the itinerary was being rendered as XHTML, JSON, XML, PDF or PNG. > > > b) How is a bookmark adequately defined ? 
This is really a question for >> the author of a browser to answer. If we had a browser which say allowed you >> to also describe the accept headers before making the URI, then let us for a >> moment suggest that such a browser the bookmark perhaps could contain the >> accept header. However we don't have such browsers (I haven't seen one at >> least). But it is likely that programmatic clients may want to choose to >> store such URIs for say resuming later. Such a browser is again unlikely to >> 'break' if it is able to accept and parse various content types with equal >> capability or works with only one content type which it always specifically >> requests. However it could break if in a particular situation it has >> requested a particular resource with a non default accept headers and that >> is specifically needed to resume further. In such a situation, perhaps the >> bookmark could store the accept header. >> > > That is, essentially, what I was suggesting Yes, thats what I had assumed you were referring to - I was on the other hand attempting to separate the notion of the bookmark and the application state - ie. they are not one and the same. > > > But even in this case the accept headers imo are not an attribute of the >> application state - they are an application of the conversation state >> (though I am open to be challenged since I myself am not so terribly >> convinced about it). >> > > I don't know if I understand what you mean by 'application of the > conversation state' - but there is no room for a state between application > and resource, given communication in RESTful systems is (supposed to be) > stateless. Sorry, made a mistake there - should've stated " the accept headers imo are not an attribute of the application state - they are an *attribute* of the conversation state". Though reading it again I feel a little lame. 
I should've been more specific and should've said "the accept headers imo are not an attribute of the application state - they are merely the influencers of a representation as a part of a given conversation" > > Cheers, > Mike > -- -------------------------------------------------------- blog: http://blog.dhananjaynene.com twitter: http://twitter.com/dnene
António Mota wrote:
> I don't understand then. Application state resides on the client, so
> you want to bookmark that? I thought bookmarks point to resources, not
> to something that resides on the client.

A bookmark is an application state a user would like to return to. Given that application state must be driven by hypermedia, a bookmark can be considered a client-side store representing a particular hyperlink (i.e. state transition). So my question is around whether or not it is feasible for a hypermedia format to include control data in its hyperlinks. In HTML one example of this could be:

<a href="/report" accept="application/pdf">Report (PDF)</a>
<a href="/report" accept="application/msexcel">Report (Excel)</a>
<a href="/report" accept="image/png">Report (PNG)</a>
<a href="/report">Report (Default HTML)</a>

I'm not sure adding this data to bookmarks would change the 'direction' they point, but it would obviously allow for a bookmark to point to a specific representation rather than just a resource - thus addressing the primary argument against using HTTP conneg.

Cheers,
Mike
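Mike's suggestion, a bookmark that captures control data alongside the URI, might be sketched like this. The `Bookmark` structure is entirely hypothetical (no browser stores bookmarks this way); the sketch just shows that storing the Accept header is enough to replay the full state transition rather than only the URI/Verb pair.

```python
from collections import namedtuple

# Hypothetical bookmark: the verb, the URI, and any control data (headers)
# that participated in selecting the representation.
Bookmark = namedtuple("Bookmark", ["method", "uri", "headers"])

def bookmark_from_link(href, accept=None):
    """Capture a hyperlink, plus any 'accept' hint it carries, as a bookmark."""
    headers = {"Accept": accept} if accept else {}
    return Bookmark("GET", href, headers)

def request_lines(bm):
    """Render the stored state transition as the HTTP request it would replay."""
    lines = ["%s %s HTTP/1.1" % (bm.method, bm.uri)]
    lines += ["%s: %s" % (k, v) for k, v in sorted(bm.headers.items())]
    return "\r\n".join(lines)
```

With this, bookmarking the "Report (PDF)" link above would replay `GET /report` with `Accept: application/pdf`, instead of reverting to the browser's default Accept header.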
Dhananjay Nene wrote: > > > On Mon, Jul 27, 2009 at 2:29 PM, Mike Kelly <mike@... > <mailto:mike@...>> wrote: > > > Hi Dhananjay, > > Why would application state always be the same for all > representations of a resource? > > > An application state reflects where the user is in the overall > workflow supported / managed by the application. It is therefore not > dependent upon the representation. eg. if I am being shown an > itinerary for my approval prior to a final booking, I am in the state > of reviewing the itinerary - irrespective of whether the itinerary was > being rendered as XHTML, JSON, XML, PDF or PNG. I'm not sure if I understand the distinction you make between application and resource state > > > > > b) How is a bookmark adequately defined ? This is really a > question for the author of a browser to answer. If we had a > browser which say allowed you to also describe the accept > headers before making the URI, then let us for a moment > suggest that such a browser the bookmark perhaps could contain > the accept header. However we don't have such browsers (I > haven't seen one at least). But it is likely that programmatic > clients may want to choose to store such URIs for say resuming > later. Such a browser is again unlikely to 'break' if it is > able to accept and parse various content types with equal > capability or works with only one content type which it always > specifically requests. However it could break if in a > particular situation it has requested a particular resource > with a non default accept headers and that is specifically > needed to resume further. In such a situation, perhaps the > bookmark could store the accept header. > > > That is, essentially, what I was suggesting > > > Yes, thats what I had assumed you were referring to - I was on the > other hand attempting to separate the notion of the bookmark and the > application state - ie. they are not one and the same. 
If a bookmark is not a stored reference to a given application state, what is it? > > > But even in this case the accept headers imo are not an > attribute of the application state - they are an application > of the conversation state (though I am open to be challenged > since I myself am not so terribly convinced about it). > > > I don't know if I understand what you mean by 'application of the > conversation state' - but there is no room for a state between > application and resource, given communication in RESTful systems > is (supposed to be) stateless. > > > Sorry, made a mistake there - should've stated " the accept headers > imo are not an attribute of the application state - they are an > _*attribute*_ of the conversation state". Though reading it again I > feel a little lame. I should've been more specific and should've said > "the accept headers imo are not an attribute of the application state > - they are merely the influencers of a representation as a part of a > given conversation" > If they are influencing the representation transfered, they are impacting on the application state. I can agree that there is no difference between the resource state, but that is desired behavior from negotiated representations of the same resource. - Mike
Mike, On Mon, Jul 27, 2009 at 3:15 PM, Mike Kelly <mike@...> wrote: > Dhananjay Nene wrote: > >> >> >> On Mon, Jul 27, 2009 at 2:29 PM, Mike Kelly <mike@... <mailto: >> mike@...>> wrote: >> >> >> Hi Dhananjay, >> >> Why would application state always be the same for all >> representations of a resource? >> >> >> An application state reflects where the user is in the overall workflow >> supported / managed by the application. It is therefore not dependent upon >> the representation. eg. if I am being shown an itinerary for my approval >> prior to a final booking, I am in the state of reviewing the itinerary - >> irrespective of whether the itinerary was being rendered as XHTML, JSON, >> XML, PDF or PNG. >> > > I'm not sure if I understand the distinction you make between application > and resource state XHTML / JSON formats are an attribute of the resource representation - they cannot be an attribute of the application state, since from an application perspective, the user is exactly at the same place in the overall workflow irrespective of the resource representation. > > > >> >> >> b) How is a bookmark adequately defined ? This is really a >> question for the author of a browser to answer. If we had a >> browser which say allowed you to also describe the accept >> headers before making the URI, then let us for a moment >> suggest that such a browser the bookmark perhaps could contain >> the accept header. However we don't have such browsers (I >> haven't seen one at least). But it is likely that programmatic >> clients may want to choose to store such URIs for say resuming >> later. Such a browser is again unlikely to 'break' if it is >> able to accept and parse various content types with equal >> capability or works with only one content type which it always >> specifically requests. 
However it could break if in a >> particular situation it has requested a particular resource >> with a non default accept headers and that is specifically >> needed to resume further. In such a situation, perhaps the >> bookmark could store the accept header. >> >> >> That is, essentially, what I was suggesting >> >> >> Yes, thats what I had assumed you were referring to - I was on the other >> hand attempting to separate the notion of the bookmark and the application >> state - ie. they are not one and the same. >> > > If a bookmark is not a stored reference to a given application state, what > is it? In this context it is a stored reference to an application state + a preferred format. > > > >> >> But even in this case the accept headers imo are not an >> attribute of the application state - they are an application >> of the conversation state (though I am open to be challenged >> since I myself am not so terribly convinced about it). >> >> >> I don't know if I understand what you mean by 'application of the >> conversation state' - but there is no room for a state between >> application and resource, given communication in RESTful systems >> is (supposed to be) stateless. >> >> >> Sorry, made a mistake there - should've stated " the accept headers imo >> are not an attribute of the application state - they are an _*attribute*_ of >> the conversation state". Though reading it again I feel a little lame. I >> should've been more specific and should've said "the accept headers imo are >> not an attribute of the application state - they are merely the influencers >> of a representation as a part of a given conversation" >> >> > If they are influencing the representation transfered, they are impacting > on the application state. I can agree that there is no difference between > the resource state, but that is desired behavior from negotiated > representations of the same resource. I would just suggest the following hypothetical scenario. 
If I received a XHTML version of my itinerary and it had a link to obtain a PDF version, and then I click it - would it mean that the I would do a state transition from one state to another or would it be from a state to itself. I am arguing that both are same states ie. the transition is back to the same state, from what I presume you've been stating the user does a state transition to a different state. If I am correct in that assessment, then we differ in the essential understanding of application state and may have work to do to figure out which is the more appropriate interpretation. > > > - Mike > Dhananjay
Dhananjay Nene wrote: > > > Mike, > > On Mon, Jul 27, 2009 at 3:15 PM, Mike Kelly <mike@... > <mailto:mike@...>> wrote: > > Dhananjay Nene wrote: > > > > On Mon, Jul 27, 2009 at 2:29 PM, Mike Kelly > <mike@... <mailto:mike@...> > <mailto:mike@... <mailto:mike@...>>> wrote: > > > Hi Dhananjay, > > Why would application state always be the same for all > representations of a resource? > > > An application state reflects where the user is in the overall > workflow supported / managed by the application. It is > therefore not dependent upon the representation. eg. if I am > being shown an itinerary for my approval prior to a final > booking, I am in the state of reviewing the itinerary - > irrespective of whether the itinerary was being rendered as > XHTML, JSON, XML, PDF or PNG. > > > I'm not sure if I understand the distinction you make between > application and resource state > > XHTML / JSON formats are an attribute of the resource representation - > they cannot be an attribute of the application state, since from an > application perspective, the user is exactly at the same place in the > overall workflow irrespective of the resource representation. This seems to imply tight coupling between resource and application state. I can't agree that these are the same states from an application perspective because they are clearly distinguished by control data in the HTTP message (Accept/Content-Type in this specific case). > > > > > > b) How is a bookmark adequately defined ? This is really a > question for the author of a browser to answer. If we had a > browser which say allowed you to also describe the accept > headers before making the URI, then let us for a moment > suggest that such a browser the bookmark perhaps could > contain > the accept header. However we don't have such browsers (I > haven't seen one at least). But it is likely that > programmatic > clients may want to choose to store such URIs for say > resuming > later. 
Such a browser is again unlikely to 'break' if it is > able to accept and parse various content types with equal > capability or works with only one content type which it > always > specifically requests. However it could break if in a > particular situation it has requested a particular resource > with a non default accept headers and that is specifically > needed to resume further. In such a situation, perhaps the > bookmark could store the accept header. > > > That is, essentially, what I was suggesting > > > Yes, thats what I had assumed you were referring to - I was on > the other hand attempting to separate the notion of the > bookmark and the application state - ie. they are not one and > the same. > > > If a bookmark is not a stored reference to a given application > state, what is it? > > In this context it is a stored reference to an application state + a > preferred format. The array of representations for a given resource are not application states in their own right? How would you model negotiating these formats in a state machine, without treating them as separate states? > > > > > But even in this case the accept headers imo are not an > attribute of the application state - they are an > application > of the conversation state (though I am open to be > challenged > since I myself am not so terribly convinced about it). > > > I don't know if I understand what you mean by 'application > of the > conversation state' - but there is no room for a state between > application and resource, given communication in RESTful > systems > is (supposed to be) stateless. > > > Sorry, made a mistake there - should've stated " the accept > headers imo are not an attribute of the application state - > they are an _*attribute*_ of the conversation state". Though > reading it again I feel a little lame. 
I should've been more > specific and should've said "the accept > headers imo are not an > attribute of the application state - they are merely the > influencers of a representation as a part of a given conversation" > > > If they are influencing the representation transferred, they are > impacting on the application state. I can agree that there is no > difference between the resource state, but that is desired > behavior from negotiated representations of the same resource. > > > I would just suggest the following hypothetical scenario. If I > received an XHTML version of my itinerary and it had a link to obtain a > PDF version, and then I click it - would it mean that I would do a > state transition from one state to another or would it be from a state > to itself. My understanding is that you are describing an application state transition, so I would say the former - as mentioned above this is clearly distinguished by the control data within each of the HTTP messages. Cheers, Mike
My comments at the very end. On Mon, Jul 27, 2009 at 3:56 PM, Mike Kelly <mike@...> wrote: > Dhananjay Nene wrote: > >> >> >> Mike, >> >> On Mon, Jul 27, 2009 at 3:15 PM, Mike Kelly <mike@... <mailto: >> mike@...>> wrote: >> >> Dhananjay Nene wrote: >> >> >> >> On Mon, Jul 27, 2009 at 2:29 PM, Mike Kelly >> <mike@... <mailto:mike@...> >> <mailto:mike@... <mailto:mike@...>>> wrote: >> >> >> Hi Dhananjay, >> >> Why would application state always be the same for all >> representations of a resource? >> >> >> An application state reflects where the user is in the overall >> workflow supported / managed by the application. It is >> therefore not dependent upon the representation. eg. if I am >> being shown an itinerary for my approval prior to a final >> booking, I am in the state of reviewing the itinerary - >> irrespective of whether the itinerary was being rendered as >> XHTML, JSON, XML, PDF or PNG. >> >> >> I'm not sure if I understand the distinction you make between >> application and resource state >> >> XHTML / JSON formats are an attribute of the resource representation - >> they cannot be an attribute of the application state, since from an >> application perspective, the user is exactly at the same place in the >> overall workflow irrespective of the resource representation. >> > > This seems to imply tight coupling between resource and application state. > I can't agree that these are the same states from an application perspective > because they are clearly distinguished by control data in the HTTP message > (Accept/Content-Type in this specific case). > > > >> >> >> >> b) How is a bookmark adequately defined ? This is really a >> question for the author of a browser to answer. If we had a >> browser which say allowed you to also describe the accept >> headers before making the URI, then let us for a moment >> suggest that such a browser the bookmark perhaps could >> contain >> the accept header. 
However we don't have such browsers (I >> haven't seen one at least). But it is likely that >> programmatic >> clients may want to choose to store such URIs for say >> resuming >> later. Such a browser is again unlikely to 'break' if it is >> able to accept and parse various content types with equal >> capability or works with only one content type which it >> always >> specifically requests. However it could break if in a >> particular situation it has requested a particular resource >> with a non default accept headers and that is specifically >> needed to resume further. In such a situation, perhaps the >> bookmark could store the accept header. >> >> >> That is, essentially, what I was suggesting >> >> >> Yes, thats what I had assumed you were referring to - I was on >> the other hand attempting to separate the notion of the >> bookmark and the application state - ie. they are not one and >> the same. >> >> >> If a bookmark is not a stored reference to a given application >> state, what is it? >> >> In this context it is a stored reference to an application state + a >> preferred format. >> > > The array of representations for a given resource are not application > states in their own right? > > How would you model negotiating these formats in a state machine, without > treating them as separate states? > > >> >> >> >> But even in this case the accept headers imo are not an >> attribute of the application state - they are an >> application >> of the conversation state (though I am open to be >> challenged >> since I myself am not so terribly convinced about it). >> >> >> I don't know if I understand what you mean by 'application >> of the >> conversation state' - but there is no room for a state between >> application and resource, given communication in RESTful >> systems >> is (supposed to be) stateless. 
>> >> >> Sorry, made a mistake there - should've stated " the accept >> headers imo are not an attribute of the application state - >> they are an _*attribute*_ of the conversation state". Though >> reading it again I feel a little lame. I should've been more >> specific and should've said "the accept headers imo are not an >> attribute of the application state - they are merely the >> influencers of a representation as a part of a given conversation" >> >> >> If they are influencing the representation transferred, they are >> impacting on the application state. I can agree that there is no >> difference between the resource state, but that is desired >> behavior from negotiated representations of the same resource. >> >> >> I would just suggest the following hypothetical scenario. If I received an >> XHTML version of my itinerary and it had a link to obtain a PDF version, and >> then I click it - would it mean that I would do a state transition from >> one state to another or would it be from a state to itself. >> > > My understanding is that you are describing an application state > transition, so I would say the former - as mentioned above this is clearly > distinguished by the control data within each of the HTTP messages. > > > Cheers, > Mike > > I think it boils down to whether one interprets a control-header and a representation format as influencing an application state. I would only reiterate that since (and so long as) the representation does not influence where the user is in the overall workflow of the activity he is trying to achieve, the control headers or representation formats should not lead him to different application states (i.e. so long as the URI is the same, it's always the same application state).
Some of the thoughts which might support this idea are:

a) If different control headers mean different application states, would that mean different users viewing the same page using different authentication headers are in different application states? Not so, imo. Similarly, if the response returns with different cache-control directives for the same page (e.g. expiry time), does that imply different application states?

b) If one assumes a new (hypothetical) browser which does not understand HTML but works only with, say, PDF or Flash representations, would one then assume that two users viewing the same page/URI from two different browsers are looking at two different application states? Again, that does not sound appropriate to me.

Having said that, I tried to search for an authoritative definition of what an Application State is, including in Fielding's thesis - however, I could not find one. Does anyone in the group have any opinion on the matter (just to broaden the conversation a little bit, since I suspect Mike and I have pushed it as far as we could perhaps carry it - that only being an assumption from my side)? Dhananjay
> > I'm not sure if I understand the distinction you make between > > application and resource state > > > > XHTML / JSON formats are an attribute of the resource representation > - > > they cannot be an attribute of the application state, since from an > > application perspective, the user is exactly at the same place in the > > overall workflow irrespective of the resource representation. > > This seems to imply tight coupling between resource and application > state. I can't agree that these are the same states from an application > perspective because they are clearly distinguished by control data in > the HTTP message (Accept/Content-Type in this specific case). I'm going back to what I stated before. If you care about identifying separate things, then they should be separate things. Conneged representations are *not* separate things because they share the same identifier. The application state is dependent on you navigating between things, not between representations of those things. If the representations have a meaningful distinction for the continuation of the state transitions, they should have identifiers. The identifiers we have are URIs to resources. If representations matter, then they should have their own identifiers, as such be promoted to resources themselves. Let's put it another way. If your application state depends not only on the existing identifier, but also on control state data, you have indeed created a new identifier, aka (URI+Method+Accept headers). I fail to see what this brings you, as opposed to simply identifying representations as separate things? Client-driven conneg can only work for resource representations that do not differ enough to be meaningful to either the UA or the application state, hence why there is no provision for the extra data required to identify those. You say it yourself in the next paragraph: > > If a bookmark is not a stored reference to a given application > > state, what is it?
> > > > In this context it is a stored reference to an application state + a > > preferred format. > > The array of representations for a given resource are not application > states in their own right? > > How would you model negotiating these formats in a state machine, > without treating them as separate states? If they have any meaningful distinction, then you represent them as separately identified resources. If they do not have meaningful distinction, then you don't. The required data for a state transition to occur is the current state of the application. In the case of a bookmark for a new client, there is no application state on the client. I'll also reply to your various questions from Thursday: > I understand that a representation can be 'made a resource', although > the phrase 'treated as if it were' would be more appropriate. I'm afraid we may just not be using the same terminology. I use the word 'identifier' with the meaning of 'identifying a resource', and 'resource' as 'the smallest unit i can deal with when doing state transitions in ReST' and as 'the thing / conceptual mapping / awww:resource identified by an identifier'. As such, I object strongly to the idea that there would be such a thing as 'treating a thing as a resource'. If it's important enough for me to interact with, and if it's identified, it's a resource. > It's slightly confusing when you state that an identifier to a resource > is 'enough', and then immediately contradict this position by > entertaining 'the case that you want to identify a specific > representation'. I apologize for introducing confusion. Let me attempt to reformulate better. If you want to interact with a thing, it has to be a resource. If you want to interact with two things distinctively, they are then fundamentally, by the ReST and HTTP requirement of resources having an identifier, two distinct resources. 
If, however, you interact with a single resource that has multiple representations that you do not need to address specifically, then you may use conneg and have only one thing. To take a specific example, if the plain text representation of this email is available at localhost/2234, and I wanted to interact with the text/plain media type of that resource, I could do so by using its identifier. I could also end up using conneg so that I could receive the text in both UTF-8 and UTF-16 variations. If, on the other hand, I was maintaining a French and an English version of that message, I would find it useful to update the French version without impacting the English version. While they may carry the same meaning, the French and English versions are hardly the "same" thing, as the difference between the two languages will always trigger some slight variations in the meaning of the text. Those two language variations may well be "the response Sebastien is giving to Mike", but the two messages are two distinct texts, that are distinct things: "the French response" and "the English response". And that's why they'd have two distinct identifiers and be two distinct resources. > So it's hard, from that, to make sense of whether or not treating a > representation as a resource 'ought to be enough'. This is particularly > apparent given, as I mentioned before, the only significant benefit > from > doing this is that you get *plain text* hyperlinks to your > representations - the value of this is questionable if an identifier to > a resource is 'enough' in the first place. So you suggest that identifying a resource and dereferencing identifiers by following hyperlink controls is not enough to navigate application state? You're either proposing a new identifier that includes http control data, or you're not happy with how the web currently works. Either way, I'm not so sure what your suggestion actually buys http, ReST, or the web.
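[Editor's note] Seb's distinction earlier in this message, charset conneg on one resource versus separately identified French and English resources, can be sketched in a few lines of Python. This is only an illustration of the argument, with a hypothetical in-memory store and made-up URIs (the `;lang=` suffixes are mine, not anything proposed in the thread):

```python
# One resource: Seb's reply, negotiable by charset. The UTF-8 and
# UTF-16 variants are derived on demand from the same stored state,
# so there is nothing to identify separately.
messages = {"/2234": "My reply to Mike"}

def get(uri, accept_charset="utf-8"):
    # Conneg: same resource, representation chosen per request.
    return messages[uri].encode(accept_charset)

# Two resources: the French and English responses are distinct
# things with distinct identifiers, stored independently.
messages["/2234;lang=fr"] = "Ma réponse à Mike"
messages["/2234;lang=en"] = "My reply to Mike"

def put(uri, text):
    # Updating one language version leaves the other untouched.
    messages[uri] = text
```

Here `get("/2234", "utf-16")` and `get("/2234", "utf-8")` decode to the same text (one resource, two byte-level representations), while `put("/2234;lang=fr", ...)` changes the French resource without touching the English one.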
> > The state transition is the result of dereferencing such identifier, > and is > > dependent on the state of the application. As such, it is of course > > dependent on the current state the client has, if any, and the > current state > > the server holds. > > > > If it is possible for a hyperlink to include control data, then a > hypermedia driven state transition is more than simply dereferencing an > href URI. No, hyperlink controls do not do that. Let's take an example of a well-known control. <form action="/" method="POST" enctype="multipart/form-data" /> The URI in the action attribute is the identifier of the resource on which an operation will execute. Everything else is a hint to the user-agent to let it understand the representation format it can send, and the http method being expected from it. What this means is that the target resource is identified by its URI. The state transition is helped by hinting to the UA how to process the transition, aka how to build the representation that will be used in the transition. At no point is there a new identifier for (/ with POST with mediatype multipart). That would be a new identifier entirely, and one I see very little value in. At no point either is there any assertion that the target resource only supports the multipart/form-data media type, or that you can only operate on it with the POST method. That's why this data is part of a hypermedia link, and why this hypermedia link contains both the identifier and the means to interact with it. Nothing prevents me from achieving the same result by using OPTIONS to discover the methods and the media types (using the Link: header) and achieve the same state transition. That's why I said they were at different levels in the stack. Resources are things, representations are what you use to manipulate things, and hypermedia controls are the hints that help the UA discover resources and build the representations to manipulate those things.
The core of your argument has been around conneg, but I think you're wanting to see more in conneg than what it was designed for, and I still do not understand why it is such an issue to identify distinct things distinctly. Maybe if you could provide some scenarios that you cannot achieve through the current understanding of what resources and representations are in the context of ReST we could have a better substrate for conversation. Seb
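[Editor's note] Seb's point above, that a form control bundles a single resource identifier with hints for dereferencing it rather than minting a new identifier, can be sketched in Python. This is only an illustration of the argument; the `FormControlParser` class is my own naming, not anything from the thread:

```python
from html.parser import HTMLParser

class FormControlParser(HTMLParser):
    """Extract the identifier and the UA hints carried by a <form> control."""
    def __init__(self):
        super().__init__()
        self.controls = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            a = dict(attrs)
            self.controls.append({
                # The URI is the only identifier; it names the resource.
                "uri": a.get("action"),
                # method and enctype are hints telling the UA how to
                # build the representation for the state transition.
                "method": a.get("method", "GET").upper(),
                "media_type": a.get("enctype",
                                    "application/x-www-form-urlencoded"),
            })

parser = FormControlParser()
parser.feed('<form action="/" method="POST" enctype="multipart/form-data" />')
control = parser.controls[0]
# control["uri"] is just "/"; there is no combined
# (URI + method + media type) identifier anywhere.
```

The same hints could, as Seb notes, be discovered out of band (e.g. via OPTIONS), which is why they sit at a different level of the stack than identification.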
Seb, Essentially: I don't assume that, just because I need to provide a hyperlink to a representation, it must have its own URI. Maybe it is possible to have a new type of hypermedia with hyperlinks capable of providing control data to a UA - I don't see this as far different from using the end of a URI - other than the benefit of keeping the distinction between resources and representations clear. Specific responses below: Sebastien Lambla wrote: >>> I'm not sure if I understand the distinction you make between >>> application and resource state >>> >>> XHTML / JSON formats are an attribute of the resource representation >>> >> - >> >>> they cannot be an attribute of the application state, since from an >>> application perspective, the user is exactly at the same place in the >>> overall workflow irrespective of the resource representation. >>> >> This seems to imply tight coupling between resource and application >> state. I can't agree that these are the same states from an application >> perspective because they are clearly distinguished by control data in >> the HTTP message (Accept/Content-Type in this specific case). >> > > I'm going back to what I stated before. If you care about identifying > separate things, then they should be separate things. Conneged > representations are *not* separate things because they share the same > identifier. > Again, your terminology is confusing. Separate representations are separate 'things'. They aren't separate resources, which is why they are treated as representations of a single resource, and given the same *resource* identifier. That is all a URI is for. > The application state is dependent on you navigating between things, not > between representations of those things. If the representations have a > meaningful distinction for the continuation of the state transitions, they > should have identifiers. Again, I'm confused by this notion of 'things'.
If I substitute in 'resource' instead, the distinction that can be drawn between resource and representation seems to become uncomfortably vague, and so you have to question whether there is any value in the distinction at all. > The identifiers we have are URIs to resources. If > representations matter, then they should have their own identifiers, as such > be promoted to resources themselves. > Yes - I'm still clear that representations *can* be treated as if they were resources! :) > Let's put it another way. If your application state depends not only on the > existing identifier, but also on control state data, you have indeed created > a new identifier, aka (URI+Method+Accept headers). I fail to see what this > brings you, as opposed to simply identifying representations as separate > things? > Self-descriptive messages that contain no ambiguity whatsoever over what resource was being requested and how the representation was negotiated. What do you get other than plain text hyperlinks to representations?! > Client-driven conneg can only work for resource representations that do not > differ enough to be meaningful to either the UA or the application state, > hence why there is no provision for the extra data required to identify > those. > You mean like accept headers? > You say it yourself in the next paragraph: > I don't know if you're reading that from my perspective on what constitutes application state (i.e. the entire message). > >>> If a bookmark is not a stored reference to a given application >>> state, what is it? >>> >>> In this context it is a stored reference to an application state + a >>> preferred format. >>> >> The array of representations for a given resource are not application >> states in their own right? >> >> How would you model negotiating these formats in a state machine, >> without treating them as separate states? >> > > If they have any meaningful distinction, then you represent them as > separately identified resources.
If they do not have meaningful distinction, > then you don't. > That's true *if* a state can only be defined by a URI, and not the rest of the message. > >> I understand that a representation can be 'made a resource', although >> the phrase 'treated as if it were' would be more appropriate. >> > > I'm afraid we may just not be using the same terminology. I use the word > 'identifier' with the meaning of 'identifying a resource', and 'resource' as > 'the smallest unit i can deal with when doing state transitions in ReST' "A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time." > and > as 'the thing / conceptual mapping / awww:resource identified by an > identifier'. As such, I object strongly to the idea that there would be such > a thing as 'treating a thing as a resource'. If it's important enough for me > to interact with, and if it's identified, it's a resource. > > ... :) >> It's slightly confusing when you state that an identifier to a resource >> is 'enough', and then immediately contradict this position by >> entertaining 'the case that you want to identify a specific >> representation'. >> > > I apologize for introducing confusion. Let me attempt to reformulate better. > > If you want to interact with a thing, it has to be a resource. If you want > to interact with two things distinctively, they are then fundamentally, by > the ReST and HTTP requirement of resources having an identifier, two > distinct resources. If however you interact with a single resource, that has > multiple representations that you do not need to address specifically, then > you may use conneg and have only one thing. > > To take a specific example, if the plain text representation of this email > is available at localhost/2234, and I wanted to interact with the plain/text > media type with that resource, I could do so by using it's identifier. 
I > could also endup using conneg so that I could receive the text in both UTF-8 > and UTF-16 variations. If, on the other hand, I was maintaining a French and > an English version of that message, I would find it useful to update the > French version without impacting the English version. While they may carry > the same meaning, the French and English versions are hardly the "same" > thing, as the difference between the two languages will always trigger some > slight variations in the meaning of the text. > Actually the language example is exactly the kind of situation that can warrant the translated documents to be treated as resources, with separate URIs. This is because the documents are actually being treated as separate resources in their own right - where one is updated, and the other is non-equivalent until it is translated and updated at some other point in time. If that system worked in a way which tied the two together by automatically translating an update of one to the other, then I would be inclined to use HTTP conneg instead. > Those two language variations may well be "the response Sebastien is giving > to Mike", but the two messages are two distinct texts, that are distinct > things: "the French response" and "the English response". And that's why > they'd have two distinct identifiers and be two distinct resources. > Yep, no problem with that particular example - it's likely the right way to go given the problems associated with automated translation. >> So it's hard, from that, to make sense of whether or not treating a >> representation as a resource 'ought to be enough'. This is particularly >> apparent given, as I mentioned before, the only significant benefit >> from >> doing this is that you get *plain text* hyperlinks to your >> representations - the value of this is questionable if an identifier to >> a resource is 'enough' in the first place. 
>> > > So you suggest that identifying a resource and dereferencing identifiers by > following hyperlink controls is not enough to navigate application state? > Not if I want to link to specific representations of a resource, no. > You're either proposing a new identifier that includes http control data, or > you're not happy with how the web currently works. Either way, I'm not so > sure what you're suggesting is actually buying, to http, to ReST, or to the > web. > > I'm actually suggesting a new form of hypermedia which provides markup for hyperlinks that can include control data i.e.: <a href="/resource" accept="application/pdf">link to pdf representation</a> >>> The state transition is the result of dereferencing such identifier, >>> >> and is >> >>> dependent on the state of the application. As such, it is of course >>> dependent on the current state the client has, if any, and the >>> >> current state >> >>> the server holds. >>> >>> >> If it is possible for a hyperlink to include control data, then a >> hypermedia driven state transition is more than simply dereferencing an >> href URI. >> > > No, hyperlink controls do not do that. Let's take an example of a well-known > control. <form action="/" method="POST" enctype="multipart/form-data" /> > > The URI in action is the identifier to the resource for which an operation > will execute. Everything else is a hint to the user-agent to let it > understand the representation format it can send, and the http method being > expected from it. > > What this means is that the target resource is identified by its URI. The > state transition is helped by hinting to the UA how to process the > transition, aka how to build the representation that will be used in the > transition. > > At no point is there a new identifier for (/ with POST with mediatype > multipart). That would be a new identifier entirely, and one I see very > little value in. 
At no point either is there any assertion that the target > resource only support the multipart/form-data media type, or that you can > only operate on it with the POST method. > > That's why this data is part of a hypermedia link, and why this hypermedia > link contains both the identifier and the mean to interact with it. > > Nothing prevents me from achieving the same result by using OPTIONS to > discover the methods and the media types (using the Link: header) and > achieve the same state transition. > > That's why I said they were at different levels in the stack. Resources are > things, representations are what you use to manipulate things, and > hypermedia controls are the hints that help the UA discover resources and > build the representations to manipulate those things. > I'm trying to establish whether or not state transitions include the entire HTTP message, and if so - whether hypermedia controls should provide the mechanisms to indicate control data. I understand the practical limitations of browsers and HTML - you haven't actually said much here about HTTP or potential hypermedia formats/controls. > The core of your argument has been around conneg, but I think you're wanting > to see more in conneg than what it was designed for, and I still do not > understand why it is such an issue to identify distinct things distinctly. > Identify distinct resources with resource identifiers. Negotiate representations of resources with the appropriate conneg controls. Create hypermedia formats which allow the server to specify a hyperlink with specific control data required to negotiate a specific representation. > Maybe if you could provide some scenarios that you cannot achieve through > the current understanding of what resources and representations are in the > context of ReST we could have a better substrate for conversation. > Pretend twitter doesn't use (.xml/.json/.html) URIs to separate representations of a tweet. 
How could I add links to my homepage for each of the xml, json, and html representations? An accept attribute in the hyperlink would work:

<a href="twitter.com/tweet/1234" accept="application/xml">xml of my tweet</a>
<a href="twitter.com/tweet/1234" accept="application/json">json of my tweet</a>
<a href="twitter.com/tweet/1234">Default (HTML) of my tweet</a>

If I decided I wanted to update the tweet, a PUT would automatically invalidate the cache for all representations at the URI - and any other intermediary could see exactly which resource the request/response messages were about without having to assume any relationships between any (what are supposed to be opaque) URIs. Cheers, Mike
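[Editor's note] Mike's cache argument above can be sketched as follows. This is a hypothetical in-memory tweet store and cache, not anything Twitter actually implements: because every negotiated representation shares one URI, a PUT can drop all cached variants in a single step, with no assumed relationship between opaque URIs.

```python
# Hypothetical store and cache; URIs and media types are made up
# to illustrate the single-URI argument from the thread.
tweets = {"/tweet/1234": "hello world"}
cache = {}  # (uri, media_type) -> rendered representation

def render(uri, media_type):
    text = tweets[uri]
    if media_type == "application/json":
        return '{"text": "%s"}' % text
    if media_type == "application/xml":
        return "<tweet>%s</tweet>" % text
    return "<html><body>%s</body></html>" % text

def get(uri, accept="text/html"):
    # Conneg: one URI, representation selected by the Accept value.
    key = (uri, accept)
    if key not in cache:
        cache[key] = render(uri, accept)
    return cache[key]

def put(uri, text):
    tweets[uri] = text
    # One URI means every negotiated variant can be invalidated at
    # once, without knowing how many variants exist.
    for key in [k for k in cache if k[0] == uri]:
        del cache[key]
```

After `put("/tweet/1234", ...)`, the next `get` for any media type re-renders from the updated resource state.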
> Essentially: I don't assume just because I need to provide a hyperlink > to a representation, that it must have its own URI. That's the state of web architecture, and of ReST over http. > Maybe it is > possible > to have a new type of hypermedia with hyperlinks capable of providing > control data to a UA - I don't see this as far different from using the > end of a URI - other than the clear benefits of keeping the distinction > between resources and representations clear. URIs identify resources. If you provide a new media type that requires both URI and control data, you created a new identifier, one that is not a URI only, and one that is not in use by the rest of the web architecture. > > I'm going back to what I stated before. If you care about identifying > > separate things, then they should be separate things. Conneged > > representations are *not* separate things because they share the same > > identifier. > > > > Again, your terminology is confusing. It seems to be a leitmotiv of this conversation. Let's try once more. > Separate representations are separate 'things'. > > They aren't separate resources; which is why they are treated as > representations of a single resource. A resource is a thing, anything, that has an identifier. A thing, anything, that doesn't have an identifier, is not a resource. A thing, anything, that you need to identify or operate upon by applying a state transition has to be a resource. The identifier in ReST over http is the URI. > > The application state is dependent on you navigating between things, > not > > between representation of those things. If the representations have a > > meaningful distinction for the continuation of the state transitions, > they > > should have identifiers. > > Again, I'm confused by this notion of 'things'. 
If I substitute in > 'resource' instead, the distinction that can be drawn between resource > and representation seems to become uncomfortably vague, and so you have > to question whether there is any value in the distinction at all. Let's retry this again, reformulated for hopefully more clarity. A representation of a resource is a set of bytes that "represent" a resource and is used as the data payload when operating on resources. It happens that sometimes a resource, when dereferenced, has one representation that is the full and entire "representation" of the resource. In this instance, the representation may be the resource. For example, localhost/mycv.doc is the resource "the word document that describes Seb's professional life". There is no need for a distinction between resource and representation, as the resulting representation is the resource. The distinction between resource and representation is as vague as the naming function is. If you need to identify "the last customer that registered on my site, in html", that's a resource for which only an html representation exists. If you need to identify "The last customer that registered on my site", that's a resource, that happens to have an html representation. If navigating the application state requires going back to the html version, and not for example the json version, then you have two resources with separate identifiers, as they identify a different "thing". If it doesn't matter to anyone, then you can have conneg on one resource with one identifier. As for the value of distinguishing resource and representation, it has the value of being required to ensure a late-bound system where a resource can be identified before a representation can be produced, or even before the resource itself exists.
Without the distinction, not only do you limit the value of identifying things to those that have a representation that can be retrieved, you also prevent other schemes that cannot be dereferenced, such as mailto:. > Self-descriptive messages that contain no ambiguity whatsoever over > what resource was being requested and how the representation was negotiated. > What do you get other than plain text hyperlinks to representations?! A bookmark is just the plain text hyperlink. A hyperlink within a media type may have additional hints to help in dereferencing. It still doesn't impact the "identification" as opposed to the "dereferencing". Identifying is done independently, and at a higher level in the web stack, than dereferencing, and as such than representation. > > Client-driven conneg can only work for resource representations that do not > > differ enough to be meaningful to either the UA or the application state, > > hence why there is no provision for the extra data required to identify > > those. > You mean like accept headers? Again, it's a matter of layering. Resources require URIs. HTTP requires resources. The operation of identifying a resource is not tied to HTTP; accept headers are metadata to help find a suitable representation when dereferencing. Media types are at the right level in the stack to add control data to hint as to which representation may be retrieved, but they don't change the semantics of resource identification. I think your question comes down to "shouldn't they, by specializing", and I've already made my point clear: creating a new resource when you want to uniquely identify something is enough, requires fewer changes to webarch, and is cheap. > > You say it yourself in the next paragraph: > I don't know if you're reading that from my perspective on what constitutes > application state (i.e. the entire message).
I am making the point that application state is the sum of identifications, hypermedia controls, current state on the server and current state on the client. If your question was, can one resume a state transition by bookmarking the state of the current application, and be brought back to where they were, then this would depend on the state on the client: if the page is in cache, and the data in the form has been persisted by the browser when bookmarking, then yes. If there is no cache, and only a URI exists, it's up to the client to re-request state from the server to "rehydrate" itself. The browser cache, and whatever data is saved when bookmarking, is the local storage that is part of the general application state. Application crash recovery in browsers is a good example of persisted application state being resumed, so it's already there, but it encompasses all the state that was on the client that composes part of the application state. Redefining the identifier mechanism to solve the false problem of conneg is not buying you enough to be useful, when the existing mechanism can work just as well without redefining such an essential functionality. > That's true *if* a state can only be defined by a URI, and not the rest > of the message. Application state is not defined by either. It's defined by the whole of the browser cache, local storage, content of forms, plus everything that's on the server. If you lose the state on the client, being able to bookmark a specific representation won't buy you any better functionality than the one that already exists. > > I'm afraid we may just not be using the same terminology. I use the word > > 'identifier' with the meaning of 'identifying a resource', and 'resource' as > > 'the smallest unit I can deal with when doing state transitions in ReST' > "A resource is a conceptual mapping to a set of entities, not the entity > that corresponds to the mapping at any particular point in time."
Then I think you'll agree with me that at no point in the ReST dissertation, or in webarch for that matter, is anyone talking about identifying and manipulating the set of entities themselves, without the 'resource' concept. It so happens that the resource in question may be "the customer as xml" as opposed to "the customer". > I'm actually suggesting a new form of hypermedia which provides markup > for hyperlinks that can include control data i.e.: > > <a href="/resource" accept="application/pdf">link to pdf > representation</a> Adding such a hint to a specific media type is possible, and the <link> tag already does that. But it's a hint, not part of the identification function. If it was, you suddenly couldn't copy and paste a URL, or email it to someone else, or link it in a wiki or an atom feed. > Identify distinct resources with resource identifiers. Negotiate > representations of resources with the appropriate conneg controls. > Create hypermedia formats which allow the server to specify a hyperlink > with specific control data required to negotiate a specific > representation. See above: if you *require* the control data, you redefine the identification functionality and break the rest of the web architecture. Not ignoring the rest of the message, but my argument will be the same. Seb
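The distinction Seb draws can be sketched in a few lines: one resource whose representation is chosen via conneg, versus two separately identified resources when the representation itself matters for application state. This is a hypothetical illustration (the URIs, payloads and function names are all made up, not anyone's actual API):

```python
# Option 1: one resource, one URI; the representation is chosen at
# dereference time from the Accept header (content negotiation).
REPRESENTATIONS = {
    "text/html": "<p>last customer</p>",
    "application/json": '{"customer": "last"}',
}

def get_negotiated(uri, accept):
    """Return (status, body) for a conneg'd resource."""
    if uri != "/customers/last":
        return 404, None
    body = REPRESENTATIONS.get(accept)
    return (200, body) if body else (406, None)

# Option 2: if the application state depends on coming back to *the HTML
# version* specifically, that version is a distinct "thing" and gets its
# own identifier; no control data is needed to dereference it.
SEPARATE_RESOURCES = {
    "/customers/last.html": REPRESENTATIONS["text/html"],
    "/customers/last.json": REPRESENTATIONS["application/json"],
}

def get_identified(uri):
    """Return (status, body) for a separately identified resource."""
    body = SEPARATE_RESOURCES.get(uri)
    return (200, body) if body else (404, None)
```

In the second option a plain-text hyperlink (a bookmark, a URL in an email) is sufficient to get back to the exact representation, which is the property Seb argues the identification layer must preserve.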
I consider plugins to be code-on-demand. They just have an awful UI as commonly implemented. Mark.
Hi,
I recently wrote a paper outlining a framework that would RESTify SOAP requests e.g. a SOAP request would be mapped to a RESTful GET/PUT/POST/DELETE. One of the feedbacks was that "it is not clear what would happen for SOAP requests of large/complex data structures (several MBs of XML data). It does not seem possible to map these to URI strings (which have limited length, up to a few Kbs) so the proposed framework is not likely to work in real-world industrial applications of Web services technology".
My question here is: is this accurate? Are SOAP retrieval requests so complex that they cannot be accommodated in a URI? The return data is not an issue - I just return the same as SOAP would return.
Thanks,
Sean.
URL length restrictions are real - for example Internet Explorer chokes after 2083 characters <http://support.microsoft.com/kb/208427>. Worse, they're arbitrary... you can run into trouble passing URLs from the shell or command line via API calls, and each browser has different ideas. Regardless, things like SAML still work reliably, but for arbitrary-length content stick to POSTs. Sam
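Sam's advice can be mechanised: serialize the query, and if the resulting URL would exceed the most restrictive known limit, switch to a POST with the data in the body. A minimal sketch; the helper name is hypothetical, and the 2083-character threshold is the Internet Explorer limit cited above:

```python
from urllib.parse import urlencode

IE_URL_LIMIT = 2083  # the Internet Explorer limit cited above

def choose_method(base_url, params):
    """Use GET while the encoded URL stays under the most restrictive
    known limit; fall back to POST with the data in the body otherwise.
    Returns (method, url, body)."""
    url = base_url + "?" + urlencode(params)
    if len(url) <= IE_URL_LIMIT:
        return ("GET", url, None)
    return ("POST", base_url, urlencode(params))
```

A small query stays a GET (and so remains cacheable); a multi-KB one goes in the request body, which is exactly the trade-off being discussed in this thread.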
Thanks Sam. The reference will be useful. I suppose my query is: are SOAP requests so complex that they cannot be accommodated in a GET?
Regards,
Sean.
--- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote: > > Thanks Sam. The reference will be useful. I suppose my query is: are SOAP requests so complex that they cannot be accommodated in a GET? > > Regards, > Sean. What is the complexity? I don't get that. I haven't experienced this personally.
Yes, I agree, the uniform interface should be maintained. In other words, if the SOAP message is "updateCustomer" then that should map to a PUT, "addCustomer" should map to a POST, "deleteCustomer" to DELETE and "getCustomer" to a GET.
The key is that SOAP POSTs "get" (and "delete") type requests, which I think should be mapped to a RESTful GET (and DELETE). Having a GET, we then have cacheability...
Their feedback suggests that in complex scenarios the URI will be an issue. How complex can "get" and "delete" type requests be?
Sean.
P.S. I know of a financial institution that issues POX read messages of 500 bytes. Throw some SOAP headers on that and I think 1K should be enough...
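The operation-name mapping Sean describes can be sketched as a naive prefix table. This is an assumption-laden illustration: the prefixes are drawn from the examples in this thread ("getCustomer", "addCustomer", ...), and a real framework would need explicit metadata rather than naming conventions:

```python
# Naive SOAP-operation-to-HTTP-method mapping, keyed on conventional
# operation-name prefixes (an assumption; real services may not follow them).
PREFIX_TO_METHOD = {
    "get": "GET",
    "add": "POST",
    "update": "PUT",
    "delete": "DELETE",
}

def map_soap_operation(operation_name):
    """Map a SOAP operation like 'updateCustomer' to an HTTP method."""
    for prefix, method in PREFIX_TO_METHOD.items():
        if operation_name.startswith(prefix):
            return method
    return "POST"  # unknown operations default to POST (un-cacheable, safe)
```

The payoff of mapping "get"-style operations to GET is, as Sean notes, cacheability; the unresolved question in this thread is whether the operation's parameters fit in the URI at all.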
________________________________
From: David Stanek <dstanek@...>
To: Sean Kennedy <seandkennedy@...>
Cc: Sam Johnston <samj@...>; Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Thursday, 30 July, 2009 13:26:03
Subject: Re: [rest-discuss] URI length restriction
On Thu, Jul 30, 2009 at 6:35 AM, Sean Kennedy<seandkennedy@...> wrote:
>
>
> Thanks Sam. The reference will be useful. I suppose my query is : are SOAP
> requests that complex that they cannot be accomodated in a GET?
>
You can use the body for a PUT and a POST. All of the data should not
be on the query string. It sounds like you may be confusing REST with
"jam everything into the URL."
--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
Our current REST-like infrastructure was designed from the ground up to support several connectors, and we have at the moment HTTP, IMAP, JMS and IntraVM connectors. We are now rewriting the JMS connector to extend it to some of the JMS-specific capabilities, but in a way that is really connector-specific and not resource or server-side specific. Let me give an example:

GET /notifications/toni HTTP/1.0

This will return a list of all the notifications for Toni. This is a simple send/receive conversation. But on the JMS side we want to extend the conversation to use some JMS-specific features, namely a conversation of type listen-receive-keeplistening. How to do that in a RESTful way? Extending our interface by using another verb?

GET /notifications/toni JMS/2.0
LISTEN /notifications/toni JMS/2.0

By promoting the "listen" to a resource of its own?

GET /notifications/toni JMS/2.0
GET /notifications/listener/toni JMS/2.0

By using General-Headers? Or another type of headers?

GET /notifications/toni JMS/2.0

GET /notifications/toni JMS/2.0
Connect: keep-listening

assuming in this last one that if a resource doesn't understand "keep-listening" it will ignore it? My preference goes to this last one, but I'd like to know what others think. Cheers.
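The header-based option has a useful degradation property: a resource that doesn't understand the extension header just serves a plain GET. A sketch of that server-side dispatch, with hypothetical names ("Connect: keep-listening" is taken from the message above; the handler and flags are illustrative):

```python
def handle_request(method, uri, headers, supports_listening=False):
    """Dispatch a GET; honor 'Connect: keep-listening' only when the
    resource understands it, otherwise fall back to a one-shot response."""
    if method != "GET":
        return ("405 Method Not Allowed", None)
    wants_listen = headers.get("Connect") == "keep-listening"
    if wants_listen and supports_listening:
        # listen-receive-keeplistening: the connector keeps the
        # subscription open and pushes further notifications.
        return ("200 OK", "subscribed: " + uri)
    # Ordinary send/receive: the header, if present, is silently ignored.
    return ("200 OK", "notifications for " + uri)
```

The contrast with the LISTEN-verb option is that an unextended resource would have to answer LISTEN with a "Not Supported" error, whereas here the same request simply behaves as a normal GET.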
On 30 Jul 2009 at 14:46, Sean Kennedy wrote: > Yes, I agree, the uniform interface should be maintained. In other > words, if the SOAP message is "updateCustomer" then that should map > to a PUT, "addCustomer" should map to a POST, "deleteCustomer" to > DELETE and "getCustomer" to a GET. > The key is that SOAP POSTs "get" (and "delete") type requests which > I think should be mapped to a RESTful GET (and DELETE). Note that SOAP is now able to use GET to do a get. See SOAP 1.2 - HTTP GET usage (http://www.w3.org/TR/soap12-part0/#L26854). Philippe Mougin
Yes, good point. If SOAP 1.2 with the WebMethod feature enables GET, which would have the benefit of enabling cache intermediaries, then why isn't it used more often? Ref: Stefan (Tilkov) points out in [*] that SOAP 1.2 is just not used very much in practice.
[*] http://www.parleys.com/display/PARLEYS/Home#;slide=6;talk=31817742
On 30 Jul 2009, at 15:29, Philippe Mougin wrote: >> Note that SOAP is now able to use GET to do a get. See SOAP 1.2 - HTTP >> GET usage (http://www.w3.org/TR/soap12-part0/#L26854). In theory, but read "It should be noted that SOAP Version 1.2 does not specify any algorithm on how to compute a URI from the definition of an RPC which has been determined to represent pure information retrieval." That's the issue - SOAP applications do not have any concept of a resource in general. If you try to pass the XML RPC representation as being a resource you will run out of space in the URI. I know, I tried that once! I don't actually see the point of making SOAP REST compliant. It *is* RPC. Just don't use it! Justin
On 30 Jul 2009 at 19:14, Justin Cormack wrote: > I don't actually see the point of making SOAP REST compliant. It *is* > RPC. Well, not necessarily. RPC is one possible usage of SOAP, but other usages (mostly, exchanging XML messages) exist as well. Fundamentally, SOAP defines an extensible envelope, a few architectural concepts (sender, receiver, intermediaries, ...) and a processing model of messages. In this regard, it is somewhat like HTTP, but generally much less beneficial and useful. Philippe Mougin
On Thu, Jul 30, 2009 at 2:40 AM, Sean Kennedy <seandkennedy@...> wrote: > My question here is: is this accurate? Are SOAP retrieval requests so > complex that they cannot be accommodated in a URI? Yes, it's accurate. There are many domains that pass around very complicated requests, and the requests are very large. There's no way, in the general case, to expect them to fit within a URL. I have one right here that's 8K in length, and that's before the SAML annotations that need to be applied to it. It's 166 lines of 'well formatted' XML, just to give an idea of the complexity. Finally, I'm curious how your framework "RESTify"s a SOAP architecture? How does it fundamentally rearchitect the underlying concepts the SOAP API is fronting and turn it into a view of resources? How does it push any state managed by the SOAP back end into the client? How can it do any of these things to "RESTify" a system that happens to be using a SOAP-based API? In fact, what does REST have to do with this at all? Sounds more like you're converting SOAP envelopes into GET/PUT/POST/DELETEs, etc., which are all about HTTP the protocol, not REST the architecture. Adding an HTTP API with GETs and POSTs does not a REST system make. You just end up with an RPC system using a different protocol than SOAP. Regards, Will Hartung
The discussions about SOAP and Web services in general often suffer from a lack of distinction between what's in the specifications and what is actually implemented by Web services vendors.
The specifications are actually much more RESTful than the implementations. The specifications do not define an RPC mechanism - they define a simple one-way message that can be composed into multiple message exchange patterns, including request/response.
However vendors have tended to implement Web services as artifacts generated from programming languages rather than the other way around (although there are toolkits supporting both approaches). Typically when a Web service is generated from an EJB it's generated as an RPC for example since that's the message exchange pattern EJBs support.
In the end I suppose it's the implementations that matter more than the specifications, but significant work was done to make the SOAP and WSDL specifications more RESTful.
Eric
With XML, the larger the file, the more XML metadata is present. I believe the XML metadata can be stripped out when mapping to a URI, and thus the URI required for an equivalent XML message would be much smaller (assuming a "read"/GET).
For example,
<Transaction>CltviewService002</Transaction> goes to /{Transaction} i.e. /CltviewService002 (44 chars going to 18) immediately saving over 50%
Sean.
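Sean's markup-stripping idea can be sketched with a few lines of standard-library XML handling: keep only the element values as path segments and drop the tags. A hypothetical illustration (the function name is made up, and real requests would also need the element *names* to disambiguate, which this sketch deliberately omits, as Sean's example does):

```python
import xml.etree.ElementTree as ET

def xml_to_path(xml_fragment):
    """Strip the XML markup and keep only the text values as URI path
    segments, as in the <Transaction> example above."""
    root = ET.fromstring(xml_fragment)
    parts = [e.text for e in root.iter() if e.text and e.text.strip()]
    return "/" + "/".join(p.strip() for p in parts)
```

For `<Transaction>CltviewService002</Transaction>` (44 characters) this yields `/CltviewService002` (18 characters), matching the over-50% saving claimed above; whether multi-KB requests shrink below browser URL limits this way is exactly the open question in the thread.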
Doesn't HTTP Keep-Alive handle this scenario? Then the listener would continually be doing GETs (or POSTs). It's a little less efficient, but does this small CPU saving really save anything? Then you wouldn't be tunneling any special protocol over HTTP (which is what this would be). -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I hate to turn this into a WS-* vs. REST debate, but a few things I don't like about WS-* are that: * It doesn't leverage HTTP * It requires an envelope format. (This is the same reason I don't like the idea of using Atom for anything other than a blog feed.) In many (most?) cases, the envelope format is necessary because the WS-* specifications don't leverage HTTP. Personally, I'd like to see a lot of these WS-* specifications rethought through REST-colored glasses. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Hi Bill, thanks for your response. However I don't think I understand what you mean. On one side, afaik in HTTP/1.1 all connections are keep-alive unless a Connection: close comes with the response. But nevertheless we are *not* tunneling other protocols over HTTP; we are using JMS, IMAP, JCR and IntraVM connectors on their respective protocols, we just had to "model" our uniform interface on the HTTP one, for obvious reasons. All these connectors connect to a Resource that extracts the relevant information, dispatches the request to the specified service and returns the response using the original transport protocol. So basically the question here is: is it better to extend the Uniform Interface by adding a LISTEN verb (that will originate a Not Supported where appropriate), or to use the headers to signal that, with a GET + Connection: keep-listening that should behave like a normal GET when the header is not understood by the resource? I think the latter is preferable, but I would like to hear other opinions. Cheers.
Sebastien Lambla wrote: >> Essentially: I don't assume just because I need to provide a hyperlink >> to a representation, that it must have its own URI. >> > > That's the state of web architecture, and of ReST over http. > So hyperlink === URI ? i.e. <a href="/foo/bar" accept="application/xml">link</a> - the hyperlink is simply the value of href attribute and not the whole thing? > >> Maybe it is >> possible >> to have a new type of hypermedia with hyperlinks capable of providing >> control data to a UA - I don't see this as far different from using the >> end of a URI - other than the clear benefits of keeping the distinction >> between resources and representations clear. >> > > URIs identify resources. If you provide a new media type that requires both > URI and control data, you created a new identifier, one that is not a URI > only, and one that is not in use by the rest of the web architecture. > > URI's *should* identify resources. But resources need to be identified appropriately. If the rationale is essentially "I want to serve a client this representation, so it needs a URI", what is the point in drawing a distinction between resources and representations? RPC endpoints like /app/updateSomethingSomewhere aren't resources. Representations aren't resources. Sometimes there may be 'things' that share a similar meaning, but have a different media type - those may very well be separate resources and not representations at all which is fine; give them each their own URI. I'm concerned here with correctly identified resources that have multiple representations. > If it was, you suddenly couldn't copy n paste a URL, or email it to someone > else, or link it in a wiki or an atom feed. > Right - so you gain plain text hyperlinks to 'representations' that aren't actually representations at all; they're resources. There are potential solutions to the plain text URI and conneg 'problem' - e.g. 
browsers outputting an html hyperlink when copying a URI from the location bar, or something like an 'open with' menu for hyperlinks (powered by an OPTIONS request to the resource which lists available content-types/languages/encodings/etc.). Cheers, Mike
Tomcat by default sends chunked responses => Transfer-Encoding: chunked

I'm calculating the ETag for the response dynamically as the content is
being written to the response stream. Since I'll know the final value of
the ETag only when the stream is closed, I indicate that using the
Trailer header => Trailer: ETag

I'm ensuring that just before the stream is closed, I set the ETag
header => ETag: SomeETagValue

However, when I look at the response on the client, the ETag header was
never written.

Is this a tomcat specific bug or am I doing something wrong?

Thanks,
Keyur
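For readers following along, the wire format being attempted looks roughly like this (a sketch with invented values; the chunk sizes are the byte counts of each chunk body). In chunked coding, a header named in Trailer is sent only after the final zero-length chunk, so a client has to parse trailers to ever see it:

```http
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Trailer: ETag

4
Wiki
5
pedia
0
ETag: "made-up-value"

```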
I'm not sure that you're on the right discussion list for this issue.
With that said, you have to set headers before you open the outgoing
stream. One way to do that is to write to a buffer (such as a
ByteArrayOutputStream), get the size, write the headers and then flush
the buffer to the servletResponse's OutputStream.

-Solomon

On Wed, Aug 5, 2009 at 1:43 PM, Keyur Shah <keyurva@...> wrote:
> Tomcat by default sends chunked responses => Transfer-Encoding: chunked
>
> I'm calculating the ETag for the response dynamically as the content is
> being written to the response stream. Since I'll know the final value
> of the ETag only when the stream is closed, I indicate that using the
> Trailer header => Trailer: ETag
>
> I'm ensuring that just before the stream is closed, I set the ETag
> header => ETag: SomeETagValue
>
> However, when I look at the response on the client, the ETag header was
> never written.
>
> Is this a tomcat specific bug or am I doing something wrong?
>
> Thanks,
> Keyur
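Solomon's buffer-first approach can be sketched outside the Servlet API like this (the class and method names are invented for illustration; MD5 is chosen arbitrarily as a strong validator). In a real servlet you would call response.setHeader("ETag", ...) and setContentLength(...) before writing the buffered bytes to response.getOutputStream():

```java
import java.io.ByteArrayOutputStream;
import java.security.MessageDigest;

public class BufferedEtag {

    // Hash the fully buffered body into a quoted hex ETag value.
    static String etagFor(byte[] body) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder("\"");
        for (byte b : md.digest(body)) sb.append(String.format("%02x", b));
        return sb.append('"').toString();
    }

    public static void main(String[] args) throws Exception {
        // Render the response into a buffer instead of the live stream.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        buf.write("hello world".getBytes("UTF-8"));
        byte[] body = buf.toByteArray();

        // Both headers are now known before any body bytes hit the wire.
        System.out.println("ETag: " + etagFor(body));
        System.out.println("Content-Length: " + body.length);
    }
}
```

As Keyur notes in the follow-up, the trade-off is memory: every in-flight response is held in full, which is exactly the GC pressure he is trying to avoid.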
Yes, it's a tomcat question and a http question. I was asking more from the http perspective here - whether what I'm doing is http coherent or not. About writing the content to the buffer: Buffering is what I do right now and that is what I'm trying to avoid. With a large number of concurrent users, the buffering is proving to be a performance bottleneck and is causing frequent GC sweeps. --Keyur --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote: > > I'm not sure that you're on the right discussion list for this issue. With > that said, you have to set headers before you open the outgoing stream. One > way to do that is to write to a buffer (such as a ByteArrayOutputStream), > get the size, write the headers and then flushing the buffer to the > servletResponse's OutputStream. > -Solomon > > On Wed, Aug 5, 2009 at 1:43 PM, Keyur Shah <keyurva@...> wrote: > > > > > > > Tomcat by default sends chunked responses => Transfer-Encoding:chunked > > > > I'm calculating the ETag for the response dynamically as the content is > > being written to the response stream. Since I'll know the final value of the > > ETag only when the stream is closed, I indicate that using the Trailer > > header => Trailer: ETag > > > > I'm ensuring that just before the stream is closed, I set the ETag header > > => ETag: SomeETagValue > > > > However, when I look at the response on the client, the ETag header was > > never written. > > > > Is this a tomcat specific bug or am I doing something wrong? > > > > Thanks, > > Keyur > > > > > > >
You'll get HTTP experts on the rest-discuss group, but isn't the intent
of the rest-discuss group to discuss the concepts and implementation of
REST? I guess I'm lamenting that rest-discuss talks about HTTP handling
rather than some of the more subtle REST topics.

With that said, REST (and HTTP) allows you to layer "middle men" that
perform specific tasks, such as caching. Squid is a great way to boost
performance through efficient caching. Something like Squid may be able
not only to get rid of those pesky GC sweeps, but also to do core
caching that will reduce the load on Tomcat. I'm not sure about the
details of implementing ETags with Squid, but I do think that Squid
could prove useful in solving your problem.

-Solomon

On Wed, Aug 5, 2009 at 6:01 PM, Keyur Shah <keyurva@...> wrote:
> Yes, it's a tomcat question and a http question. I was asking more from
> the http perspective here - whether what I'm doing is http coherent or
> not.
>
> About writing the content to the buffer: Buffering is what I do right
> now and that is what I'm trying to avoid. With a large number of
> concurrent users, the buffering is proving to be a performance
> bottleneck and is causing frequent GC sweeps.
>
> --Keyur
>
> --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>,
> Solomon Duskis <sduskis@...> wrote:
> >
> > I'm not sure that you're on the right discussion list for this issue.
> > With that said, you have to set headers before you open the outgoing
> > stream. One way to do that is to write to a buffer (such as a
> > ByteArrayOutputStream), get the size, write the headers and then
> > flush the buffer to the servletResponse's OutputStream.
> > -Solomon
> >
> > On Wed, Aug 5, 2009 at 1:43 PM, Keyur Shah <keyurva@...> wrote:
> > >
> > > Tomcat by default sends chunked responses => Transfer-Encoding: chunked
> > >
> > > I'm calculating the ETag for the response dynamically as the content
> > > is being written to the response stream. Since I'll know the final
> > > value of the ETag only when the stream is closed, I indicate that
> > > using the Trailer header => Trailer: ETag
> > >
> > > I'm ensuring that just before the stream is closed, I set the ETag
> > > header => ETag: SomeETagValue
> > >
> > > However, when I look at the response on the client, the ETag header
> > > was never written.
> > >
> > > Is this a tomcat specific bug or am I doing something wrong?
> > >
> > > Thanks,
> > > Keyur
This may very well be a bug/limitation in Tomcat. By the way, have you considered using some other application hash for the Etag. Computing ETag based on the byte stream is not always the most efficient. Subbu On Aug 5, 2009, at 3:01 PM, Keyur Shah wrote: > Yes, it's a tomcat question and a http question. I was asking more > from the http perspective here - whether what I'm doing is http > coherent or not. > > About writing the content to the buffer: Buffering is what I do > right now and that is what I'm trying to avoid. With a large number > of concurrent users, the buffering is proving to be a performance > bottleneck and is causing frequent GC sweeps. > > --Keyur > > --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> > wrote: > > > > I'm not sure that you're on the right discussion list for this > issue. With > > that said, you have to set headers before you open the outgoing > stream. One > > way to do that is to write to a buffer (such as a > ByteArrayOutputStream), > > get the size, write the headers and then flushing the buffer to the > > servletResponse's OutputStream. > > -Solomon > > > > On Wed, Aug 5, 2009 at 1:43 PM, Keyur Shah <keyurva@...> wrote: > > > > > > > > > > > Tomcat by default sends chunked responses => Transfer- > Encoding:chunked > > > > > > I'm calculating the ETag for the response dynamically as the > content is > > > being written to the response stream. Since I'll know the final > value of the > > > ETag only when the stream is closed, I indicate that using the > Trailer > > > header => Trailer: ETag > > > > > > I'm ensuring that just before the stream is closed, I set the > ETag header > > > => ETag: SomeETagValue > > > > > > However, when I look at the response on the client, the ETag > header was > > > never written. > > > > > > Is this a tomcat specific bug or am I doing something wrong? > > > > > > Thanks, > > > Keyur > > > > > > > > > > > > > >
Keyur Shah wrote: > > > Tomcat by default sends chunked responses => Transfer-Encoding: chunked > > I'm calculating the ETag for the response dynamically as the content is > being written to the response stream. Since I'll know the final value of > the ETag only when the stream is closed, I indicate that using the > Trailer header => Trailer: ETag > > I'm ensuring that just before the stream is closed, I set the ETag > header => ETag: SomeETagValue > > However, when I look at the response on the client, the ETag header was > never written. > > Is this a tomcat specific bug or am I doing something wrong? > ... I'd call it a missing feature. Speaking of which: do you have a client that actually *reads* trailers and does the right thing with them? BR; Julian
Subbu Allamaraju wrote: > This may very well be a bug/limitation in Tomcat. By the way, have you > considered using some other application hash for the Etag. Computing > ETag based on the byte stream is not always the most efficient. It also defeats some uses. In particular it's not possible to conditionally change processing due to If-Match or If-None-Match as you don't know if the E-Tags match until it's too late.
On 6 Aug 2009, at 11:01, Jon Hanna wrote:
> Subbu Allamaraju wrote:
> > This may very well be a bug/limitation in Tomcat. By the way, have
> > you considered using some other application hash for the Etag.
> > Computing ETag based on the byte stream is not always the most
> > efficient.
>
> It also defeats some uses. In particular it's not possible to
> conditionally change processing due to If-Match or If-None-Match as
> you don't know if the E-Tags match until it's too late.

Yes, agree here. If you can't work out the etag without generating the
resource representation you may as well not support HEAD requests! The
whole point is to be able to compute etags without doing all the work of
sending the resource. Otherwise you are just saving a little bandwidth.

Justin
I keep a memo list of the hashes for future use.

- build up the representation
- gen the ETag
- add this info to the memo list

on future requests that contain If-Match or If-None-Match, check the
memo list and act accordingly.

primary gotcha for this pattern is that you need to keep the memo-list
fresh whenever a change to the underlying data occurs

mca
http://amundsen.com/blog/

On Thu, Aug 6, 2009 at 09:07, Justin Cormack <justin@...> wrote:
> On 6 Aug 2009, at 11:01, Jon Hanna wrote:
> > Subbu Allamaraju wrote:
> > > This may very well be a bug/limitation in Tomcat. By the way, have
> > > you considered using some other application hash for the Etag.
> > > Computing ETag based on the byte stream is not always the most
> > > efficient.
> >
> > It also defeats some uses. In particular it's not possible to
> > conditionally change processing due to If-Match or If-None-Match as
> > you don't know if the E-Tags match until it's too late.
>
> Yes, agree here. If you can't work out the etag without generating the
> resource representation you may as well not support HEAD requests! The
> whole point is to be able to compute etags without doing all the work
> of sending the resource. Otherwise you are just saving a little
> bandwidth.
>
> Justin
>
> ------------------------------------
>
> Yahoo! Groups Links
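The memo-list steps above can be sketched in Java along these lines (all names — EtagMemo, remember, statusForIfNoneMatch, invalidate — are invented for illustration, not from any framework):

```java
import java.util.HashMap;
import java.util.Map;

public class EtagMemo {
    private final Map<String, String> memo = new HashMap<>();

    // After building a representation and generating its ETag, record it.
    public void remember(String uri, String etag) {
        memo.put(uri, etag);
    }

    // On a conditional GET: if the client's If-None-Match matches the
    // memoized ETag, a 304 can be sent without rebuilding the body.
    public int statusForIfNoneMatch(String uri, String ifNoneMatch) {
        String known = memo.get(uri);
        return (known != null && known.equals(ifNoneMatch)) ? 304 : 200;
    }

    // The primary gotcha: whenever the underlying data changes, the
    // memo entry must be dropped (or replaced) to stay fresh.
    public void invalidate(String uri) {
        memo.remove(uri);
    }

    public static void main(String[] args) {
        EtagMemo memo = new EtagMemo();
        memo.remember("/reports/42", "\"v1\"");
        System.out.println(memo.statusForIfNoneMatch("/reports/42", "\"v1\"")); // 304
        memo.invalidate("/reports/42");
        System.out.println(memo.statusForIfNoneMatch("/reports/42", "\"v1\"")); // 200
    }
}
```

This also addresses Jon's objection: once an ETag is memoized, If-Match / If-None-Match can be evaluated before doing the work of rebuilding the representation.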
Yes. I agree with all of you. I pre-compute the ETag wherever possible: file name + last modified, a version-like column in the db, even cached object hash codes. Here I'm only looking to generate ETags for resources where it's not possible for me to compute the etag in advance. Yes, it does not save me any computation time. But if it can save me bandwidth it's still a good thing. But... as I think more about it, generating a trailer ETag does not even save me any bandwidth. So my basic premise itself was flawed. Thanks for setting me straight. --Keyur On Thu, Aug 6, 2009 at 6:07 AM, Justin Cormack <justin@... > wrote: > > On 6 Aug 2009, at 11:01, Jon Hanna wrote: > > Subbu Allamaraju wrote: >> > This may very well be a bug/limitation in Tomcat. By the way, have you >> > considered using some other application hash for the Etag. Computing >> > ETag based on the byte stream is not always the most efficient. >> >> It also defeats some uses. In particular it's not possible to >> conditionally change processing due to If-Match or If-None-Match as you >> don't know if the E-Tags match until it's too late. >> >> >> > Yes agree here. If you cant work out the etag without generating the > resource > representation you may as well not support HEAD requests! The whole point > is to be able to > compute etags without doing all the work of sending the resource. Otherwise > you are just saving a little bandwidth. > > Justin > >
On Aug 5, 2009, at 11:57 PM, Julian Reschke wrote: > > Speaking of which: do you have a client that actually *reads* trailers > and does the right thing with them? > Great point :) Subbu
Would XSLT typically be considered a part of the Code-On-Demand[1]
constraint? I'm particularly interested in its usage as compared to a
typical ESB's "transformation" capability. Something like this:

Client Request:
GET /transformations/application/atom+xml

Server Response:
<ul>
<li href="toHtml.xsl" type="text/xslt+xml" rel="html"/>
<li href="toText.xsl" type="text/xslt+xml" rel="text"/>
<li href="someweirdtransform.json" type="text/javascript" rel="custom"/>
</ul>

Client Request:
GET /toHtml.xsl
... and performs the transform locally, caches, etc...

So, code-on-demand or just another resource? Any pointers to more
detailed coverage of this constraint are much appreciated...

Thanks,
--tim

[1] - http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7
Tim Williams wrote:
> Would XSLT typically be considered a part of the Code-On-Demand[1]
> constraint? I'm particularly interested in its usage as compared to a
> typical ESB's "transformation" capability. Something like this:

I would say so. Conceptually it depends on whether or not you think of
it as "code", and sometimes it makes more sense not to, but even then
the fact remains that it's executable code and hence it is a form of COD.

> Client Request:
> GET /transformations/application/atom+xml
>
> Server Response:
> <ul>
> <li href="toHtml.xsl" type="text/xslt+xml" rel="html"/>
> <li href="toText.xsl" type="text/xslt+xml" rel="text"/>
> <li href="someweirdtransform.json" type="text/javascript" rel="custom"/>
> </ul>

What I've done in some cases here is just used an xml-stylesheet PI in
the XML. As a rule, if something was expecting atom+xml (or rdf+xml,
which is the case I've used it with) and it's been given rdf+xml then it
isn't going to perform the transformation, but a browser would if it was
served as text/xml. I served as text/xml or application/rdf+xml
according to the requested type but sent the same octet stream either
way. I never concluded in my own mind whether this was a good or bad
idea, but the interesting thing as far as REST goes is that it was an
example of COD being something that may or may not be part of REST
(since some clients executed the code and some didn't).

> So, code-on-demand or just another resource?

COD, though COD does happen from the entities for a resource.
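The xml-stylesheet PI Jon describes looks roughly like this (a sketch; the stylesheet location is invented). A browser that receives the document as text/xml will fetch and apply the transform, while a client that already understands rdf+xml simply ignores the PI — the same octet stream, code-on-demand for some clients only:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/transformations/toHtml.xsl"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!-- document content unchanged; only the processing instruction is added -->
</rdf:RDF>
```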
Hello.

I'm trying to "squeeze" a SyncML-based two-way communication (with an
external app) into our Rest infrastructure. We are not using the full
set of SyncML specifications, and we don't have a SyncML server or any
SyncML infrastructure; we are only using SyncML as a "normalization"
tool between two disparate systems.

Now, it seems to me that the SyncML protocol is the opposite of Rest, as
it is based on commands as opposed to resources. How can I make these
two scenarios compatible? Has anyone had experience with this kind of
problem?

Thanks.
On Thu, Jun 11, 2009 at 1:41 PM, object01<object01@...> wrote: > I'm trying to develop a solid understanding of the principles behind REST, and I'm wrestling with what seems like a big inconsistency. I have a feeling there's a mismatch between my semantics and the community's. > > I've seen it said over and over again that passing session IDs (typically in a cookie) is generally considered unRESTful. I've also seen it said over and over again that passing credentials along with every request is considered very RESTful. > > The reasonings are unclear to me, though. Considering the "statelessness" principle of REST, if I think of a RESTful architecture as one in which every request passes all the information needed for the server to fulfill the request, then I don't see how passing session IDs (for instance) in requests violates that principle. > > To me, a session ID is a simple alphanumeric string. Authentication credentials are (typically expressed as) a simple alphanumeric string. Both strings persist on the client. Both strings are received by the server and are compared to information held on the server in order to make decisions on the server about how to process the request, and thus the content of the strings can dramatically affect the server's response to the request. > > When approached from this way, I lose the meaning behind the distinction. What am I misunderstanding? If by authentication credentials, you mean basic/digest auth, the difference is that the meaning of auth credentials is defined by the spec and means the same thing for every resource. Where is the definition of a cookie? Does it mean the same thing for every resource? --Chuck
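Chuck's distinction can be made concrete with two otherwise identical requests (the credential and session values are invented). The Authorization header's meaning is fixed by the HTTP authentication specs and reads the same for every resource and every intermediary; the cookie's meaning exists only in one server's session table:

```http
GET /profiles/101 HTTP/1.1
Host: example.org
Authorization: Basic dG9uaTpzZWNyZXQ=

GET /profiles/101 HTTP/1.1
Host: example.org
Cookie: JSESSIONID=ab12cd34ef56
```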
António, 2009/8/10 António Mota <amsmota@...>: > I'm trying to "squeeze" a SyncML-based two-way communication (with a > external app) in our Rest infrastructure. We are not using the full set > of SyncML specifications, and we have not a SyncML server or any SyncML > infrastructure, we are only using SyncML as a "normalization" tool > between two disparate systems. > > Now, it seems to me that the SyncML protocol is the opposite of Rest, as > it is based on commands as opposed to resources. How can I make this two > scenarios compatible? Has anyone had experience with this kind of problem? Essentially what you are wanting to do is wrap up the SyncML endpoint inside a REST gateway. This gateway will expose whatever URLs you deem appropriate for the rest of the architecture to interact with in a way that doesn't turn everything to pot. It can be a useful approach if a number of components need to interact with the endpoint and if supporting the SyncML directly within each component would be an impost. Similar to how wrapping up an old mainframe or a java RMI system in a SOAP service makes it more accessible and easier to integrate with, wrapping up an old endpoint-based protocol behind a REST gateway can make its capabilities more easily accessible to the architecture. However, designing the interface to this gateway service can be difficult as you try to reinterpret a specification through a REST model. It can be as difficult and as subject to change as the development of the standard itself. You may ultimately find that if only a few components need to speak SyncML, it may just be better to support it directly. Remember, REST is not an end unto itself. It's a way of getting things done. Think first about what you need to get done, and the REST aspects should fall out pretty naturally. Benjamin.
--- In rest-discuss@yahoogroups.com, "object01" <object01@...> wrote:
> I've seen it said over and over again that passing session IDs
> (typically in a cookie) is generally considered unRESTful. I've also
> seen it said over and over again that passing credentials along with
> every request is considered very RESTful.
>
> The reasonings are unclear to me, though. Considering the
> "statelessness" principle of REST, if I think of a RESTful architecture
> as one in which every request passes all the information needed for the
> server to fulfill the request, then I don't see how passing session IDs
> (for instance) in requests violates that principle.

The answer is fairly simple: A client in REST architecture can either be
"at rest" or transition between rest states. When the client is at rest
it places no demands on the network or on servers. The client
transitions from one rest state to another by issuing a series of
requests. When the requests have all come back, the client is at rest
again.

This produces a simple, scalable architecture where servers don't have
to remember anything about clients between requests. The statelessness
constraint specifically allows servers to process one request at a time,
at a rate they can handle, without needing to consume memory or space in
a session database. Additionally, individual servers within
load-balanced or highly-available clusters are able to fail, shut down,
be purged, be upgraded, etc without interrupting the client session. The
next request is simply handled by another server, and there is no
session state that needs to be synchronised between servers to make this
happen.

Statelessness also has impacts on intermediaries. Because all required
information is present in requests, caches and security proxies are more
able to interpret and modify messages that pass through them.
Statelessness provides simplicity, visibility, scalability, and
reliability properties within the architecture.

However.
Authentication is tricky business, which often by necessity requires multiple passes. Message-based security is possible with signatures and encryption, but these are open to man-in-the-middle attacks etc. Authentication often breaches the statelessness constraint, and in my view this is not necessarily a problem in practice. Stateful authentication does not particularly affect visibility, as the message can still be understood by intermediaries. It does impact scalability, and that has to be carefully considered. Reliability may be impacted, but often this can be handled by automatically reinstating the security session in the case of a failed server. Simplicity is impacted if requests have to be directed to the same server each time, but this can often be contained to a specialised authentication service in the back-end. If session state is limited to authentication sessions only, I don't see a fundamental problem in most practical cases. Scalability is again the biggest risk area. If automatic recovery of sessions is possible, this could even be resolved to some extent by allowing servers to aggressively shed authentication sessions if space is limited. This then becomes something of a trade-off on the server side between the processing required to initiate a session and authenticate a single message and the space required to keep old sessions active. More here[1]. Benjamin [1] http://soundadvice.id.au/blog/2009/06/13/#stateless
How would you approach a REST API with dozens of interconnecting pieces
of functionality? IMHO, it's doable in theory, but probably not in
practice... yet. Am I right in that assessment? If so, what do we need
to do to get to that point? And what rule of thumb can be used to
determine if REST (as opposed to other distributed technologies) is
appropriate for a particular set of functionality features?

-Solomon
Solomon Duskis <sduskis@...> writes:
> How would you approach a REST API with dozens of interconnecting
> functionality?
Like the web, except small? ;)
> IMHO, It's doable in theory, but probably not in practice... yet. Am
> I right in that assessment?
Why wouldn't it be doable in practice?
> If so, what do we
> need to do to get to that point? If so, what rule of thumb can be used
> to determine if REST (as opposed to other distributed technologies) is
> appropriate for a particular set of functionality
> features?
If the REST constraints and benefits are acceptable for the features.
The REST constraints either require or encourage coarse-grained,
self-descriptive, resource-oriented, hypermedia-driven, loosely-coupled
interactions. If you ultimately want to build a, let's say, high-volume
trading system, those constraints will be at odds with the features. If
you ultimately control both ends of the wire (e.g., your own
programmatic client connecting to your own servers, upgraded in
lockstep), then the REST constraints might be more work than they're
worth. If you want a widely-distributed system tolerant of
intermediaries, can exploit caching, is inherently stateless (allowing
many scaling possibilities) with lazy upgrading, REST fits rather well.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
On Wed, Aug 12, 2009 at 10:50 AM, Josh Sled <jsled@...> wrote:
> Solomon Duskis <sduskis@...> writes:
>
> > How would you approach a REST API with dozens of interconnecting
> > functionality?
>
> Like the web, except small? ;)
Doesn't "small" or even "tiny" describe the scope and functionality of
the currently implemented REST APIs?
You're right that REST, as discussed by Roy Fielding, should be able to work
for data oriented APIs, but I simply haven't seen an implementation that has
dozens of interconnecting sets of functionality; why is that?
> > IMHO, It's doable in theory, but probably not in practice... yet. Am
> > I right in that assessment?
>
> Why wouldn't it be doable in practice?
>
> > If so, what do we
> > need to do to get to that point? If so, what rule of thumb can be used
> > to determine if REST (as opposed to other distributed technologies) is
> > appropriate for a particular set of functionality
> > features?
>
> If the REST constraints and benefits are acceptable for the features.
>
You can't just take the REST constraints into account. You have to take the
current sets of supporting technologies, which may not support the full set
of concepts behind the REST constraints.
> The REST constraints either require or encourage coarse-grained,
> self-descriptive, resource-oriented, hypermedia-driven, loosely-coupled
> interactions.
Agreed in theory, but I haven't seen that in practice.
> If you ultimately want to build a, let's say, high-volume
> trading system, those constraints will be at odds with the features.
agreed again.
> If you ultimately control both ends of the wire (e.g., your own
> programmatic client connecting to your own servers, upgraded in
> lockstep), then the REST constraints might be more work than they're
> worth.
I'm not sure that I'd constrain myself to owning both ends of the wire.
Isn't the primary concern for global distributed computing to have
multiple organizations independently enhancing either the client or the
server?
> If you want a widely-distributed system tolerant of
> intermediaries, can exploit caching, is inherently stateless (allowing
> many scaling possibilities) with lazy upgrading, REST fits rather well.
Let's say that I have those needs. I know that the current REST oriented
platforms can support some of these features, but I can't say that there is
any platform that can implement those features really, really well. Sure,
we have lots of well established RESTful APIs, but they more often than not
have some issues implementing the hypertext constraint effectively and are
likely to be built on proprietary technology.
Perhaps it's just the APIs that I've seen. However, I keep on talking to
experts that express that there's some disconnect between REST principles
and practice. So my question is how do we get from the ideals you describe
(and I agree are the goals of REST) into a road map that a business can
implement?
-Solomon
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
On Thu, Aug 13, 2009 at 03:20, Solomon Duskis <sduskis@...> wrote:
> You're right the REST, as discussed by Roy Fielding, should be able to work
> for data oriented APIs, but I simply haven't seen an implementation that has
> dozens of interconnecting sets of functionality; why is that?

Some do. I've created fully RESTful SOA infrastructures in a few
organisations, but of course they're mostly hidden behind firewalls and
whatnot. As to more open infrastructure, it's around and growing, like
AWS, Google and AtomPub, and lots of "Web 2.0" applications these days
have a REST API (well, more REST than RPC is growing these days). And
note that in a RESTful world, the traditional notion of what an API is
is also a bit shady.

> You can't just take the REST constraints into account. You have to take the
> current sets of supporting technologies, which may not support the full set
> of concepts behind the REST constraints.

Do you mean here the limitation of browsers to GET and POST? I guess
that is sort of true, although I suspect the world wasn't ready for
DELETE and PUT at the time. Of course, the upgrade should have happened
by now, but this will all change (and I suspect rather rapidly) as
HTML5 / XHTML2 enters the field. I'm excited! :)

>> The REST constraints either require or encourage coarse-grained,
>> self-descriptive, resource-oriented, hypermedia-driven, loosely-coupled
>> interactions.
>
> Agreed in theory, but I haven't seen that in practice.

What do you mean, "in practice"? Is there some part of your daily online
life where you don't interact with a system that might be doing some
proper REST magic? Any Google, Yahoo or Amazon service or system, for
example?

>> If you want a widely-distributed system tolerant of
>> intermediaries, can exploit caching, is inherently stateless (allowing
>> many scaling possibilities) with lazy upgrading, REST fits rather well.
>
> Let's say that I have those needs. I know that the current REST oriented
> platforms can support some of these features, but I can't say that there is
> any platform that can implement those features really, really well.

Ok, so what are they missing or not doing well? What does "well" mean?

> However, I keep on talking to
> experts that express that there's some disconnect between REST principles
> and practice.

Who are you talking to? :) I think most of us would think that a lot of
claims of being RESTful aren't actually RESTful, but there's plenty of
stuff out there (and growing every day as people understand the
principles more) that is. A suggestion would be to search the net for
systems with AtomPub support, and Voila! you got lots of examples of
well-done REST.

> So my question is how do we get from the ideals you describe
> (and I agree are the goals of REST) into a road map that a business can
> implement?

Start by implementing AtomPub, and the rest will follow. It has a basic
model which works for a lot of things (no, it's not just about blogging)
and is extendable. Google use it, others use it, and it's RESTful. Of
course, if you want to, you can abuse any "right" technology to make it
go "wrong", so there are no guarantees anywhere. But that's just life
and organic growth. :)

Regards,

Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
I have an application managing user profiles. It has the following protocol:
GET /profiles - returns a list of user profiles
POST /profiles - adds a new user profile
GET /profiles/{userid} - gets the user profile of the specified user
PUT /profiles/{userid} - update the user profile (non-browser)
POST /profiles/{userid} - update the user profile (browser)
etc.
I'd like it to work such that when a user does a GET
/profiles/{userid}, the html representation that is returned either
contains a link to a form to be used to edit the profile data, or
directly contains the form for editing. For the former the form could
be located at /profiles/{userid}?editform. In both cases, the form
would be POSTed to /profiles/{userid}
I don't feel strongly about either option and expect usability
requirements or implementation details to guide me here (though I am
interested in opinions if somebody thinks there's a better way).
My real question has to do with providing a mechanism to create new
users. I'm really not sure what a good pattern for this might be. I
can think of three possibilities:
1. GET /profiles/newuser to get the form, and POST to /profiles
This one kind of messes with the implied namespace of
/profiles/{userid}, in that newuser isn't really a userid
2. GET /profiles/ to get the form and POST to /profiles
I think this is too subtle a distinction and probably has the
collection semantics wrong
3. GET /profiles?newuser and POST to /profiles
I don't have any real objection to this one at this point.
Option 3 seems like a winner, but I'm curious as to what other options
there might be. Thoughts?
--Chuck
If GET /profiles is returning a list of users, this seems like a
natural place from which you would want to add a new user. Why not
have the (HTML) representation for /profiles contain a form for doing
the POST to /profiles?
I think this is compliant with the HATEOAS principle, for example.
Jon
........
Jon Moore
Comcast Interactive Media
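Jon's suggestion -- embedding the "create" form directly in the
collection's representation -- can be sketched as a tiny renderer. This
is a hypothetical helper, not code from the thread; the field names are
assumptions:

```python
# Hypothetical sketch: the HTML representation of /profiles lists each
# profile as a link and embeds the "add a new user" form inline, so a
# client discovers profile creation from hypertext rather than from a
# URI convention known out of band.
def render_profiles(profiles):
    """profiles: dict mapping userid -> display name."""
    items = "\n".join(
        f'  <li><a href="/profiles/{uid}">{name}</a></li>'
        for uid, name in profiles.items()
    )
    return f"""<html><body>
<ul>
{items}
</ul>
<form action="/profiles" method="post">
  <input name="firstName"/> <input name="lastName"/>
  <input type="submit" value="Add profile"/>
</form>
</body></html>"""
```

Any HTML-aware client now learns both the creation URI and the expected
fields from the representation itself, which is the HATEOAS point being
made here.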
On Aug 13, 2009, at 11:21 PM, "Chuck Hinson" <chuck.hinson@...>
wrote:
> My real question has to do with providing a mechanism to create new
> users. I'm really not sure what a good pattern for this might be.
> ...
I'd go with a modified option 1 for the new-user form, i.e.:
GET /profiles/new
And for edits:
GET /profiles/{profile-id}/edit
-L
On Fri, Aug 14, 2009 at 7:06 AM, Moore, Jonathan
(CIM)<jonathan_moore@...> wrote:
>
> If GET /profiles is returning a list of users, this seems like a natural
> place from which you would want to add a new user. Why not have the (HTML)
> representation for /profiles contain a form for doing the POST to
> /profiles?
>
> I think this is compliant with the HATEOAS principle, for example.
>
> Jon
> ...
On Fri, Aug 14, 2009 at 4:19 AM, Chuck Hinson<chuck.hinson@...> wrote:
> My real question has to do with providing a mechanism to create new
> users. I'm really not sure what a good pattern for this might be. I
> can think of three possibilities:
>
> 1. GET /profiles/newuser to get the form, and POST to /profiles
> This one kind of messes with the implied namespace of
> /profiles/{userid}, in that newuser isn't really a userid
>
> 2. GET /profiles/ to get the form and POST to /profiles
> I think this is too subtle a distinction and probably has the
> collection semantics wrong
>
> 3. GET /profiles?newuser and POST to /profiles
> I don't have any real objection to this one at this point.
>
>
> Option 3 seems like a winner, but I'm curious as to what other options
> there might be. Thoughts?
I use
GET /forms/newuser and POST to /profiles
Then in theory I could POST to /forms to add a new form to the system.
See http://open.vocab.org/forms for an example.
Ian
Alexander,

Agreed on all of your points, but I still think that there is something
key missing. Even if you have converted all of your services to AtomPub
and they are all individually RESTful, an application that wishes to use
a set of them could benefit from being hypermedia driven, couldn't it?
E.g. a sort of RESTful orchestration system?

Let's use Ajax as an analogy -- you have various sets of resources
exposed (typically Ajax apps just use JSON -- not RESTful; but let's say
that AtomPub and a JavaScript AtomPub client are used), and in addition
you have the HTML + JavaScript that ultimately pull together those
resources into a browser application. You can think of the HTML as a
sort of orchestration layer that uses the various services together to
some end. The client (your browser) is not only decoupled from the
underlying resources, but also from the application itself via
hypermedia.

If you aren't developing a UI but a machine-to-machine application, what
fills the orchestration role? You could just write code in your language
of choice, in which case you do get some decoupling from the individual
AtomPub services (assuming you interact with them using a generic
AtomPub client), but in a way you are coupled to this particular
combination of services. See what I mean? Ideally some sort of
hypermedia could play a role similar to the one HTML+JS plays in the
Ajax scenario. Then the client is truly decoupled from the set of
services in addition to each individual service.

Don't have a solution here -- I'm just trying to restate the problem.

Andrew

--- In rest-discuss@yahoogroups.com, Alexander Johannesen
<alexander.johannesen@...> wrote:
>
> On Thu, Aug 13, 2009 at 03:20, Solomon Duskis<sduskis@...> wrote:
> > You're right that REST, as discussed by Roy Fielding, should be able to work
> > for data oriented APIs, but I simply haven't seen an implementation that has
> > dozens of interconnecting sets of functionality; why is that?
>
> Some do.
> I've created fully RESTful SOA infrastructures in a few organisations,
> but of course they're mostly hidden behind firewalls and whatnot. As to
> more open infrastructure, it's around and growing, like AWS, Google and
> AtomPub, and lots of "Web 2.0" applications these days have a REST API
> (well, more REST than RPC is growing these days). And note that in a
> RESTful world, the traditional notion of what an API is is also a bit
> shady.
>
> > You can't just take the REST constraints into account. You have to take the
> > current sets of supporting technologies, which may not support the full set
> > of concepts behind the REST constraints.
>
> Do you here mean the limitation of browsers to GET and POST? I guess
> that is sort of true, although I suspect the world wasn't ready for
> DELETE and PUT at the time. Of course, the upgrade should have happened
> by now, but this will all change (and I suspect rather rapidly) as
> HTML5 / XHTML2 enters the field. I'm excited! :)
>
> >> The REST constraints either require or encourage coarse-grained,
> >> self-descriptive, resource-oriented, hypermedia-driven, loosely-coupled
> >> interactions.
> >
> > Agreed in theory, but I haven't seen that in practice.
>
> What do you mean, "in practice"? Is there some part of your daily
> online life where you don't interact with a system that might be doing
> some proper REST magic? Any Google, Yahoo or Amazon service or system,
> for example?
>
> >> If you want a widely-distributed system tolerant of
> >> intermediaries, can exploit caching, is inherently stateless (allowing
> >> many scaling possibilities) with lazy upgrading, REST fits rather well.
> >
> > Let's say that I have those needs. I know that the current REST oriented
> > platforms can support some of these features, but I can't say that there is
> > any platform that can implement those features really, really well.
>
> Ok, so what are they missing or not doing well? What does "well" mean?
Hey there,

I am totally new to REST, and I am looking at this API from the SUSE
Studio project (which is awesome and I recommend checking it out).

http://susestudio.com/help/api/v1

I fully understand the API and how it works, but I'm just wondering how
well (philosophically) some of the API methods fit the REST principles.
For instance:

POST /api/v1/user/appliances/<id>/cmd/remove_package?name=<name>

is used to *remove* a package from a virtual appliance. Wouldn't a more
RESTful approach be something like

DELETE /user/appliances/<id>/packages/<name>

Any comments that will help my understanding of REST principles are
greatly appreciated. Also, I'd appreciate help understanding why they
would have chosen one way over the other. The site is built on Ruby on
Rails (according to their Twitter stream).
On Wed, Aug 12, 2009 at 3:31 AM, thedesignofsoftware <online@...> wrote:
> Wouldn't a more restful approach be something like
>
> DELETE /user/appliances/<id>/packages/<name>

Yes.

Sam
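The contrast between the two styles can be made mechanical. A small
illustrative sketch (the paths are the ones from the thread; the helper
function is hypothetical):

```python
def to_resource_style(appliance_id, package):
    """Contrast the RPC-style 'remove_package' command with the
    resource-oriented alternative, where the package's membership in the
    appliance is itself the addressable resource being DELETEd."""
    rpc = ("POST",
           f"/api/v1/user/appliances/{appliance_id}/cmd/remove_package"
           f"?name={package}")
    restful = ("DELETE",
               f"/user/appliances/{appliance_id}/packages/{package}")
    return rpc, restful
```

In the RPC form the verb lives in the URI and the method carries no
meaning; in the resource form the uniform interface (DELETE) carries the
meaning and the URI only names the thing acted on.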
This may be a vague question bordering on absurd, but I'd better ask it.

When the content length is not known, chunked encoding is the obvious
thing to do. But say I am streaming files, so I know the content length
beforehand. Is not setting a Content-Length header still a good idea
even in this case?

Both approaches allow for persistent connections. But by setting the
Content-Length header do I lose the efficient server memory utilization
provided by chunked transfers? Will certain web servers, on seeing a
Content-Length header, not stream the content until it has been fully
loaded in memory?

I hope the answer is no, so I can happily set Content-Length for static
content and use chunked encoding for dynamic.

Thanks,
Keyur
If you are using chunked transfer encoding, there is no need to set the
Content-Length. AFAIK, most HTTP 1.1 aware libraries will ignore the
Content-Length header when they see chunked encoding. But adding a
correct Content-Length header should not hurt. The key is to set the
correct value for Content-Length, as I saw at least one instance of a
client choking on an incorrect Content-Length even when the response was
using chunked encoding.

Subbu
---
http://subbu.org
http://www.restful-webservices-cookbook.org

On Aug 18, 2009, at 11:49 AM, Keyur Shah wrote:
> But say I am streaming files so I know the content length before
> hand. Is not setting a content-length header still a good idea even
> in this case?
> ...
Subbu Allamaraju wrote:
> If you are using chunked transfer encoding, there is no need to set
> the Content-Length. AFAIK, most HTTP 1.1 aware libraries will ignore
> the Content-Length header when they see chunked encoding. But adding a
> correct Content-Length header should not hurt.
> ...

This is an HTTP question, so the answer should be in the spec:

"If a Content-Length header field (Section 14.13) is present, its
decimal value in OCTETs represents both the entity-length and the
transfer-length. The Content-Length header field MUST NOT be sent if
these two lengths are different (i.e., if a Transfer-Encoding header
field is present). If a message is received with both a
Transfer-Encoding header field and a Content-Length header field, the
latter MUST be ignored." --
<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.4.4>

BR, Julian
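The reason the spec can forbid Content-Length alongside chunked encoding
is that the chunked format carries its own framing: each chunk is its
size in hex, CRLF, the bytes, CRLF, and a zero-size chunk terminates the
body. A minimal encoder sketch (trailers omitted for brevity):

```python
def chunk_encode(parts):
    """Encode an iterable of byte strings as an HTTP/1.1 chunked body.
    Each chunk is: <size in hex> CRLF <data> CRLF; a zero-size chunk
    followed by an empty trailer section ends the body."""
    out = bytearray()
    for part in parts:
        if part:  # a zero-length chunk would terminate the body early
            out += b"%x\r\n" % len(part)
            out += part + b"\r\n"
    out += b"0\r\n\r\n"  # last-chunk marker, then empty trailer section
    return bytes(out)
```

Because the zero-size chunk marks the end, the sender never needs to
know the total length up front, which is exactly why a transfer-length
from Content-Length would be redundant (and potentially contradictory).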
Interesting... So (unless a Content-Disposition header is set) is it
always a better idea then to never set Content-Length?

On Wed, Aug 19, 2009 at 12:10 AM, Julian Reschke <julian.reschke@...> wrote:
> This is an HTTP question, so the answer should be in the spec:
>
> "If a Content-Length header field (Section 14.13) is present, its decimal
> value in OCTETs represents both the entity-length and the transfer-length.
> The Content-Length header field MUST NOT be sent if these two lengths are
> different (i.e., if a Transfer-Encoding header field is present). [...]" --
> <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.4.4>
>
> BR, Julian
Keyur Shah wrote:
> Interesting... So (unless a Content-Disposition header is set) is it
> always a better idea then to never set Content-Length?
> ...

What does this have to do with Content-Disposition?

BR, Julian
thedesignofsoftware wrote:
> Wouldn't a more restful approach be something like
>
> DELETE /user/appliances/<id>/packages/<name>

Yes. This would do fine as well, albeit cruftier:

DELETE /api/v1/user/appliances/<id>/cmd/remove_package?name=<name>

The point being: what's important wrt REST is that DELETE is used for
delete, not POST. If the server framework or programming idioms behind
the interface are so limited that markers in the URL are needed to
dispatch to the code, well, it's just fugly rather than being a break
with the architecture.

Of course, you have to wonder what happens here:

GET /api/v1/user/appliances/<id>/cmd/remove_package?name=<name>

;)

Bill
I am trying to collect business rather than technical cases for REST /
resource oriented rather than "service oriented" architectures. If
anyone has anything, I would be interested. I started writing some
thoughts at the link below, as a first pass based on some recent
experiences.

http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/

thanks

Justin
thedesignofsoftware wrote:
> POST /api/v1/user/appliances/<id>/cmd/remove_package?name=<name>
>
> Is used to *remove* a package from a virtual appliance.
>
> Wouldn't a more restful approach be something like
>
> DELETE /user/appliances/<id>/packages/<name>
> ...

Yeah, not very RESTful, because their URIs aren't resource oriented (and
are RPCish), but per earlier discussions on DELETE vs. POST, a

POST /user/appliances/<id>/packages/<name>/remove_package

is just as viable as

DELETE /user/appliances/<id>/packages/<name>

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Hi,

while working on our new RESTful web service and discovering the need
for versioning (mainly preventing concurrent changes), I wondered
whether it was acceptable to make ETags (i.e. the presence of an
If-Match header for PUT, DELETE and maybe POST, where applicable)
mandatory. In RFC 2616 I could not find any evidence that this is
allowed or forbidden. But I might have overlooked it.

If you consider it acceptable: which status code would you return to a
client that did not provide an appropriate header? Certainly one of the
4xx errors, but I'm not sure which to choose in particular.

Regards,
Julian Reich
Julian,

On Aug 27, 2009, at 11:19 AM, julianbreich wrote:
> If you consider it acceptable: which status code would you return to
> a client that did not provide an appropriate header? Well certainly
> one of the 4xx errors, but I'm not sure which to choose in particular.

Incidentally, I asked the same thing a while ago and it was suggested to
use 412 Precondition Failed + explanatory text. Sounded like the best
idea.

Jan
Justin,

On Aug 26, 2009, at 4:55 PM, Justin Cormack wrote:
> I am trying to collect business rather than technical cases for REST/
> resource oriented rather than "service oriented" architectures.
> ...

My two killer business cases for REST are:

- The use of REST brings the decentralization aspects of the Web into
the enterprise world and allows designers and developers of networked
systems to create and evolve their components without the resource- and
time-consuming effort of bringing them into a single room to discuss
APIs. In geographically distributed organizations this might sometimes
even be impossible.

- The use of REST facilitates fragmented change. It deliberately
provides room for evolution (e.g. format extensions) that can bypass the
main organizational control. Evolutions that turn out to be valuable can
flow back into the main line while others silently die. No harm to
ongoing communication between systems is done by either of those.

Organizations grow in size and geographically, and IMO it makes perfect
sense to take a technology that has seen extraordinary success over a
decade and apply it to today's networked enterprise IT.

Jan
Stefan,
a bit late, but here is another suggestion to approach this:
Resources have semantics by representing certain 'things' of the
domain space (e.g. a lock). These semantics include what happens when
you interact through HTTP with them (e.g. "PUT /lock" creates the lock
or "DELETE /lock" deletes the lock). This is the result of turning
specialized APIs into a uniform API.
In this sense the resources have a type and clients use this
information to achieve the goals that constitute a given RESTful API.
The important thing in my opinion is that the resources do not have
these types out of themselves but that what matters is by what link
the client discovered them. E.g. given <link rel="lock" href="/locks/344"/>,
a client could know that it can use /locks/344 to establish a lock on
the link source resource (by way of the definition of the link
relation). For the moment the client will think of /locks/344 as being
'a lock'.
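The discovery step this describes -- the client learning a resource's
"type" from the relation of the link that led to it -- can be sketched
as a tiny client helper (hypothetical representation and helper; a real
client would use a proper XML/HTML parser):

```python
import re

def find_rel(representation, rel):
    """Return the href of the first <link rel="..."> whose relation
    matches `rel`, or None. A regex keeps the sketch short; the point is
    that the client dispatches on the link relation, not on the URI."""
    pattern = rf'<link\s+rel="{re.escape(rel)}"\s+href="([^"]+)"'
    m = re.search(pattern, representation)
    return m.group(1) if m else None
```

The client never hard-codes /locks/344 or even the /locks path; it only
knows the "lock" relation, so the server is free to mint any URI it
likes.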
Likewise, API specifications will use type-like language to describe
how the API goals are achieved. Atom Pub for example writes:
"4.2 Documents and Resource Classification
A Resource whose IRI is listed in a Collection is called a Member
Resource.
[...]
"
I do not see how this notion of 'type' could be avoided.
Jan
On Sep 2, 2008, at 1:41 AM, Stefan Tilkov wrote:
> What do you call the concept of "classes" or "types" of resources in
> your RESTful designs? E.g. when you decide to turn each "customer"
> into its own identifiable resource - http://example.com/customers/1234
> - what does http://example.com/customers/{id} describe? Both "resource
> class" and "resource type" would work, but don't seem really
> convincing.
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
Jan Algermissen wrote:
> On Aug 27, 2009, at 11:19 AM, julianbreich wrote:
> > If you consider it acceptable: which status code would you return to
> > a client that did not provide an appropriate header? ...
>
> Incidently I asked the same thing a while ago and it has been
> suggested to use 412 Precondition Failed + explanatory text.
>
> Sounded like the best idea.
> ...

Nope (I think):

"The precondition given in one or more of the request-header fields
evaluated to false when it was tested on the server. This response code
allows the client to place preconditions on the current resource
metainformation (header field data) and thus prevent the requested
method from being applied to a resource other than the one intended." --
<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.4.13>

Note the first sentence. My recommendation would be 403.

BR, Julian
On Aug 28, 2009, at 3:52 AM, Julian Reschke wrote:
> My recommendation would be 403.

Oh, never thought of that one. It's better, I agree. Thanks.

Jan
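The policy the thread converges on -- 412 when a supplied If-Match does
not match the current ETag, 403 (with explanatory text) when the header
is missing entirely -- can be sketched as a server-side guard. This is a
hypothetical helper reflecting the thread's conclusion, not a standard
API; a production check would also handle `If-Match: *` and
comma-separated entity-tag lists:

```python
def check_update_precondition(headers, current_etag):
    """Return an HTTP status to short-circuit with, or None to proceed.
    Policy from the thread: If-Match is mandatory for PUT/DELETE."""
    if_match = headers.get("If-Match")
    if if_match is None:
        return 403  # forbidden: this service requires a precondition
    if if_match != current_etag:
        return 412  # precondition failed: stale ETag, lost-update averted
    return None  # ETag matches; apply the PUT/DELETE
```

The 412 branch is what actually prevents the concurrent-change problem
the original question was about; the 403 branch merely enforces that
clients participate.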
For me the business case for REST is around client/server coupling and
maintenance effort.

My experiences with SOAP APIs on the Microsoft platform were that if you
sneezed on the server API that defined the contract, you had to
recompile and redeploy the clients. And then there is versioning. In the
WCF world versioning is a potential nightmare, with changing URLs and
XML namespaces on service contracts, message contracts and data
contracts.

I believe that REST's use of hypermedia and conneg makes maintenance,
deployment and versioning much easier, which translates to lower costs
to the business in the maintenance phase of an application.

Darrel

On Wed, Aug 26, 2009 at 4:55 PM, Justin Cormack<justin@...> wrote:
> I am trying to collect business rather than technical cases for REST/
> resource oriented rather than "service oriented" architectures.
> ...
Darrel Miller wrote:
> [snip] I believe that REST's use of hypermedia and conneg make maintenance,
> deployment and versioning much easier, which translates to lower costs
> to the business in the maintenance phase of an application.

+1. Also, I think interoperability is much easier to achieve with REST
because of the ubiquity of HTTP. This has a huge positive side-effect
for integration as well. My gut tells me that SOA governance could
really reap the benefits of a RESTful architecture as well, but I don't
have any details yet on how.

All this plus Darrel's comment on the fragility of SOAP stacks sold me
on the idea of REST, which is why I've been really pushing it hard at
Red Hat and all our middleware projects.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Bill Burke wrote:
> +1. Also, I think interoperability is much easier to achieve with REST
> because the ubiquity of HTTP. This has a huge positive side-effect for
> integration as well. My gut tells me that SOA Governance could really
> reap the benefits of a RESTful architecture as well, but I don't have
> any details yet on how.
> ...

I've heard some skepticism about the reliability of RESTful HTTP conneg
"in practice", because of the potential for intermediaries (you may have
no control over) to interfere with control data in the transfers.

- Mike
Are you suggesting that there may be intermediaries that change the
Accept HTTP header?

On Fri, Aug 28, 2009 at 11:58 AM, Mike Kelly<mike@...> wrote:
> I've heard some skepticism about the reliability of RESTful HTTP conneg
> "in practice", because of the potential for intermediaries (you may have
> no control over) to interfere with control data in the transfers.
>
> - Mike
I meant conneg to include all control data within a request -- Accept*,
ETag, custom headers, etc. I don't necessarily agree with the
skepticism; it's just something I have encountered in discussion.

- Mike

Darrel Miller wrote:
> Are you suggesting that there may be intermediaries that change the
> Accept http header?
> ...
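For concreteness, the server-side half of the conneg under discussion
can be sketched as a minimal Accept resolver. This is an illustrative
helper only; it ignores q-values and partial wildcards, which a real
implementation (per RFC 2616 section 14.1) must honor:

```python
def negotiate(accept_header, available):
    """Return the first media type in `available` (in client preference
    order) acceptable per `accept_header`, or None for a 406 response.
    Simplification: q-values are stripped, only */* is recognized as a
    wildcard."""
    wanted = [part.split(";")[0].strip()
              for part in accept_header.split(",")]
    if "*/*" in wanted:
        return available[0]  # anything goes; serve the server's default
    for media_type in wanted:
        if media_type in available:
            return media_type
    return None  # nothing acceptable -> 406 Not Acceptable
```

The skepticism above is about intermediaries rewriting or dropping the
Accept header before it reaches code like this, which would silently
change which branch is taken.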
Justin,

2009/8/27 Justin Cormack <justin@...>:
> I am trying to collect business rather than technical cases for REST/
> resource oriented rather than "service oriented" architectures.
> ...

Many of the business reasons for applying REST are the same as those for
applying SOA. REST, in a way, is SOA on steroids. In Principles of
Service Design, Thomas Erl describes seven SOA objectives and eight SOA
principles. The objectives are as follows:

Increased Intrinsic Interoperability
* Made better with a uniform interface - interoperability stops being an
"easy to plug together at design time" thing and becomes an "easy to
plug together at run-time" thing.

Increased Federation
* This is not an explicit target of REST, to get everyone using the same
service. However, it is compatible with REST. It also occurs in practice
on the Web through market forces.
* While SOA emphasises reuse of services, REST emphasises reuse of
components (including services), media types, and connectors.

Increased Vendor Diversity Options
* More or less the same for SOA and REST; however, REST is probably
still less mature in the enterprise space. Time will change this.

Increased Business and Technology Alignment
* i.e., we build services that the business wants, and can change in
pace with business objectives.
* Again, REST doesn't particularly target specific valuable services,
but is compatible with the notion.
* REST is more capable of incremental upgrade than a classical SOA,
especially "dynamic evolution".
Increased ROI
* Classically realised in SOA by reuse of services, which again is
compatible with REST.
* Again, a uniform interface produces wins on this front when compared
to classical SOA.

Increased Organizational Agility
* Classically realised by being able to build new applications quickly
by reusing existing services.
* Again, compatible with REST, and again a uniform interface produces
improved performance.

Reduced IT Burden
* Fewer legacy systems doing more means less maintenance and related
support activities - again compatible with REST.
* REST makes maintenance easier with its focus on dynamic evolvability.

By my count REST provides benefits towards at least four of these
business objectives over a classical SOA, and when combined with some of
the principles of SOA I would only count Increased Vendor Diversity
Options as a potential weakness from a business perspective. I don't
think you need to make an argument against SOA to make an argument for
REST. REST is a natural candidate for the next generation of SOA, or at
least for being part of the mix.

To my mind, there is a significant amount of common ground between the
principles and business objectives of SOA and the desirable properties
and constraints of REST. Both provide guidance in areas the other
overlooks. For example, REST provides ample guidance on the design of
interfaces between components. SOA provides ample guidance on how to
construct an inventory of services that produce value for a business.
They occasionally talk at cross purposes, and practices on the ground
differ significantly; however, I think the two can work together
surprisingly well[1].

Benjamin.

[1] http://soabooks.com/book.asp?book=soa_rest&page=overview
Stefan,
On Sep 2, 2008, at 1:41 AM, Stefan Tilkov wrote:
> What do you call the concept of "classes" or "types" of resources in
> your RESTful designs? E.g. when you decide to turn each "customer"
> into its own identifiable resource - http://example.com/customers/1234
> - what does http://example.com/customers/{id} describe? Both "resource
> class" and "resource type" would work, but don't seem really
> convincing.
Resources all have the same type in a formal sense. The same methods
are legal, and the same responses can be expected. The media types
used are standard throughout the architecture. However, different
resources have different semantics.
The discovery of semantics from a client perspective always comes from
context. The URL might come from a file on disk, with associated
semantics such as "home page" or "banking site". In a hypermedia
system the semantics will be encoded into representations returned
from other resources via hyperlinks. Again the context performs the
function of supplying semantics, often semantics relative to these
other resources. The uniform interface constraint requires that the
semantics be either human/AI-readable and encoded into free-form
content or machine-readable and encoded into some standard somewhere
that says what semantics can be implied from the context of the link.
For example, HTML contains "a" elements with supporting human-readable
semantics. Atom contains links with attributes like rel="self" to
indicate the semantics that the client can expect from the linked
resource. These standard machine-readable context semantics play some
of the role in the architecture that a service contract might have
played in a classical SOA.
The fact that you are talking about a customer resource should
indicate that the client is aware of these "customer" semantics, and
more: They should be more or less aware of which customer the resource
is talking about, or at least which customer in relation to some other
resource. It might be the "joe blo" customer, or the "customer for
invoice 1234". Those are the semantics that the client should have
in-hand in relation to the resource.
From a client or a server perspective I would be perfectly happy
talking about a "customer resource". That is at least the "kind" of resource
it is... but perhaps even kind is not the right term. There is an
element of classification to the concept, but primarily we are talking
about the semantics and therefore what kind of transactions this
resource might become involved with as compared to other resources. I
think that if we were to get formal we might talk about type
describing the interface of the resource, which of course is standard
across all resources. All resources have the same type. On the other
hand, perhaps they do not have the same class. In the same sense that
several different Java classes might implement the same interface,
perhaps we could talk about different classes of resource implementing
the same uniform interface.
Still, I guess class doesn't ring true to me either on a gut level...
and formality may best be avoided :) In that case we might again fall
back to a less formal "kind" terminology, or similar.
Benjamin.
On Aug 28, 2009, at 10:25 PM, Benjamin Carlyle wrote:
> Stefan,
>
> On Sep 2, 2008, at 1:41 AM, Stefan Tilkov wrote:
>> What do you call the concept of "classes" or "types" of resources in
>> your RESTful designs? E.g. when you decide to turn each "customer"
>> into its own identifiable resource - http://example.com/customers/
>> 1234
>> - what does http://example.com/customers/{id} describe? Both
>> "resource
>> class" and "resource type" would work, but don't seem really
>> convincing.
>
> Resources all have the same type in a formal sense. The same methods
> are legal, and the same responses can be expected. The media types
> used are standard throughout the architecture. However, different
> resources have different semantics.
>
> The discovery of semantics from a client perspective always comes from
> context. The URL might come from a file on disk, with associated
> semantics such as "home page" or "banking site". In a hypermedia
> system the semantics will be encoded into representations returned
> from other resources via hyperlinks. Again the context performs the
> function of supplying semantics, often semantics relative to these
> other resources. The uniform interface constraint requires that the
> semantics be either human/AI-readable and encoded into free-form
> content or machine-readable and encoded into some standard somewhere
> that says what semantics can be implied from the context of the link.
> For example, HTML contains "a" elements with supporting human-readable
> semantics. Atom contains links with attributes like rel="self" to
> indicate the semantics that the client can expect from the linked
> resource. These standard machine-readable context semantics play some
> of the role in the architecture that a service contract might have
> played in a classical SOA.
>
> The fact that you are talking about a customer resource should
> indicate that the client is aware of these "customer" semantics, and
> more: They should be more or less aware of which customer the resource
> is talking about, or at least which customer in relation to some other
> resource. It might be the "joe blo" customer, or the "customer for
> invoice 1234". Those are the semantics that the client should have
> in-hand in relation to the resource.
>
> From a client or a server perspective I would be perfectly happy
> talking about a "customer resource". That is at least the "kind" of resource
> it is... but perhaps even kind is not the right term.
Another possible phrase to get rid of "type" or "kind" is "client
expectations". The specification of the linking semantics causes
the client to have certain expectations about the effect of interactions
with a given resource. These expectations constitute what is
often called "resource semantics".
A related set of resource semantics (discoverable at run time
through hypermedia) constitutes a "REST API". This is probably
the reason why it is common to document REST APIs by
listing resource "kinds".
----
What I am currently trying to get my head around is this:
When viewing a REST API as essentially a set of link semantics how
can we version such APIs? And do we need to version them at all?
I looked at the Atom Publishing Protocol and it does not say that it
is a particular version. Suppose we'd add another top level document
type that brings in new capabilities - would that lead to APP 2.0? And
how would one communicate this to clients?
Jan
On 29 Aug 2009, at 03:09, Benjamin Carlyle wrote: > > Benjamin. > [1] http://soabooks.com/book.asp?book=soa_rest&page=overview That book looks interesting, I have signed up to be notified when it is available. Thanks for all your responses, which I have summarized here http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise-followups/ Justin
There are a number of ways to spin the benefits of any architecture (and we have seen them all), but the key benefit that matters most is the ubiquity of HTTP. Subbu On Aug 26, 2009, at 1:55 PM, Justin Cormack wrote: > > I am trying to collect business rather than technical cases for REST/ > resource oriented rather than "service oriented" architectures. If > anyone has anything I would be interested. I started writing some > thoughts at the link below, as a first pass based on some recent > experiences. > > http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/ > > thanks > > Justin >
I was thinking more about this after JBoss World last week. I think the message needs to be *REAL* simple. Something like: "SOAP has failed miserably as an interoperable, cross-platform protocol. REST has proven otherwise." I heard this over and over again last week at our conference. From SOAP users and those customers that have started to define RESTful interfaces. Subbu Allamaraju wrote: > > > There are a number of ways to spin the benefits of any architecture > (and we have seen them all), but the key benefit that matters most is > the ubiquity of HTTP. > > Subbu > > On Aug 26, 2009, at 1:55 PM, Justin Cormack wrote: > > > > > I am trying to collect business rather than technical cases for REST/ > > resource oriented rather than "service oriented" architectures. If > > anyone has anything I would be interested. I started writing some > > thoughts at the link below, as a first pass based on some recent > > experiences. > > > > > http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/ > <http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/> > > > > thanks > > > > Justin > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Hi everyone, I just finished reading O'Reilly's "Restful web services" book, and I am in the middle of designing an upcoming web service. For this system, the server is handling two types of transactions (jobs) on behalf of multiple clients. For each new job, the client first performs a POST to the URI that represents the job type. The server creates a job number and returns the full URI to the client. The two job types are virtually identical, except that one is stored permanently on the server. client => POST /jobtype1 server => 201 Created + http://jobserver/jobtype1/1234 (where 1234 represents the job number) From here, the client PUTs various attributes of the job to the server, (PUT /jobtype1/1234/attribute1, PUT /jobtype1/1234/attribute2), until it is ready for the server to process the job. There is a predefined set of attributes of which the client is aware, so PUT is acceptable. I am then considering whether the client should GET /jobtype1/1234/result , at which point the server will hold the request open and eventually return the result along with 200 OK. Or whether the client should POST to /jobtype1/results with the job number included in the entity body. The server would return 202 Accepted along with the result URI (http://jobserver/jobtype1/1234/result). The client would then periodically GET the result URI until successful. Any advice on this job result pattern would be greatly appreciated. Second, I am concerned that splitting jobtype1 and jobtype2 into different 'factory' resources might be unnecessary and cause additional repetition during coding, but I'm not sure. Perhaps it would be better to simply POST to a /job URI and include a POST parameter to indicate the job type? The job types are functionally very similar, except that they are stored differently on the server. 
However, in the business domain, it is *very* important that they are considered separate as there are major implications to the fact that jobs of one type are allowed to be stored, whereas jobs of the second type must never be stored beyond the computation and delivery of results. Thanks for reading over my questions, I really appreciate it.
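The 202-Accepted-plus-polling variant of the result pattern described above can be sketched as a small client loop. This is a hedged sketch, not a definitive design: the `fetch` callable, the status-code handling, and the idea of polling the result URI returned by the 202 response are assumptions based on the question.

```python
import time

def poll_for_result(fetch, interval=0.0, max_attempts=5):
    """Poll a job's result URI until the job finishes.

    `fetch` is any callable returning (status_code, body); in a real
    client it would wrap an HTTP GET of the result URI returned
    alongside the 202 Accepted response.
    """
    for _ in range(max_attempts):
        status, body = fetch()
        if status == 200:          # job finished; result is in the body
            return body
        if status != 202:          # anything else is treated as an error
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(interval)       # still processing; wait and retry
    raise TimeoutError("job did not complete in time")

# Simulated server: still processing twice, then the result arrives.
responses = iter([(202, None), (202, None), (200, "result-data")])
print(poll_for_result(lambda: next(responses)))  # result-data
```

In a real client, `interval` would be nonzero (or taken from a Retry-After header) to avoid hammering the server.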
Hi thedesignofsoftware, On Sep 8, 2009, at 8:18 AM, thedesignofsoftware wrote: > Hi everyone, > > I just finished reading O'Reilly's "Restful web services" book, and > I am > in the middle of designing an upcoming web service. > > For this system, the server is handling two types of transactions > (jobs) > on behalf of multiple clients. For each new job, the client first > performs a POST to the URI that represents the job type. The server > creates a job number and returns the full URI to the client. The two > job > types are virtually identical, except that one is stored permanently > on > the server. > > client => POST /jobtype1 > server => 201 Created + http://jobserver/jobtype1/1234 (where 1234 > represents the job number) Good. > > From here, the client PUTs various attributes of the job to the > server, > (PUT /jobtype1/1234/attribute1, PUT /jobtype1/1234/attribute2), > until it > is ready for the server to process the job. There is a predefined > set of > attributes of which the client is aware, so PUT is acceptable. Good. It is important, though, that the client discovers the property URIs from hypermedia instead of having them or their suffixes hard-coded. In any case, make sure there is no implicit shared knowledge. > > I am then considering whether the client should GET > /jobtype1/1234/result , at which point the server will hold the > request > open What does 'hold open' mean? > and eventually return the result along with 200 OK. Or whether the > client should POST to /jobtype1/results with the job number included > in > the entity body. The server would return 202 Accepted along with the > result URI (http://jobserver/jobtype1/1234/result). The client would > then periodically GET the result URI until successful. 202 and polling seems more appropriate. But the real question is: what is starting the job? The initial POST or the setting of some attribute? > > Any advice on this job result pattern would be greatly appreciated.
> > Second, I am concerned that splitting jobtype1 and jobtype2 into > different 'factory' resources might be unnecessary and cause > additional > repetition during coding, but I'm not sure. Perhaps it would be better > to simply POST to a /job URI and include a POST parameter to indicate > the job type? This should not make too much of a difference regarding the backend code. > The job types are functionally very similar, except that > they are stored differently on the server. However, in the business > domain, it is *very* important that they are considered separate as > there are major implications to the fact that jobs of one type are > allowed to be stored, whereas jobs of the second type must never be > stored beyond the computation and delivery of results. I'd keep an eye on the domains used for the URIs. Maybe it is a good idea to distinguish the services by domain so you can later on partition them to physical machines and use DNS to direct the URIs to the correct machine. HTH, Jan > > Thanks for reading over my questions, I really appreciate it.
I'm very interested in that statement. Is the comment coming from developers using POD or using ReST architectures? We've had many discussions with people around ReST here, and it seems that on the one side, people have been saying they prefer ReST when talking about POD RPCish services. On the other side, the feedback has been that we've not been communicating in a pragmatic enough fashion the differences / advantages of various architectures, including DDDD, EDA etc. What would you reckon is the proportion of people that want to get ReST, as opposed to flat RPC-style POD apis? Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Bill Burke Sent: 07 September 2009 18:43 To: Subbu Allamaraju Cc: Justin Cormack; Rest List Subject: Re: [rest-discuss] Business cases for REST I was thinking more about this after JBoss World last week. I think the message needs to be *REAL* simple. Something like: "SOAP has failed miserably as an interoperable, cross-platform protocol. REST has proven otherwise." I heard this over and over again last week at our conference. From SOAP users and those customers that have started to define RESTful interfaces. Subbu Allamaraju wrote: > > > There are a number of ways to spin the benefits of any architecture > (and we have seen them all), but the key benefit that matters most is > the ubiquity of HTTP. > > Subbu > > On Aug 26, 2009, at 1:55 PM, Justin Cormack wrote: > > > > > I am trying to collect business rather than technical cases for REST/ > > resource oriented rather than "service oriented" architectures. If > > anyone has anything I would be interested. I started writing some > > thoughts at the link below, as a first pass based on some recent > > experiences. 
> > > > > http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/ > <http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/> > > > > thanks > > > > Justin > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I'm looking for pointers to best practices to versioning representations and support for client negotiation of specific versions. To provide different representations of the same resource, the media type alone is sufficient to drive the content type negotiation. For example, I might provide <link type="application/atom+xml"... <link type="application/rss+xml"... as alternate representations of the same resource. The client could then be expected to pick a representation that it knows how to process and GET it. But suppose we have a different version of the *same* media type "myformat" - v1.0 and v2.0? To complicate matters, let's suppose that [due to arrogant, insensitive developers:)] v2.0 is not backwards compatible with v1.0. Assuming that the service is capable of serving representations in both v1.0 and v2.0, the question becomes how might the client negotiate one version over the other for the *same* media type? I've attempted to think through the following: 1) (I can assume XML) XML versioning alone won't do because there's no way to indicate in the link itself that it's one version of the schema over the other. So even if the client retrieved a v2.0 representation and stopped processing it after seeing an unfamiliar namespace (for example), it has no way to subsequently request the older version. 2) My initial response was to simply add the versioning information to the content-type itself (e.g. application/myformat.v20 and application/myformat.v10). This makes negotiation and extensibility clean and elegant, but causes me two concerns: the "explosion-of-media-type" concern and the "nobody-else-seems-to-be-doing-it-that-way"(based on current IANA) concern. 3) The next thing that comes to mind is something like the "level" accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my scenario, would be something like: application/myformat;version=2.0 I suppose. 
I'm aware, btw, of the value of re-using existing media types, this may be an edge case for some but I'd like to address an unfortunate reality inside some fast moving enterprises though. Thoughts/pointers appreciated... --tim
Sebastien Lambla wrote: > I'm very interested in that statement. Is the comment coming from developers > using POD or using ReST architectures? > > We've had many discussions with people around ReST here, and it seems that > on the one side, people have been saying they prefer ReST when talking about > POD RPCish services. On the other side, the feedback has been that we've not > been communicating in a pragmatic enough fashion the differences / > advantages of various architectures, including DDDD, EDA etc. > > What would you reckon is the proportion of people that want to get ReST, as > opposed to flat RPC-style POD apis? > I've never heard of the terms POD, DDDD, or EDA, but I think I understand your question. I still think the vast majority don't know the difference between RPCish stuff and REST. Many think REST is "pretty" URLs. It's what I thought a few years ago when I first started looking at REST and I find it is a common perception. This is why I think the most important step is to get people to conform to the uniform interface. Stress the importance of conforming to it. From my own experience, once you start designing interfaces that conform to the uniform interface it starts making you think more RESTfully. It starts pushing you to make RESTful decisions even if you don't know what REST truly is. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Tim Williams wrote: > 3) The next thing that comes to mind is something like the "level" > accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my > scenario, would be something like: application/myformat;version=2.0 I > suppose. > I like this approach the best. I also prefer: application/myformat+xml;version=2.0 application/myformat+json;version=2.0 In other words, use the "+" suffix whether or not the media type registration allows it (JSON doesn't, I think). Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
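A minimal sketch of how a server might honour such a version parameter during negotiation. The `application/myformat+xml` types and version numbers are the hypothetical ones from this thread, and the matching logic (an Accept value with no version parameter matches any version) is an assumption, not how any particular framework behaves.

```python
def parse_media_type(value):
    """Split 'application/myformat+xml;version=2.0' into
    (type/subtype, {parameter: value})."""
    parts = [p.strip() for p in value.split(";")]
    params = dict(p.split("=", 1) for p in parts[1:])
    return parts[0], params

def best_variant(accept, available):
    """Pick the first available (media_type, version) pair matching
    the Accept value; a missing version parameter matches any version."""
    wanted_type, wanted = parse_media_type(accept)
    for media_type, version in available:
        if media_type != wanted_type:
            continue
        if "version" not in wanted or wanted["version"] == version:
            return media_type, version
    return None  # no acceptable variant; a real server would send 406

print(best_variant("application/myformat+xml;version=2.0",
                   [("application/myformat+xml", "1.0"),
                    ("application/myformat+xml", "2.0")]))
# ('application/myformat+xml', '2.0')
```

Note this treats `version` as significant during dispatch, which (as discussed below) is exactly what some frameworks will not do for you out of the box.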
On Thu, Sep 10, 2009 at 8:28 PM, Bill Burke<bburke@...> wrote: > > > Tim Williams wrote: >> 3) The next thing that comes to mind is something like the "level" >> accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my >> scenario, would be something like: application/myformat;version=2.0 I >> suppose. >> > > I like this approach the best. I also prefer: > > application/myformat+xml;version=2.0 > application/myformat+json;version=2.0 > > In other words, the "+" whether or not the media type allows the + or > not. (json doesn't I think). > > Bill This kind of approach definitely works well if the various representations your app supports are (or potentially could be) versioned independently. In my experience, it's also common for a client to be programmed against a particular version of an entire API specification. In that case, it's convenient to let the client assert this version number assumption in a separate HTTP header (in my case, my servers assume lack of this header means "I want the latest version supported by this server instance"), and leave the media types alone. This means you don't have to go change 23 gazillion instances of your media type strings when you update to a later spec version. Of course, naughty developers who arbitrarily break backwards compatibility can still mess this up, but if you couple this with a "please ignore any new elements that you don't recognize" rule in your API spec, you can cover a pretty large number of use cases where you've added fields in an updated representation, but the representation can still be processed by an older-spec-version client. Craig McClanahan
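Craig's header-based convention could be sketched like this. The header name `X-Api-Version` and the version list are made up for illustration; the only point carried over from his description is that a missing header means "latest version supported by this server instance".

```python
SUPPORTED_VERSIONS = ["1.0", "1.1", "2.0"]  # hypothetical API spec versions, oldest first

def negotiate_version(headers):
    """Pick the API spec version to use for a request.

    Absence of the (hypothetical) X-Api-Version header means
    'latest supported', per the convention described above.
    """
    requested = headers.get("X-Api-Version")
    if requested is None:
        return SUPPORTED_VERSIONS[-1]   # default to the newest version
    if requested in SUPPORTED_VERSIONS:
        return requested
    raise ValueError(f"unsupported API version {requested}")

print(negotiate_version({}))                        # 2.0
print(negotiate_version({"X-Api-Version": "1.1"}))  # 1.1
```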
On Thu, Sep 10, 2009 at 9:28 PM, Bill Burke <bburke@...> wrote: > Tim Williams wrote: > > > 3) The next thing that comes to mind is something like the "level" > > accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my > > scenario, would be something like: application/myformat;version=2.0 I > > suppose. > > > > I like this approach the best. I also prefer: > > application/myformat+xml;version=2.0 > application/myformat+json;version=2.0 > > In other words, the "+" whether or not the media type allows the + or > not. (json doesn't I think). > I, on the other hand, think the best approach is to put the major version directly in the subtype (eg `application/myformat.v2+xml`). One practical issue with putting the version in a parameter is that many application servers will be unable to facilitate content negotiation. For instance, Rails believes, quite reasonably, that `application/myformat+xml;version=2.0` and `application/myformat+xml;version=3.0` are the same mime type. Therefore, if you take this approach, you would not be able to leverage any of its very nice content negotiation support. I am not very familiar with other web app frameworks but it would bear investigating the one you plan on using. On a more theoretical level there is the fact that parameters for a mime type are syntactically optional. This means that as a server you have to decide what to do when you get a request with an accept of `application/myformat+xml`. There are three options, none of which is very good. You could return a 406 with the list of types with valid versions. However, this might be somewhat disconcerting given that to date almost all common MIME type parameters are both syntactically and semantically optional (eg `charset`) You could return the preferred (read: highest) version. However, this does not work because it will cause clients to break every time an additional version is created.
If, as a client developer exploring the api, I get a usable response using `application/myformat+xml` I am quite likely to just use that in my code. However, my parsing of the representations will be based on the schema of the preferred version at that time. You could return the most compatible (read: lowest) version. But that means that you are encouraging users to use the least preferred version of the api. If, as an exploring developer, I happen to leave off the version, what I see is the initial, flawed attempts at the api. Embedding the major version directly in the subtype (eg `application/myformat.v2+xml`) makes it absolutely clear that the major version is required. A parameter is a good place to put a minor version, though, if you need such a thing. If there are multiple server implementations for example, some clients might need a way of saying "I need version 2.1" whereas others might be fine with any v2 implementation. In this situation I think `application/myformat.v2+xml;level=42` works rather well. The lack of a level parameter in the accept header field implies that any level will do, whereas specifying a level means that the specified level, or greater, is required. BTW, I have written a series of posts (<http://barelyenough.org/blog/tag/rest-versioning/>) on this subject which you might find interesting. -- Peter Williams http://barelyenough.org
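Peter's scheme (major version required in the subtype, optional `level` parameter for the minor version) might be parsed like this. The regex and the error behaviour for a missing major version are a sketch of his description, not his actual code, and the media type names are the hypothetical ones from the thread.

```python
import re

# Hypothetical media type scheme: major version in the subtype,
# optional minor version in a 'level' parameter, as suggested above.
MEDIA_TYPE = re.compile(
    r"application/(?P<name>\w+)\.v(?P<major>\d+)\+xml"
    r"(?:\s*;\s*level=(?P<level>\d+))?")

def parse(value):
    m = MEDIA_TYPE.fullmatch(value.strip())
    if m is None:
        # Rejecting e.g. 'application/myformat+xml' makes it clear
        # that the major version is required, per the argument above.
        raise ValueError(f"major version is required: {value!r}")
    # No level parameter means any minor version will do.
    level = int(m.group("level")) if m.group("level") else None
    return m.group("name"), int(m.group("major")), level

print(parse("application/myformat.v2+xml"))           # ('myformat', 2, None)
print(parse("application/myformat.v2+xml;level=42"))  # ('myformat', 2, 42)
```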
Tim, On Sep 11, 2009, at 5:04 AM, Tim Williams wrote: > I'm looking for pointers to best practices to versioning > representations and support for client negotiation of specific > versions. To provide different representations of the same resource, > the media type alone is sufficient to drive the content type > negotiation. For example, I might provide > > <link type="application/atom+xml"... > <link type="application/rss+xml"... > > as alternate representations of the same resource. The client could > then be expected to pick a representation that it knows how to process > and GET it. > > But suppose we have a different version of the *same* media type > "myformat" - v1.0 and v2.0? To complicate matters, let's suppose that > [due to arrogant, insensitive developers:)] v2.0 is not backwards > compatible with v1.0. Assuming that the service is capable of serving > representations in both v1.0 and v2.0, the question becomes how might > the client negotiate one version over the other for the *same* media > type? > > I've attempted to think through the following: > > 1) (I can assume XML) XML versioning alone won't do because there's no > way to indicate in the link itself that it's one version of the schema > over the other. So even if the client retrieved a v2.0 representation > and stopped processing it after seeing an unfamiliar namespace (for > example), it has no way to subsequently request the older version. > > 2) My initial response was to simply add the versioning information to > the content-type itself (e.g. application/myformat.v20 and > application/myformat.v10). This makes negotiation and extensibility > clean and elegant, but causes me two concerns: the > "explosion-of-media-type" concern and the > "nobody-else-seems-to-be-doing-it-that-way"(based on current IANA) > concern. This is my preferred solution - with two caveats: 1. Only put the major version number (indicating forward incompatible change) in the media type name. 
IOW, do not change the media type name unless the change breaks older clients. 2. Make forward incompatible changes a very rare thing (you should anyhow) to avoid media type explosion > > 3) The next thing that comes to mind is something like the "level" > accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my > scenario, would be something like: application/myformat;version=2.0 I > suppose. I use this for indicating the minor version number so clients can pick a certain minor version if they wish. > > I'm aware, btw, of the value of re-using existing media types, this > may be an edge case I think that it is an edge case on the open Web, but not inside an enterprise where evolution is faster (e.g. due to new product requirements). > for some but I'd like to address an unfortunate > reality inside some fast moving enterprises though. Oh - yes, that's what I mean :-) Jan > > Thoughts/pointers appreciated... > --tim
My custom solution is and has always been to provide for backward and forward compat by using extensible serialization, not rely on a version to consider a document valid, and decide on the server side what resulting information is enough to process the request or not. I wonder, why is it that it seems so out of fashion for people to support extensible formats that have been well crafted for this, and why versioning creeps back in every couple of hundred messages. I'll attempt what I think may be the root of the problem: object serialization in xml. As long as we design architectures to accommodate restrictive tools, we're going to have a rough time. Oh, and type/subtype;param=value is *not* the same as type/subtype, under any circumstance. If rails does things differently, then rails can't process mediatypes by the spec, which is a bug you should go and file with them. Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jan Algermissen Sent: 11 September 2009 06:47 To: Tim Williams Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Media Type Version Negotiation Tim, On Sep 11, 2009, at 5:04 AM, Tim Williams wrote: > I'm looking for pointers to best practices to versioning > representations and support for client negotiation of specific > versions. To provide different representations of the same resource, > the media type alone is sufficient to drive the content type > negotiation. For example, I might provide > > <link type="application/atom+xml"... > <link type="application/rss+xml"... > > as alternate representations of the same resource. The client could > then be expected to pick a representation that it knows how to process > and GET it. > > But suppose we have a different version of the *same* media type > "myformat" - v1.0 and v2.0? To complicate matters, let's suppose that > [due to arrogant, insensitive developers:)] v2.0 is not backwards > compatible with v1.0.
Assuming that the service is capable of serving > representations in both v1.0 and v2.0, the question becomes how might > the client negotiate one version over the other for the *same* media > type? > > I've attempted to think through the following: > > 1) (I can assume XML) XML versioning alone won't do because there's no > way to indicate in the link itself that it's one version of the schema > over the other. So even if the client retrieved a v2.0 representation > and stopped processing it after seeing an unfamiliar namespace (for > example), it has no way to subsequently request the older version. > > 2) My initial response was to simply add the versioning information to > the content-type itself (e.g. application/myformat.v20 and > application/myformat.v10). This makes negotiation and extensibility > clean and elegant, but causes me two concerns: the > "explosion-of-media-type" concern and the > "nobody-else-seems-to-be-doing-it-that-way"(based on current IANA) > concern. This is my preferred solution - with two caveats: 1. Only put the major version number (indicating forward incompatible change) in the media type name. IOW, do not change the media type name unless the change breaks older clients. 2. Make forward incompatible changes a very rare thing (you should anyhow) to avoid media type explosion > > 3) The next thing that comes to mind is something like the "level" > accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my > scenario, would be something like: application/myformat;version=2.0 I > suppose. I use this for indicating the minor version number so clients can pick a certain minor version if they wish. > > I'm aware, btw, of the value of re-using existing media types, this > may be an edge case I think that it is an edge case on the open Web, but not inside an enterprise where evolution is faster (e.g. due to new product requirements). 
> for some but I'd like to address an unfortunate > reality inside some fast moving enterprises though. Oh - yes, that's what I mean :-) Jan > > Thoughts/pointers appreciated... > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > ------------------------------------ Yahoo! Groups Links
On Sep 11, 2009, at 11:51 AM, Sebastien Lambla wrote:
> My custom solution is and has always been to provide for backward and
> forward compat by using extensible serialization, not rely on a version
> to consider a document valid, and decide on the server side what
> resulting information is enough to process the request or not.
>
> I wonder, why is it that it seems so out of fashion for people to
> support extensible formats that have been well crafted for this, and
> why versioning creeps back in every couple of hundred messages.
Hmm.... but extensible formats do not save you from forward-incompatible
changes. Yes, they help, but if you by all means need to create something
forward incompatible, you need to change the media type so that documents
in the new format are not dispatched to a processor that cannot handle them.
>
> I'll attempt what I think may be the root of the problem: object
> serialization in xml.
>
> As long as we design architectures to accommodate restrictive tools,
> we're going to have a rough time.
>
> Oh, and type/subtype;param=value is *not* the same as type/subtype,
> under any circumstance.
What do you mean?
If type/subtype is being dispatched to some processor A, then
type/subtype;foo=bar should be as well, or the dispatcher does a bad job.

There isn't much value in parameters as part of a Content-Type header
anyhow, because you just cannot predict whether there is an intermediary
that strips the params off. I view them mostly as valuable in content type
hints in hyperlink elements such as Atom's <link> element:

<link rel="alternate" href="..." type="application/vnd.foo.report; version=1.5"/>
<link rel="alternate" href="..." type="application/vnd.foo.report; version=1.7"/>

The above makes good sense, but

Content-Type: application/vnd.foo.report; version=1.5

does not, because only application/vnd.foo.report is important for the
dispatcher.
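As a rough illustration of that dispatching rule (not from the thread; the handler table and return values are hypothetical), a dispatcher keys its handler lookup on the bare type/subtype and passes any parameters through:

```python
def parse_media_type(value):
    """Split a media type such as 'application/vnd.foo.report; version=1.5'
    into ('application/vnd.foo.report', {'version': '1.5'})."""
    parts = [p.strip() for p in value.split(";")]
    params = {}
    for p in parts[1:]:
        if "=" in p:
            key, _, val = p.partition("=")
            params[key.strip().lower()] = val.strip().strip('"')
    return parts[0].lower(), params

# Hypothetical handler table: dispatch keys never contain parameters.
handlers = {
    "application/vnd.foo.report": lambda params: ("report", params),
}

def dispatch(content_type):
    """Dispatch on type/subtype only; parameters ride along as hints."""
    media_type, params = parse_media_type(content_type)
    return handlers[media_type](params)
```

With this shape, `application/vnd.foo.report` and `application/vnd.foo.report; version=1.5` reach the same processor, which is the behavior Jan describes.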
Jan
> If rails does things differently, then rails can't process
> mediatypes by the spec, which is a bug you should go and file with
> them.
>
> Seb
>
> -----Original Message-----
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com
> ] On
> Behalf Of Jan Algermissen
> Sent: 11 September 2009 06:47
> To: Tim Williams
> Cc: rest-discuss@yahoogroups.com
> Subject: Re: [rest-discuss] Media Type Version Negotiation
>
> Tim,
>
> On Sep 11, 2009, at 5:04 AM, Tim Williams wrote:
>
>> I'm looking for pointers to best practices to versioning
>> representations and support for client negotiation of specific
>> versions. To provide different representations of the same resource,
>> the media type alone is sufficient to drive the content type
>> negotiation. For example, I might provide
>>
>> <link type="application/atom+xml"...
>> <link type="application/rss+xml"...
>>
>> as alternate representations of the same resource. The client could
>> then be expected to pick a representation that it knows how to
>> process
>> and GET it.
>>
>> But suppose we have a different version of the *same* media type
>> "myformat" - v1.0 and v2.0? To complicate matters, let's suppose
>> that
>> [due to arrogant, insensitive developers:)] v2.0 is not backwards
>> compatible with v1.0. Assuming that the service is capable of
>> serving
>> representations in both v1.0 and v2.0, the question becomes how might
>> the client negotiate one version over the other for the *same* media
>> type?
>>
>> I've attempted to think through the following:
>>
>> 1) (I can assume XML) XML versioning alone won't do because there's
>> no
>> way to indicate in the link itself that it's one version of the
>> schema
>> over the other. So even if the client retrieved a v2.0
>> representation
>> and stopped processing it after seeing an unfamiliar namespace (for
>> example), it has no way to subsequently request the older version.
>>
>> 2) My initial response was to simply add the versioning information
>> to
>> the content-type itself (e.g. application/myformat.v20 and
>> application/myformat.v10). This makes negotiation and extensibility
>> clean and elegant, but causes me two concerns: the
>> "explosion-of-media-type" concern and the
>> "nobody-else-seems-to-be-doing-it-that-way"(based on current IANA)
>> concern.
>
> This is my preferred solution - with two caveats:
>
> 1. Only put the major version number (indicating forward incompatible
> change) in the media type name. IOW, do not change the media type
> name unless the change breaks older clients.
>
> 2. Make forward incompatible changes a very rare thing (you should
> anyhow)
> to avoid media type explosion
>
>>
>> 3) The next thing that comes to mind is something like the "level"
>> accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my
>> scenario, would be something like: application/myformat;version=2.0 I
>> suppose.
>
> I use this for indicating the minor version number so clients can pick
> a certain minor version if they wish.
>
>>
>> I'm aware, btw, of the value of re-using existing media types, this
>> may be an edge case
>
> I think that it is an edge case on the open Web, but not inside an
> enterprise
> where evolution is faster (e.g. due to new product requirements).
>
>
>> for some but I'd like to address an unfortunate
>> reality inside some fast moving enterprises though.
>
> Oh - yes, that's what I mean :-)
>
> Jan
>
>>
>> Thoughts/pointers appreciated...
>> --tim
Jan Algermissen wrote:
> There isn't much value in parameters as part of a Content-Type header
> anyhow, because
> you just cannot predict if there is an intermediary that strips the
> params off.

Is this theory or practice? Considering that charset is an important
parameter for many media types, this would be a huge bug in a proxy cache.

Bill

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Fri, Sep 11, 2009 at 1:55 PM, Bill Burke <bburke@...> wrote:
> Jan Algermissen wrote:
> > There isn't much value in parameters as part of a Content-Type header
> > anyhow, because you just cannot predict if there is an intermediary
> > that strips the params off.
>
> Is this theory or practice? Considering that charset is an important
> parameter for many media types, this would be a huge bug in a proxy cache.

Another thing to consider is that a lot of applications (certainly the ones
I'm thinking about) are going to use SSL/TLS, in which case they are immune
to tampering.

Sam (who finds Yahoo!'s HTML table antics on this list intensely
aggravating).
On Fri, Sep 11, 2009 at 12:59 AM, Craig McClanahan <craigmcc@...> wrote:
> On Thu, Sep 10, 2009 at 8:28 PM, Bill Burke <bburke@...> wrote:
>>
>>
>> Tim Williams wrote:
>>> 3) The next thing that comes to mind is something like the "level"
>>> accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my
>>> scenario, would be something like: application/myformat;version=2.0 I
>>> suppose.
>>>
>>
>> I like this approach the best. I also prefer:
>>
>> application/myformat+xml;version=2.0
>> application/myformat+json;version=2.0
>>
>> In other words, use the "+" whether or not the media type registration
>> allows it. (JSON doesn't, I think.)
>>
>> Bill
>
> This kind of approach definitely works well if the various
> representations your app supports are (or potentially could be)
> versioned independently. In my experience, it's also common for a
> client to be programmed against a particular version of an entire API
> specification. In that case, it's convenient to let the client assert
> this version number assumption in a separate HTTP header (in my case,
> my servers assume lack of this header means "I want the latest version
> supported by this server instance"), and leave the media types alone.
> This means you don't have to go change 23 gazillion instances of your
> media type strings when you update to a later spec version.
Thanks Craig,
This is another good option. For some reason, custom HTTP headers
seem to have a "hacky" connotation - not to me, but anyway.
4) Custom HTTP header indicating the overall version of an API.
Representation(media type) versions would be implied by the overall
API version.
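Client-side, option 4 might look like the following sketch (the header name `X-API-Version` is invented for illustration; per Craig's convention, omitting it means "latest version this server supports"):

```python
import urllib.request

def build_request(url, api_version=None):
    """Build a request that optionally pins the overall API version
    via a custom header (name is hypothetical). Leaving api_version
    as None means 'give me the latest version the server supports'."""
    req = urllib.request.Request(url)
    if api_version is not None:
        req.add_header("X-API-Version", api_version)
    return req
```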
This and another response make me think of adding another option too...
5) Just rely on URI versioning and assume that people follow the
HATEOAS constraint. I have the benefit of a standard format for a
bookmark/entry resource for all services so I could define a "version"
attribute on the initial states or wrap all initial states in a
version element. Something like:
<service>
  <resources version="1.0">
    <link rel="search" href="/v1/search"/>
  </resources>
  <resources version="2.0">
    <link rel="search" href="/v2/search"/>
  </resources>
</service>
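A client of such a service document might select the resource set for the version it was programmed against along these lines (the document shape is the example above; the client logic is purely illustrative):

```python
import xml.etree.ElementTree as ET

SERVICE_DOC = """
<service>
  <resources version="1.0">
    <link rel="search" href="/v1/search"/>
  </resources>
  <resources version="2.0">
    <link rel="search" href="/v2/search"/>
  </resources>
</service>
"""

def find_link(doc, version, rel):
    """Return the href for a given link relation within the resource
    set matching the API version the client understands."""
    root = ET.fromstring(doc)
    for resources in root.findall("resources"):
        if resources.get("version") == version:
            link = resources.find("link[@rel='%s']" % rel)
            if link is not None:
                return link.get("href")
    return None  # version (or rel) not offered by this server
```

This keeps the version out of client-constructed URIs: the client asks for its version once, at the entry point, and follows links from there.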
Thanks for helping think this through...
--tim
On Fri, Sep 11, 2009 at 5:51 AM, Sebastien Lambla <seb@...> wrote:
> My custom solution is and has always been to provide for backward and
> forward compat by using extensible serialization, not rely on a version
> to consider a document valid, and decide on the server side what
> resulting information is enough to process the request or not.

Sorry Seb, can you clarify what you mean by 'extensible serialization' and
how it solves the problem? Is this idea explained somewhere in some detail?
I think of serialization as the "conversion of data to bits for
transmission" and I'm unfortunately not making the connection.

> I wonder, why is it that it seems so out of fashion for people to
> support extensible formats that have been well crafted for this,

I'm using XML, an extensible format, but changes still require a new
schema - a new version. I'm struggling to see how 'extensibility' obviates
my essential problem, which is: the client is programmed based on knowledge
of a particular representation (schema); if the schema changes in an
incompatible way, how does the client negotiate for the representation it
understands?

> and why versioning
> creeps back in every couple of hundred messages.

Well, this specifically, and media types in general, come up frequently for
a few reasons: 1) it's the least addressed aspect of REST in the
dissertation (apparently because Roy ran out of time); 2) once one groks the
basics of REST, it's apparent that media type design/negotiation is
critically important and important to get right; and 3) existing
explanations on the Internet take a 'use-existing-media-type' approach that
is naive and somewhat unhelpful in the complexities that arise when
implementing an architecture in a large enterprise.

> I'll attempt what I think may be the root of the problem: object
> serialization in xml.
>
> As long as we design architectures to accommodate restrictive tools,
> we're going to have a rough time.

Can you point me to some resource that describes what you are referring to
in more concrete detail? I find statements like this to be too nebulous to
truly appreciate.

Thanks,
--tim
On Fri, Sep 11, 2009 at 1:48 AM, Peter Williams <pezra@...> wrote:
> On Thu, Sep 10, 2009 at 9:28 PM, Bill Burke <bburke@...> wrote:
>> Tim Williams wrote:
>>> 3) The next thing that comes to mind is something like the "level"
>>> accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my
>>> scenario, would be something like: application/myformat;version=2.0 I
>>> suppose.
>>
>> I like this approach the best. I also prefer:
>>
>> application/myformat+xml;version=2.0
>> application/myformat+json;version=2.0
>>
>> In other words, use the "+" whether or not the media type allows it.
>> (JSON doesn't, I think.)
>
> I, on the other hand, think the best approach is to put the major
> version directly in the subtype (eg `application/myformat.v2+xml`).
>
> One practical issue with putting the version in a parameter is that
> many application servers will be unable to facilitate content
> negotiation. For instance, Rails believes, quite reasonably, that
> `application/myformat+xml;version=2.0` and
> `application/myformat+xml;version=3.0` are the same mime type.
> Therefore, if you take this approach, you would not be able to
> leverage any of its very nice content negotiation support. I am not
> very familiar with other web app frameworks, but it would bear
> investigating the one you plan on using.
>
> On a more theoretical level there is the fact that parameters for a
> mime type are syntactically optional. This means that as a server you
> have to decide what to do when you get a request with an accept of
> `application/myformat+xml`. There are three options, none of which is
> very good.
>
> You could return a 406 with the list of types with valid versions.
> However, this might be somewhat disconcerting given that to date
> almost all common MIME type parameters are both syntactically and
> semantically optional (eg `charset`).
>
> You could return the preferred (read: highest) version. However, this
> does not work because it will cause clients to break every time an
> additional version is created. If, as a client developer exploring the
> api, I get a usable response using `application/myformat+xml`, I am
> quite likely to just use that in my code. However, my parsing of the
> representations will be based on the schema of the preferred version
> at that time.
>
> You could return the most compatible (read: lowest) version. But that
> means that you are encouraging users to use the least preferred
> version of the api. If, as an exploring developer, I happen to leave
> off the version, what I see is the initial flawed attempts at the api.
>
> Embedding the major version directly in the subtype
> (eg `application/myformat.v2+xml`) makes it absolutely clear that
> the major version is required. A parameter is a good place to put a
> minor version, though, if you need such a thing. If there are
> multiple server implementations, for example, some clients might need
> a way of saying "I need version 2.1" whereas others might be fine with
> any v2 implementation. In this situation I think
> `application/myformat.v2+xml;level=42` works rather well. The lack of
> a level parameter in the accept header field implies that any level
> will do, whereas specifying a level means that the specified level,
> or greater, is required.
>
> BTW, I have written a series of posts
> (<http://barelyenough.org/blog/tag/rest-versioning/>) on this subject
> which you might find interesting.

Thanks Peter, I wish I'd crafted the right google keywords to find this
before I posed the question - well done:)

--tim
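Peter's major-version-in-subtype scheme, quoted above, could be negotiated server-side along these lines (an illustrative sketch, not from the thread; real Accept handling would also weigh q-values):

```python
# Server's supported representations, in preference order (made-up types).
SUPPORTED = ["application/myformat.v2+xml", "application/myformat.v1+xml"]

def negotiate(accept_header):
    """Pick the first supported versioned type the client accepts.
    Returns None (i.e. respond 406) when nothing matches. Note that a
    bare, unversioned 'application/myformat+xml' deliberately matches
    nothing, which makes the major version effectively required."""
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    for media_type in SUPPORTED:
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None
```

The design point this illustrates is Peter's: by refusing to serve an unversioned request, the server never silently switches a client to an incompatible schema.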
On Fri, Sep 11, 2009 at 3:51 AM, Sebastien Lambla <seb@...> wrote:
> My custom solution is and has always been to provide for backward and
> forward compat by using extensible serialization, not rely on a version
> to consider a document valid, and decide on the server side what
> resulting information is enough to process the request or not.

I think this is a nice ideal and should be the goal when designing APIs and
formats. However, it is unrealistic to assume there will never be a need
for incompatible changes. If you always maintain backwards compatibility as
the format/api develops, the representations will become, over time, so
full of obsolete, deprecated and obsolescent sections that the cost of
switching to a new clean format w/o all the cruft will be worth it for ease
of understanding alone.

> Oh, and type/subtype;param=value is *not* the same as type/subtype,
> under any circumstance. If rails does things differently, then rails
> can't process mediatypes by the spec, which is a bug you should go and
> file with them.

I think that whether `type/subtype;param=value` is the same as
`type/subtype` depends a great deal on the application, the mime type and
the parameter. For applications written in modern application frameworks
they are actually the same for all common media types. Regardless,
dispatching on the type and subtype alone is the standard practice right
now. What's more, it works rather well in the real world. By far the most
common parameter in the wild today is `charset`, which is something the
framework can handle transparently w/o bothering my application about it
at all.

--
Peter Williams
http://barelyenough.org
On Fri, Sep 11, 2009 at 8:21 AM, mike amundsen <mamund@...> wrote:
> If your server needs to support common browsers, media-type versioning
> is a non-starter. The best you can achieve is to place the version in
> an optional parameter and interpret a missing parameter as equal to
> the most recent version.
>
> The most universal way to include versioning information is in the
> URI. Both custom clients and common browsers will handle these w/o
> problems.

I disagree. Versions in the URI are a real pain for some custom clients.
Consider a custom client that builds a large set of data in which each item
references resources in your API.

For example, I once worked on an app that provided a way to configure a
monitoring tool. These configurations were rather complex and there was one
per piece of equipment. The equipment being monitored was identified by a
separate application (there was more than one way to configure monitoring).
So for each piece of equipment that was monitored there could be a
configuration. These configurations referenced the equipment by its URI in
the inventory system.

At one point a new version of the inventory API became available. We
updated the configuration tool to support the improved API. Had the
inventory API version been embedded in the equipment URIs, we would have
been in a tough spot. We would have needed code in the configuration system
to handle both versions of the inventory API, which would have
significantly increased the complexity and maintenance of the system. Or we
would have had to bulk rewrite all the stored equipment URIs to include the
new version, which was only an option because we happened to be in control
of both sides. (And it is way too anti-HATEOAS for my taste, anyway.)

However, versioning in the media type meant that we were able to just
remove support for the previous inventory API version, add support for the
new one to the configuration management tool, and then deploy. No
client-side URI construction or increased complexity required.

> Server can return 410 for obsolete URI (versions) and clients can
> adjust accordingly.

This assumes that the previous versions are unsupported. In my experience
it is much better to continue to support earlier versions of an API for
quite some time. This gives people time to transition clients to the new
API in a manner that fits with their priorities and resource availability.
In the case of a truly collaborative distributed system, that means you
should plan on supporting most versions indefinitely, because some clients
will just never be updated.

--
Peter Williams
http://barelyenough.org
I like the version param because it allows the default
"application/myformat" to get the latest version. If you do
"application/myformat.v2" then you don't have this.

Peter Williams wrote:
> On Fri, Sep 11, 2009 at 3:51 AM, Sebastien Lambla <seb@...> wrote:
> > My custom solution is and has always been to provide for backward and
> > forward compat by using extensible serialization, not rely on a
> > version to consider a document valid, and decide on the server side
> > what resulting information is enough to process the request or not.
>
> I think this is a nice ideal and should be the goal when designing
> APIs and formats. However, it is unrealistic to assume there will
> never be a need for incompatible changes. If you always maintain
> backwards compatibility as the format/api develops, the representations
> will become, over time, so full of obsolete, deprecated and
> obsolescent sections that the cost of switching to a new clean format
> w/o all the cruft will be worth it for ease of understanding alone.
>
> > Oh, and type/subtype;param=value is *not* the same as type/subtype,
> > under any circumstance. If rails does things differently, then rails
> > can't process mediatypes by the spec, which is a bug you should go
> > and file with them.
>
> I think that whether `type/subtype;param=value` is the same as
> `type/subtype` depends a great deal on the application, the mime type
> and the parameter. For applications written in modern application
> frameworks they are actually the same for all common media types.
> Regardless, dispatching on the type and subtype alone is the standard
> practice right now. What's more, it works rather well in the real
> world. By far the most common parameter in the wild today is `charset`,
> which is something the framework can handle transparently w/o
> bothering my application about it at all.
>
> --
> Peter Williams
> http://barelyenough.org

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Fri, Sep 11, 2009 at 10:16 AM, Bill Burke <bburke@...> wrote:
> I like the version param because it allows the default
> "application/myformat" to get the latest version. if you do
> "application/myformat.v2" then you don't have this.

That is a bug, not a feature. One hard and fast rule of good version
management is that any request that works today will continue to work in a
compatible way, or fail with an explicit error. You never want new versions
of a system to break existing clients (unless you are explicitly EOLing the
API version), even if they are imperfectly implemented. Having an identical
request quietly work today in a way that is incompatible with how it worked
yesterday is not acceptable to me.

--
Peter Williams
http://barelyenough.org
> Hmm.... but extensible formats do not save you from forward incompatible
> changes. Yes they help, but if you by all means need to create something
> forward incompatible, you need to change the media type to not
> screw up the processor by dispatching to the new one.

If you break your capacity to alter the format on reception / sending, then
you've already failed and need to start fresh with a new format altogether.
Slapping a version number on it won't help.

> If type/subtype is being dispatched to some processor A then
> type/subtype;foo=bar should as well be or the dispatcher does a bad job.

No, it should ignore the parameters it doesn't understand, but not the ones
required by a media type. See the Atom entry media type as an example of
what your suggestion entails: your processor for items couldn't be
dispatched to individually.

Seb
> I think this is a nice ideal and should be the goal when designing
> APIs and formats. However, it is unrealistic to assume there will
> never be a need for incompatible changes. If you always maintain
> backwards compatibility as the format/api develops, the representations
> will become, over time, so full of obsolete, deprecated and
> obsolescent sections that the cost of switching to a new clean format
> w/o all the cruft will be worth it for ease of understanding alone.

This is not correct. If you decide to unilaterally phase out your media
type by introducing a new version for the new functionality, you do not
need the old version anymore. All the same in an extensible format: my
<ns1:CustomerState /> won't be needed if I phase it out and my new clients
require <cs2:CustomerState />.

Point is, when you phase in new data or phase out old data, the structure
(aka the sacrosanct XSD) is irrelevant to the ability of the handler to
execute the process.

Seb
> Sorry Seb, can you clarify what you mean by 'extensible serialization'
> and how it solves the problem? Is this idea explained somewhere in
> some detail? I think of serialization as the "conversion of data to
> bits for transmission" and I'm unfortunately not making the connection.

Object -> XML is what I meant by serialization.

> I'm using XML, an extensible format, but changes still require a new
> schema - a new version - I'm struggling to see how 'extensibility'
> obviates my essential problem which is, 'client is programmed based on
> knowledge of a particular representation(schema), if the schema
> changes in an incompatible way, how does the client negotiate for the
> representation it understands'?

xsd:any + xsd:anyAttribute + xmlns. The attachment to a strict schema is
exactly what introduces the need to version. Ad-hoc independent additions
to container formats targeted to your needs mean supporting multiple
versions is easy.

> and 3) existing explanations on the Internet take a
> 'use-existing-media-type' approach that is naive and somewhat
> unhelpful in the complexities that arise when implementing an
> architecture in a large enterprise.
>
> Can you point me to some resource that describes what you are
> referring to in more concrete detail? I find statements like this to
> be too nebulous to truly appreciate.

Object -> XML together with a strict XSD is the root of most of the issues
people seem to have with media type design. This is my core point. The
message format itself, aka your XSD, is to me completely irrelevant. What
is relevant is whether the media type processor, through whatever means it
has to find data in a format, can gather enough information to process a
request. If it can, the process will go through; if not, it won't. And when
you get to that point, chances are fairly high that XSD is in fact more
trouble than it's worth.

Seb
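Seb's "gather what you need, ignore the rest" style of processing might look like the following sketch (element names and documents are invented for illustration; the point is that the same handler accepts both an old and an extended revision of the format):

```python
import xml.etree.ElementTree as ET

# Two revisions of a hypothetical format: v2 adds an element v1 lacked.
V1_DOC = "<customer><name>Ann</name></customer>"
V2_DOC = "<customer><name>Ann</name><loyaltyTier>gold</loyaltyTier></customer>"

REQUIRED = ["name"]  # the only data this particular handler needs

def process(doc):
    """Collect the fields this handler requires and ignore everything
    else. The document is 'valid enough' when the required data is
    present, regardless of which revision of the format produced it."""
    root = ET.fromstring(doc)
    data = {child.tag: child.text for child in root if child.tag in REQUIRED}
    if not all(field in data for field in REQUIRED):
        raise ValueError("not enough information to process the request")
    return data
```

No schema version is consulted anywhere: the decision to accept or reject is made from the data actually found, which is the behavior Seb is advocating.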
My loose analysis of the issue is that most enterprise developers are
dealing with new types that flow directly from their code. Tools bang out a
schema that looks like structured data. The class/record fidelity is very
close. Transforming from angle brackets to code, using a framework, is
comfortable for a rank and file enterprise developer. What happens after
this is where trouble begins. A small change to the class results in a
mismatch and the need to rev the version of the XML Schema to accommodate
the change, hence the question on this mailing list.

Someone made the assumption (correctly, IMO) that media types will not rev
all that often in the wild because it limits adoption. image/jpeg,
text/html, application/atompub+svc are all nice, but life is different
inside the enterprise. Often there are efforts to canonicalise the "core"
business objects and this helps, but as was pointed out earlier, these rev
as business definitions change... and they're typically built on XML
Schema, so they're just bigger types. The vicious reality of versioning is
typically delayed, but not for long enough.

Extensions in XML Schema provide some relief, but these black holes in a
schema definition force the developer into the unnatural position of having
to query a document for a value instead of using offsets (either numeric or
via "getters"). If you throw XML Schema out, it forces engineers to query
the document with the assumption that you're only interested in what you
know, which I believe is the beginning of HATEOAS. The catch is that by not
using XML Schema you'll get laughed out of a design meeting in any
enterprise.

So where is the middle ground here?

One idea I've been kicking around is to look at XHTML and how its XML
Schema works for creating a forward-extensible media format. I can't point
to successful adoption in the enterprise from my experience, but it works
(granted, not perfectly) for the web. It passes the sniff test from an
enterprise point of view because the safety of XML Schema is there, but the
structure of the resulting document is very loose, forcing the developer to
query the document. Querying can be done via XQuery or DOM navigation; it's
still XML, so not completely foreign.

In thinking about this, application/xhtml+xml becomes the media type, but
it begs the question of how to find the "entity" you're searching for in
this bag of angle brackets. I see the question gets pushed to while looking
for rel tags. HTML5 has a discussion of proposed rel types and it seems
open, so enterprises can go nuts and add their own (versioned) types. I
don't know if this is a better place to allow type proliferation, but it
keeps the discussion away from the Content-Type HTTP header.

-Noah

--- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> wrote:
>
> I'm looking for pointers to best practices to versioning
> representations and support for client negotiation of specific
> versions. To provide different representations of the same resource,
> the media type alone is sufficient to drive the content type
> negotiation. For example, I might provide
>
> <link type="application/atom+xml"...
> <link type="application/rss+xml"...
>
> as alternate representations of the same resource. The client could
> then be expected to pick a representation that it knows how to process
> and GET it.
>
> But suppose we have a different version of the *same* media type
> "myformat" - v1.0 and v2.0? To complicate matters, let's suppose that
> [due to arrogant, insensitive developers:)] v2.0 is not backwards
> compatible with v1.0. Assuming that the service is capable of serving
> representations in both v1.0 and v2.0, the question becomes how might
> the client negotiate one version over the other for the *same* media
> type?
>
> I've attempted to think through the following:
>
> 1) (I can assume XML) XML versioning alone won't do because there's no
> way to indicate in the link itself that it's one version of the schema
> over the other. So even if the client retrieved a v2.0 representation
> and stopped processing it after seeing an unfamiliar namespace (for
> example), it has no way to subsequently request the older version.
>
> 2) My initial response was to simply add the versioning information to
> the content-type itself (e.g. application/myformat.v20 and
> application/myformat.v10). This makes negotiation and extensibility
> clean and elegant, but causes me two concerns: the
> "explosion-of-media-type" concern and the
> "nobody-else-seems-to-be-doing-it-that-way" (based on current IANA)
> concern.
>
> 3) The next thing that comes to mind is something like the "level"
> accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my
> scenario, would be something like: application/myformat;version=2.0 I
> suppose.
>
> I'm aware, btw, of the value of re-using existing media types, this
> may be an edge case for some but I'd like to address an unfortunate
> reality inside some fast moving enterprises though.
>
> Thoughts/pointers appreciated...
> --tim
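Noah's idea of querying an XHTML representation by link relations rather than by position might look like this sketch (the rel value and markup are invented; only the "scan for rels you know, ignore the rest" pattern is the point):

```python
import xml.etree.ElementTree as ET

# A hypothetical XHTML representation of business data.
XHTML = """
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <div>
      <a rel="vnd.example.customer.v1" href="/customers/42">Ann</a>
      <span class="balance">17.50</span>
    </div>
  </body>
</html>
"""

def find_by_rel(doc, rel):
    """Walk the whole document and collect hrefs whose rel value the
    client recognizes, ignoring all other structure. The client only
    depends on rel vocabulary, not on the document's shape."""
    root = ET.fromstring(doc)
    return [el.get("href") for el in root.iter() if el.get("rel") == rel]
```

Versioning the rel value (as Noah suggests) then moves version proliferation out of the Content-Type header and into link relations.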
[oops, sorry, missed this while i was on vacation - Mark] --- In rest-discuss@yahoogroups.com, Justin Cormack <justin@...> wrote: > > > I am trying to collect business rather than technical cases for REST/ > resource oriented rather than "service oriented" architectures. If > anyone has anything I would be interested. I started writing some > thoughts at the link below, as a first pass based on some recent > experiences. > > http://blog.technologyofcontent.com/2009/08/the-resource-oriented-enterprise/ > > thanks > > Justin > Tracking SOA along a technical pathway gleans an interesting difference. The business advantage of ROA can be likened to other information systems that thrive due to transparency. SOA services should be used to model service-to-service interactions (read: opaque services interacting). For example, ship this package for me to her. The package only needs to be inspect-able up to the point that the service is responsible. In this example, the shipper needs only provide the shipping service transparency into the weight, addressing, legality of the contents (stretch case area), yada yada. Within the services that spawned SOA, most interactions were one-way messages. The SLA is simple. The service provider must acknowledge (or non-acknowledge) receipt (ownership of delivery). The service provider should also provide a means for the shipper to receive (or check up on (poll)) the final acknowledgement (or any non-acknowledgement along the path). Every service within the chain follows the SLA (or the calling service compensates to ensure they meet their SLA). The shipping example can provide good insight into where ROA would have provided the optimal transparency from the start. (And conversely where SOA opaqueness leads to much re-negotiation.) After the shipper ships the package, they get antsy, they want to track status all along the shipment path. A reply of "That's not our SLA!" is not very service-oriented (albeit technically correct). 
Within SOA, the SLA is re-negotiated and all services within the chain go through this same re-negotiation. The services introduce storing of acknowledgments, and a storage invalidation scheme is introduced (wouldn't want to keep acknowledgments that no one will ask for). As a sidenote (from the SOA perspective), each service models the acknowledgment for transmission up chain(s) as well as down chain(s) and for storage. The storing and representation of acknowledgments pretty well models how the shipping scenario worked in a world without computers. That is how information flowed. And it is also very resource oriented. It is how a ROA service would be designed from the start. We must think both in terms of resources and services. We are here to fulfill our clients' expectations (read: get paid). And we are here to be well served. ROA services model requester -> resource action(s), including potentially triggered actions. The modeling of actions within ROA is HATEOAS. Through a reduction of verbs and a plethora of resources, the modeling of actions is transparent to consumers of the service as well as to the implementers of the service. Contract negotiation, being a necessary evil, cannot be avoided. ROA services are generally modeled and can be propped up (or mocked) quickly. By focusing on the resources, risk can be reduced by constructing the "edge case" resources. This can be done in SOA, but most SOA developers do not understand their tools to that level, or do not understand that in the end they are constructing a resource.
Noah, On Sep 11, 2009, at 11:16 PM, noahsingleton wrote: > My loose analysis of the issue is that most enterprise developers > are dealing with new types that flow directly from their code. Tools > bang out a schema that looks like structured data. The class/record > fidelity is very close. Transforming from angle brackets to code, > using a framework, is comfortable for a rank and file enterprise > developer. What happens after this is where trouble begins. A small > change to the class results in a mismatch and the need to rev the > version of the XML-Schema to accommodate the change, hence the > question on this mailing list. > > Someone made the assumption (correctly, IMO) that media types will > not rev all that often in the wild because it limits adoption. > image/jpeg, text/html, application/atompub+svc are all nice, but life is > different inside the enterprise. Often there are efforts to > canonicalise the "core" business objects and this helps, but as it > was pointed out earlier, these rev as business definitions change… > and they're typically built on XML-Schema so they're just bigger > types. The vicious reality of versioning is typically delayed, but > not for long enough. > > Extensions in XML-Schema provide some relief, but these black holes > in a schema definition force the developer into an unnatural > position of having to query a document for a value instead of using > offsets (either numeric or via "getters"). If you throw XML-Schema > out, it forces engineers to query the document with the assumption > that you're only interested in what you know, which I believe is the > beginning steps toward HATEOAS. The catch is that by not using > XML-Schema you'll get laughed out of a design meeting in any enterprise. > > So where is the middle ground here? > > One idea I've been kicking around is to look at XHTML and how the > XML-Schema works for creating a forward extensible media-format. 
I > can't point to successful adoption in the enterprise from my > experience, but it works (granted not perfectly) for the web. It > passes the sniff test from an enterprise point of view because the > safety of XML-Schema is there, but the structure of the resulting > document is very loose, forcing the developer to query the document. > Querying can be done via XQuery or DOM navigation; it's still XML, > so not completely foreign. > > In thinking about this, application/xhtml+xml becomes the > media-type, but it begs the question of how to find the "entity" you're searching for in this > bag of angle brackets. I see the question gets pushed down into the document, > answered while looking for rel tags. HTML5 has a discussion of proposed rel types > and it seems open, so enterprises can go nuts and add their own > (versioned) type. I don't know if this is a better place to allow > type proliferation, but it keeps the discussion away from the > Content-Type http header. The content type header should express the complete payload semantics, otherwise understanding the payload is dependent on out of band information which creates unnecessary coupling and limits the visibility (e.g. for intermediaries). Jan > > -Noah > > --- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...> > wrote: >> >> I'm looking for pointers to best practices to versioning >> representations and support for client negotiation of specific >> versions. To provide different representations of the same resource, >> the media type alone is sufficient to drive the content type >> negotiation. For example, I might provide >> >> <link type="application/atom+xml"... >> <link type="application/rss+xml"... >> >> as alternate representations of the same resource. The client could >> then be expected to pick a representation that it knows how to >> process >> and GET it. >> >> But suppose we have a different version of the *same* media type >> "myformat" - v1.0 and v2.0? 
To complicate matters, let's suppose >> that >> [due to arrogant, insensitive developers:)] v2.0 is not backwards >> compatible with v1.0. Assuming that the service is capable of >> serving >> representations in both v1.0 and v2.0, the question becomes how might >> the client negotiate one version over the other for the *same* media >> type? >> >> I've attempted to think through the following: >> >> 1) (I can assume XML) XML versioning alone won't do because there's >> no >> way to indicate in the link itself that it's one version of the >> schema >> over the other. So even if the client retrieved a v2.0 >> representation >> and stopped processing it after seeing an unfamiliar namespace (for >> example), it has no way to subsequently request the older version. >> >> 2) My initial response was to simply add the versioning information >> to >> the content-type itself (e.g. application/myformat.v20 and >> application/myformat.v10). This makes negotiation and extensibility >> clean and elegant, but causes me two concerns: the >> "explosion-of-media-type" concern and the >> "nobody-else-seems-to-be-doing-it-that-way"(based on current IANA) >> concern. >> >> 3) The next thing that comes to mind is something like the "level" >> accept-extension exampled in rfc2616 (e.g. text/html;level=1). In my >> scenario, would be something like: application/myformat;version=2.0 I >> suppose. >> >> I'm aware, btw, of the value of re-using existing media types, this >> may be an edge case for some but I'd like to address an unfortunate >> reality inside some fast moving enterprises though. >> >> Thoughts/pointers appreciated... >> --tim >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
Bill, On Sep 11, 2009, at 1:55 PM, Bill Burke wrote: > > > Jan Algermissen wrote: >> There isn't much value in parameters as part of a Content-Type header >> anyhow, because >> you just cannot predict if there is an intermediary that strips the >> params off. > > Is this theory or practice? I was told by a network admin that such behaviour could not be ruled out, but I lack a definitive source. > Considering that charset is an important parameter for many media > types, this would be a huge bug in a proxy cache. > I'd assume that the charset parameter is somewhat special since it has been in the original RFC from the beginning. What is unclear is what happens when you roll your own parameters for your own media types. What I did not find is a normative description of the intended intermediary behaviour regarding media type parameters in the Content-Type header. Does anyone know where/if that is defined? Jan > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
Jan Algermissen wrote: > > The content type header should express the complete payload semantics, > otherwise understanding the payload is dependent on out of band > information which creates unnecessary coupling and limits the > visibility (e.g. for intermediaries). > If you are contrasting a *custom* content-type parameter with a custom version header, I don't agree there is more coupling or less visibility with the latter. In fact, intermediary mechanisms would seem better off if the version is a separate header in its own right; e.g. the vary mechanism (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44) - Mike
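Mike's point about the Vary mechanism can be made concrete with a hypothetical exchange. Only Accept, Content-Type, and Vary are standard headers here; X-Format-Version is an invented custom version header of the kind he describes:

```http
GET /orders/42 HTTP/1.1
Accept: application/myformat
X-Format-Version: 1.0

HTTP/1.1 200 OK
Content-Type: application/myformat
Vary: Accept, X-Format-Version
```

Listing the custom header in Vary tells caches that responses differ by version, which is the intermediary benefit a separate header buys.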
What do you mean by complete payload semantics? In the case of xhtml, the semantics of the document are that you scan the document, build context by reading or looking for a particular token (in my example this would be a rel tag if the agent is autonomous) and follow a link. How an agent searches a document is unique to that agent. The same goes for application/xml; it's just that folks want to have it bind to a schema so the tools know how to consume the document in its entirety. This is brittle since the serialization techniques rely on a very concise definition of the payload. Anything out of the ordinary causes an exception or error. -Noah On Mon, Sep 14, 2009 at 5:19 AM, Mike Kelly <mike@...> wrote: > Jan Algermissen wrote: > >> >> The content type header should express the complete payload semantics, >> otherwise understanding the payload is dependent on out of band >> information which creates unnecessary coupling and limits the visibility >> (e.g. for intermediaries). >> >> > > If you are contrasting a *custom* content-type parameter with a custom > version header, I don't agree there is more coupling or less visibility with > the latter. > > In fact, intermediary mechanisms would seem better off if the version is a > separate header in its own right; e.g. the vary mechanism ( > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44) > > - Mike >
The same goes for application/xml; it's just that folks want to have it bind to a schema so the tools know how to consume the document in its entirety. This is brittle since the serialization techniques rely on a very concise definition of the payload. Anything out of the ordinary causes an exception or error. Amen. Seb
On Mon, Sep 14, 2009 at 10:26 AM, Noah Campbell <noahcampbell@...> wrote: > The same goes for application/xml, it just folks want to have it > bind to a schema so the tools know who to consume the document in its > entirety. This is brittle since the serialization techniques rely > on a very concise definition of the payload. Anything out of the > ordinary causes an exception or error. In my view the problem is not that `application/xml` deserialization is often rather unforgiving. (That is an issue, but it's just an implementation detail.) The real problem is that media types like `application/xml` do not describe the semantics at all. When a request claims to accept `application/xml`, what does that really mean? RSS? Maybe Atom? Maybe BPML? Or perhaps some proprietary inventory xml format? I think what Jan Algermissen was pointing out is that the client should provide a sufficiently precise media type in its Content-Type and Accept header fields that the server, and all the intermediaries that care, know what it really needs to function correctly. XHTML meets this standard not because it has a well defined syntax but because a great deal of work has gone into describing the semantics of what its grammar means. However, using XHTML as a container for application-specific data while not utilizing its core semantics seems to miss the mark. If the server could reasonably produce a response for the content type and acceptable media types of a request that would cause the client to not function, then the media type is not specific enough. Using XHTML as a container format puts you in exactly that position. -- Peter Williams http://barelyenough.org
On Fri, Sep 11, 2009 at 1:14 PM, Sebastien Lambla <seb@...> wrote:
>> Sorry Seb, can you clarify what you mean by 'extensible serialization'
>> and how it solves the problem? Is this idea explained somewhere in
>> some detail? I think of serialization as the "conversion of data to
>> bits for transmission" and I'm unfortunately not making the
>> connection.
>
> Object -> xml is what I meant by serialization.
>
>> I'm using XML, an extensible format, but changes still require a new
>> schema - a new version - I'm struggling to see how 'extensibility'
>> obviates my essential problem which is, 'client is programmed based on
>> knowledge of a particular representation(schema), if the schema
>> changes in an incompatible way, how does the client negotiate for the
>> representation it understands'?
>
> xsd:Any + xsd:AnyAttribute + xmlns.
>
>
> The attachment to a strict schema is exactly what introduces the need to
> version. Ad-hoc independent additions to container formats targeted to your
> needs means supporting multiple versions is easy.
It seems to me that strict schemas don't "introduce the need to
version" - incompatible data changes driven by business requirements
do - I think one needs to 'version' regardless of whether the main
container format is schema-backed or not. If I'm understanding your
approach - co-mingle different version-representations in the same
"document" and put them in their own namespace - you're still
"versioning", you're just relegating that duty to a namespace?
Something like,
<document>
<author xmlns="http://example.org/doc/v1.0">Thomas Jefferson</author>
<authors xmlns="http://example.org/doc/v2.0">
<author>Thomas Jefferson</author>
</authors>
</document>
Is that the essence of what you propose? I reckon the downsides are
inefficiency and complexity (depending upon how many revs you needed
to support at any one time).
Thanks,
--tim
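The namespace-per-version approach in the example above can be sketched with the standard library: a client programmed against the v1.0 namespace reads only the elements it understands and skips the v2.0 section. The namespace URIs are the illustrative ones from the example:

```python
# Sketch: a v1-only client reading a mixed-namespace document.
# It picks out elements qualified with the v1.0 namespace and
# silently ignores everything in the v2.0 namespace.
import xml.etree.ElementTree as ET

V1 = "http://example.org/doc/v1.0"

doc = ET.fromstring("""\
<document>
  <author xmlns="http://example.org/doc/v1.0">Thomas Jefferson</author>
  <authors xmlns="http://example.org/doc/v2.0">
    <author>Thomas Jefferson</author>
  </authors>
</document>""")

# iter() with a fully qualified tag matches only the v1.0 element; the
# <author> nested under the v2.0 <authors> is in a different namespace.
v1_authors = [e.text for e in doc.iter("{%s}author" % V1)]
```

This illustrates the tradeoff Tim raises: both versions travel in one payload, and the client's namespace choice is what selects the version.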
> If I'm understanding your > approach - co-mingle different version-representations in the same > "document" and put them in their own namespace - you're still > "versioning", you're just relegating that duty to a namespace? > Something like, > > <document> > <author xmlns="http://example.org/doc/v1.0">Thomas Jefferson</author> > <authors xmlns="http://example.org/doc/v2.0"> > <author>Thomas Jefferson</author> > </authors> > </document> Well, this example is flawed by definition. If you don't start enforcing serialization rules about the number of elements that can appear, you would just add a second <author> tag. But let's assume for a moment what this specific example would lead us to. Client understands the v1 extension, and is released at the same time as a server understanding v1. Everything is nice and rosy. Now business requirements suddenly change, and you need to support multiple authors on a document. You update the server to v2 to look for multiple authors. The server can now emit the following: <document> <author xmlns="http://example.org/doc/v1.0">Thomas Jefferson</author> <author xmlns="http://example.org/doc/v1.0">Benjamin Franklin</author> </document> Now your v2 client can happily send and display multiple authors, and your v2 server can certainly understand those. What do you do with your v1 client? In an example where you create a new schema altogether, you already admit that clients in version 1 will by definition continue receiving only one document author. Now let's assume that you have, when documenting your media type, stated that the document author shall be represented by the first <author> tag found in the document, and nothing else. The v1 client receives the v2 media type, sees the first occurrence of the tag, modifies it, and sends it back to the server. You can either recover from the assumption the v1 client made, that there is only one author, or you can't. If you can't, supporting two versions for media types won't help you. 
If you can, either solution will work equally well. One of them will be long running, the other one will be short lived. I think those goals require a bit more care than simply slapping an object into an xml serializer and expecting the world to suddenly fall into place, but evolving a media type in such a way ensures that you don't end up with 20 versions of the same media type. This will help your media type live long enough to be useful beyond the one scenario it was designed to cover, fulfilling the design for serendipity and economy of scale that I see as the benefits of a ReST architecture. In other words, if the view of the world of client v1 is incompatible with what server v2 can do, creating a new media type is useless, because you have no recovery model and won't be able to still support both versions. If they are compatible, decentralized extensibility can solve the problem without imposing a new schema all the time. Seb
On Mon, Sep 14, 2009 at 6:19 PM, Sebastien Lambla <seb@...> wrote: > > > V1 client receives v2 media type, see first occurrence of the tag, modifies > it and sends it back to the server. I think the idea of versioning media types is that via conneg the V1 client would never receive a V2 media type, therefore there would never be any confusion as to the client's intent. Darrel
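Darrel's point can be sketched server-side. Nothing here is a real API; the variant names follow Tim's earlier examples, and q-values are ignored for brevity:

```python
# Hypothetical conneg sketch: a v1-only client that sends an Accept
# header naming only the v1 variant can never be handed a v2 body.

AVAILABLE = [
    "application/myformat;version=2.0",  # server preference order, newest first
    "application/myformat;version=1.0",
]

def negotiate(accept_header):
    """Return the first available variant the client accepts, else None."""
    accepted = {a.strip().lower() for a in accept_header.split(",")}
    for variant in AVAILABLE:
        if variant.lower() in accepted or "*/*" in accepted:
            return variant
    return None  # would map to 406 Not Acceptable
```

A `None` result is the case where the client and server share no version at all, which is where a 406 (or a fallback representation) would come in.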
That's the whole point. If the scenario is, as Tim explained earlier, that the data change is indeed breaking (because driven by changing requirements), that won't help in any way. But if you are going to support multiple clients, you may as well make decisions early in your media type design to avoid multiplication of formats, by ensuring extensibility, error recovery, and non-destructive updates, and you should be in a much better place than having two media types, whether you support them at the same time or not. > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Darrel Miller > Sent: 15 September 2009 00:02 > To: Rest List > Subject: Re: [rest-discuss] Media Type Version Negotiation > > On Mon, Sep 14, 2009 at 6:19 PM, Sebastien Lambla <seb@...> > wrote: > > > > > > V1 client receives v2 media type, see first occurrence of the tag, > modifies > > it and sends it back to the server. > > I think the idea of versioning media types is that via conneg the V1 > client would never receive a V2 media type, therefore there would > never be any confusion as to the client's intent. > > Darrel > > > ------------------------------------ > > Yahoo! Groups Links > > >
Consider what browser agents do today. They will get their html and then query the document to see what "mode" they need to process. This is typically based on the doctype in html; see https://developer.mozilla.org/en/Mozilla's_DOCTYPE_sniffing for an example of one browser. HTML is built on the principle of, if I don't understand it, I'll ignore it. Browsers also try their best to render the intent. Anyone involved in those projects understands how much effort is required to overcome junk input. Making REST services more accepting requires a change in development style. Application/xml is descriptive enough for opaque xml blobs, but I would expect HTTP 415 more often than not. For this, creating media-types makes sense (eschewing version) and I would create a media-type, such as application/vnd.example.profile or application/profile+xml, and rely on the content to indicate the version. A DOCTYPE or a namespace could serve the role for detecting a version, requiring an inspection of the document before processing it. What does this look like in practice? Let's start by examining the typical approach using XSD. Assume that you have an xml document; for example, a list of customers. In XML-Schema, you would likely have a ns0:CustomersType that contains 0..many ns0:CustomerType elements. Traditionally, this is mapped to a collection of type customer (List<Customer> for those familiar with Java) and marshaled up to a handler. Abstraction is good(tm), the developer says, I won't ever be bothered by invalid input...sweet! But not sweet once the app is deployed. Someone revs the schema, perhaps simply changing the namespace, so ns1:CustomerType and ns0:CustomerType are no longer equivalent even though their shape is exactly the same! This means an entire rev of the application is required to accommodate something as trivial as a change in namespace. 
Let's say the developer threw out the marshaling framework and worked directly at the request/response level where they are able to inspect the byte stream. Now they could take the request and stuff it into an XSD validating parser, but this won't buy them anything beyond the recently discarded framework. Instead, using xpath, a developer finds all the CustomerTypes (e.g. //Customer) and processes each element, regardless of the location in the document. The code is not as short as when the documents are marshaled into an object, but it's much more accommodating. It also avoids the marshaling overhead when only a handful of values are needed. But wait, doesn't this boil into a big ball of mud? Perhaps, but only if you let it. It could lead to a huge if/then/else mess, but there are plenty of ways to avoid such branching. Better yet, the service can send a redirect (3xx?) to another service that can handle an unknown or older type. In short, I would recommend that the handler make its best effort to accommodate the input, trying to not make any assumptions about the structure of the input. This means ditching marshaling stacks and handling the bytes directly, querying the document to find matches based on intent, dumping namespaces, ignoring case, planning for parent/child relationships being more than one generation apart, and, when in doubt, finding a meaningful response code in HTTP to signal to the user what went wrong. -Noah PS. I tried to tighten up my grammar. I had a head cold and an itch to push send. Sorry for the previous mistakes. On Mon, Sep 14, 2009 at 11:40 AM, Peter Williams <pezra@...> wrote: > On Mon, Sep 14, 2009 at 10:26 AM, Noah Campbell <noahcampbell@...> > wrote: > > > The same goes for application/xml, it just folks want to have it > > bind to a schema so the tools know who to consume the document in its > > entirety. This is brittle since the serialization techniques rely > > on a very concise definition of the payload. 
Anything out of the > > ordinary causes an exception or error. > > In my view the problem is not that `application/xml` deserialization > is often rather unforgiving. (That is an issue, but it's just an > implementation detail.) The real problem is that media types like > `application/xml` do not describe the semantics at all. When a request > claims to accept `application/xml` what does that really mean? RSS? > Maybe Atom? Maybe BPML? Or perhaps some proprietary inventory xml > format? > > I think what Jan Algermissen was pointing out is that the client should > provide a sufficiently precise media type in its content type and > accept header fields that the server, and all the intermediates that > care, know what it really needs to function correctly. > > XHTML meets this standard not because it has a well defined syntax but > because a great deal of work has gone into describing the semantics of > what its grammar means. However, using XHTML as a container for > application specific data while not utilizing its core semantics seems > to miss the mark. If the server could reasonably produce a response > for the content type and acceptable media types of a request that > would cause the client to not function then the media type is not > specific enough. Using XHTML as a container format puts you in > exactly that position. > > -- > Peter Williams > http://barelyenough.org >
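Noah's "query by intent" style, finding elements regardless of namespace or position instead of binding the whole document to a generated schema, could be sketched with the standard library like this. The Customer elements and namespace URI are illustrative, not from any real schema:

```python
# Sketch of lenient, intent-based querying: match every Customer element
# by its local name, ignoring namespaces and document position, rather
# than failing when a generated binding no longer lines up.
import xml.etree.ElementTree as ET

def find_by_local_name(root, name):
    """Yield elements whose local tag name matches, ignoring any namespace."""
    for elem in root.iter():
        if isinstance(elem.tag, str) and elem.tag.rsplit("}", 1)[-1] == name:
            yield elem

doc = ET.fromstring(
    '<Customers xmlns="urn:example:ns1">'
    '<Customer><name>Ada</name></Customer>'
    '<Customer><name>Bob</name></Customer>'
    '</Customers>')

names = [next(find_by_local_name(c, "name")).text
         for c in find_by_local_name(doc, "Customer")]
```

Because only local names are matched, a rev that merely changes the namespace (the scenario Noah describes) would not break this reader, which is both its appeal and, as Tim objects below, its risk.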
On Wed, Sep 16, 2009 at 1:32 AM, Noah Campbell <noahcampbell@...> wrote: > Consider what browser agents do today? They will get their html and > then query the document to see what "mode" they need to process. This > is typically based on the doctype in html; see > https://developer.mozilla.org/en/Mozilla's_DOCTYPE_sniffing for an > example of one browser. > > HTML is built on the principle of, if I don't understand it, I'll > ignore it. Browser also try their best to render the intent. Anyone > involved in those projects understand how much effort is required to > overcome junk input. Making REST services more accepting requires a > change in development style. Application/xml is descriptive enough > for opaque xml blobs, but I would expect HTTP 415 more often than not. > For this, creating media-types makes sense (eschewing version) and I > would create a media-type, such as application/vnd.example.profile or > application/profile+xml, and rely on the content to indicate the > version. A DOCTYPE or a namespace could serve the role for detecting > a version, requiring an inspection of the document before processing > it. It seems to me that detecting a version is fairly easy, the real issue is allowing the client to "negotiate" for a specific version when the origin server is able to provide more than one. > What does this look like in practice? Lets start by examining the > typical approach using XSD. Assume that you have an xml document; for > example, a list of customers. In XML-Schema, you would likely have a > ns0:CustomersType that contains 0..many ns0:CustomerType elements. > Traditionally, this is mapped to a collection of type customer > (List<Customer> for those familiar with Java) and marshaled up to a > handler. Abstraction is good(tm) the developer says, I won't ever be > bothered by invalid input...sweet! But not sweet once the app is > deployed. 
Someone revs the schema, perhaps simply changing the > namespace, so ns1:CustomerType and ns0:CustomerType are no longer > equivalent even though there shape is exactly the same! This means an > entire rev of the application is required to accommodate something as > trivial as a change in namespace. "simply changing the namespace" seems to me a bigger deal than you are suggesting. A namespace tells the client the specific meaning behind CustomerType, if the server changes to a different meaning of CustomerType it seems reasonable to me that it would break things. > Lets say the developer threw out the marshaling framework and worked > directly at the request/response level where they are able to inspect > the byte stream. Now they could take the request and stuff it into an > XSD validating parser but this won't buy them anything beyond the > recently discarded framework. Instead, using xpath, a developer finds > all the CustomerTypes (e.g. //Customer) and processes each element, > regardless of the location in the document. The code is not as short > when the documents are marshaled into an object, but it's much more > accommodating. It also avoids the marshaling overhead when only a > handful of values are needed. But it's XML, so "location in the document" is important. The same element in a different location could/would have different purpose so you can't necessarily process them the same. > But wait, doesn't this boil into a big ball of mud? Perhaps, but only > if you let it. It could lead to a huge if/then/else mess, but there > are plenty of ways to avoid such branching. Better yet, the service > can send a redirect (3xx?) to another service who can handle an > unknown or older type. The key question becomes how does the client negotiate for that older version? > In short, I would recommend that the handler make its best effort to > accommodate the input, trying to not make any assumptions about the > structure of the input. 
This means ditching marshaling stacks and > handle the bytes directly, querying the document to find matches based > on intent, dump namespaces, ignore case, plan for parent/child > relationships being more then one generation apart, and when in doubt, > find a meaningful response code in HTTP to signal the user what went > wrong. Your recommendation seems to me to break a lot without apparent improvements to the situation. With your suggestions you're kinda defining loose rules for a whole new format that's inconsistent with XML. Sebastien's suggestion - as uncomfortable as it initially makes me - is a tradeoff that really does seem to solve a lot. Thanks, --tim > On Mon, Sep 14, 2009 at 11:40 AM, Peter Williams <pezra@barelyenough.org> wrote: >> >> On Mon, Sep 14, 2009 at 10:26 AM, Noah Campbell <noahcampbell@...> wrote: >> >> > The same goes for application/xml, it just folks want to have it >> > bind to a schema so the tools know who to consume the document in its >> > entirety. This is brittle since the serialization techniques rely >> > on a very concise definition of the payload. Anything out of the >> > ordinary causes an exception or error. >> >> In my view the problem is not that `application/xml` deserialization >> is often rather unforgiving. (That is an issue, but it's just an >> implementation detail.) The real problem is that media types like >> `application/xml` do describe the semantics at all. When a request >> claims to accept `application/xml` what does that really mean? RSS? >> Maybe Atom? Maybe BPML? Or perhaps some proprietary inventory xml >> format? >> >> I think Jan Algermissen was pointing out is that the client should >> provide a sufficiently precise media type in it's content type and >> accept header fields that the server, and all the intermediates that >> care, know what it really needs to function correctly. 
>> >> XHTML meets this standard not because it has a well defined syntax but >> because a great deal of work has gone into describing the semantics of >> what its grammar means. However, using XHTML as a container for >> application specific data while not utilizing its core semantics seems >> to miss the mark. If the server could reasonably produce a response >> for the content type and acceptable media types of a request that >> would cause the client to not function then the media type is not >> specific enough. Using XHTML as a container format puts you in >> exactly that position. >> >> -- >> Peter Williamsthat >> http://barelyenough.org > > > >
I've been looking around for popular mechanisms for encrypting message bodies. multipart/encrypted and multipart/signed seemed like general, widely used (or not?) formats for encryption/signing. My question is, wasn't Content-Encoding designed for this very thing? Wouldn't it make more sense to have: POST /somewhere Content-Type: application/xml Content-Encoding: encrypted; key=... <data> -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Uh?!
I'm not seeing the value proposition for REST-*. Anyone want to help me out? mca http://amundsen.com/blog/ On Wed, Sep 16, 2009 at 10:34, Sebastien Lambla <seb@...> wrote: > > > Uh?! > > >
Josh:
http://www.jboss.org/reststar/
mca
http://amundsen.com/blog/
On Wed, Sep 16, 2009 at 11:04, Josh Sled <jsled@...> wrote:
> mike amundsen <mamund@...> writes:
>> I'm not seeing the value proposition for REST-*.
>>
>> Anyone want to help me out?
>
> For those of us not following the same sites/lists you are, what are you
> referring to? Google is unhelpful for such a term.
>
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
On Wed, Sep 16, 2009 at 10:34 AM, Sebastien Lambla <seb@...> wrote: > > > Uh?! Or, Ugh!
Yours is more guttural, mine was more vocal. :) > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Tim Williams > Sent: 16 September 2009 16:30 > To: Sebastien Lambla > Cc: Rest List > Subject: Re: [rest-discuss] REST-* > > On Wed, Sep 16, 2009 at 10:34 AM, Sebastien Lambla <seb@...> > wrote: > > > > > > Uh?! > > Or, Ugh! > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Wed, Sep 16, 2009 at 5:30 PM, Tim Williams <williamstw@...> wrote: > On Wed, Sep 16, 2009 at 10:34 AM, Sebastien Lambla <seb@...> wrote: > > > > Uh?! > > Or, Ugh! Right. I've been on their case <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000138.html> about the governance and have had some useful <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000145.html> and some less <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000142.html> useful <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000147.html> responses from Red Hat about it. I find this kind of wording <http://www.jboss.org/reststar/community/governance.html> particularly disconcerting, and straight out of the WS-* era: > Red Hat, as the founder of REST-*, gets a permanent seat on the board. All > other board members must be elected by the overall membership once a year. It's unfortunate that this article <http://www.infoworld.com/articles/hn/xml/02/03/12/020312hnwsi.html> has vanished as it gave great coverage of the governance shenanigans that plagued WS-*, but basically until such time as this looks like something other than a lone vendor's attempt to own some standards I'm not giving it the time of day (despite the concept itself being sound). Sam
> I'm not giving it the time of day (despite the concept itself being
> sound).

Maybe the concept is sound, but calling it REST is not. The proposed specs describe an RPC-like system. What they are not describing is an architecture whereby representations of application state are transferred, but rather an architecture made up of procedures invoked upon specified URIs.

"If you need to bend the rules of REST to create a simpler design, then that's the path that should be taken."

No. If you're calling an architecture RESTful, then the only path you can take is to design a Uniform Interface. Simpler design choices abound, but nowhere does REST state simplified application design as a goal. There are no shortcuts to gaining the benefits of RESTful architecture. Eschewing the Uniform Interface for a simpler RPC design won't yield the benefits of REST. Therefore, calling the result RESTful is misleading at best, particularly if you call it pragmatic -- not following the REST design pattern can't be claimed to yield the same results as following it. Pragmatic REST means following REST as closely as technology allows, not resorting to RPC instead of a Uniform Interface.

-Eric
If this is just a place to define common patterns, then that makes a bit more sense. If, however, this follows the WS-* scheme, then I must ask what we didn't learn about WS-* that makes this seem like a good idea. Ryan Riley ryan.riley@... http://panesofglass.org/ http://wizardsofsmart.net/
Agreed. What is the reason for viewing dynamic interfaces as bad/scary/inferior? Ryan Riley ryan.riley@... http://panesofglass.org/ http://wizardsofsmart.net/
Eric J. Bowman wrote:
>> I'm not giving it the time of day (despite the concept itself being
>> sound).
>
> Maybe the concept is sound, but calling it REST is not. The proposed
> specs are describing an RPC-like system. What they are not describing
> is an architecture whereby representations of application state are
> transferred, but rather an architecture made up of procedures invoked
> upon specified URIs.

YES! The proposed specs are kind of RPCish. (They are at least conforming to the uniform interface.) The point is to jumpstart things.

To be fair... the two transaction specifications (compensation and 2pc) were written 8 years ago. The messaging one was a simple exercise I did to create a facade over an existing messaging implementation (JMS).

I think there are a huge amount of improvements we can make here. For instance, instead of defining URL patterns, URLs can be made more opaque and instead the spec defines a set of resources, link relationships, and interactions with those relationships.

For messaging, JMS is a *VERY* session-oriented model. The key piece to extract from this exercise is that HTTP messages are being exchanged rather than an envelope format. To improve things, Atom can be borrowed from (at least the interactions, not the envelope format). restms.org has also done some interesting stuff. I and others doing messaging at Red Hat have a few other ideas as well.

For security, I think many things are well defined (authentication comes to mind). I think OAuth has huge potential to manage authentication when you are running Queue, Topic, Transaction Management, or Workflow as a Service (think Amazon SQS). multipart/encrypted and signed could be used for multipoint messaging. The question becomes: how do you integrate all these existing security mechanisms with the services REST-*.org wants to define?

Finally, a few years ago I started off very skeptical against REST.
A friend and former mentor of mine, Steve Vinoski, took the time over weeks to convince me otherwise. Through the process I kept an open mind and became convinced REST was a good direction for distributed computing. I think I've kept an open mind while discussing things on this list. I hope many of you can keep an open mind with what we're trying to do.

Bill

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Sam Johnston wrote:
> On Wed, Sep 16, 2009 at 5:30 PM, Tim Williams <williamstw@...> wrote:
>
> On Wed, Sep 16, 2009 at 10:34 AM, Sebastien Lambla <seb@...> wrote:
> >
> > Uh?!
>
> Or, Ugh!
>
> Right. I've been on their case
> <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000138.html>
> about the governance and have had some useful
> <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000145.html>
> and some less
> <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000142.html>
> useful
> <https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000147.html>
> responses from Red Hat about it. I find this kind of wording
> <http://www.jboss.org/reststar/community/governance.html> particularly
> disconcerting, and straight out of the WS-* era:
>
> Red Hat, as the founder of REST-*, gets a permanent seat on the
> board. All other board members must be elected by the overall
> membership once a year.

If this is all you are concerned about, then I'm pretty happy. If you read the website you'll see that we are very open to changing any part of the governance model.

> It's unfortunate that this article
> <http://www.infoworld.com/articles/hn/xml/02/03/12/020312hnwsi.html> has
> vanished as it gave great coverage of the governance shenanigans that
> plagued WS-*, but basically until such time as this looks like something
> other than a lone vendor's attempt to own some standards I'm not giving
> it the time of day (despite the concept itself being sound).

The organization is going to be run as an open source project. IP will be licensed under ASL 2.0. Anybody can participate in discussions. As I said on my blog, Red Hat (and JBoss) has a pretty good history of running open communities and projects. When they've become popular we've brought them to a standardization effort so that all can share in the IP.
With this effort we're going to be both the project and the standards body. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill Burke wrote: > Eric J. Bowman wrote: > >> >> >> > >> > I'm not giving it the time of day (despite the concept itself being >> > sound). >> > >> >> Maybe the concept is sound, but calling it REST is not. The proposed >> specs are describing an RPC-like system. What they are not describing >> is an architecture whereby representations of application state are >> transferred, but rather an architecture made up of procedures invoked >> upon specified URIs. >> >> > > YES! The proposed specs are kind of RPCish. (They are at least > conforming to the uniform interface). The point is to jumpstart things. > > To be fair.... > > The two transaction specifications (compensation and 2pc) were written 8 > years ago. The messaging one was a simple exercise I did to create a > facade over an existing messaging implementation (JMS). > > I think there are a huge amount of improvements we can make here. For > instance instead of defining URL patterns, URLs can be made more opaque > and instead the spec defines a set of resources, link relationships, and > interactions with those relationships. > > For messaging JMS is a *VERY* session oriented model. The key piece to > extract from this exercise is that HTTP messages are being exchanged > rather than an envelope format. To improve things, Atom can be borrowed > from (at least the interactions not the envelope format). restms.org > has also done some interesting stuff. I and others doing messaging at > Red Hat have a few other ideas as well. > > For security, I think many things are well defined (authentication comes > to mind). I think OAuth has huge potential to manage authentication > when you are running a Queue, Topic, Transaction Management, Workflow > when you're dealing with Service as a Service (SaaS) (think Amazon SQS). > multipart/encrypted and signed could be used for multipoint messaging. 
> The question becomes, how do you integrate all these existing security > mechanisms with the services REST-*.org want to define? > > > Finally, a few years ago I started off very skeptical against REST. A > friend and former mentor of mine, Steve Vinoski took the time over weeks > to convince me otherwise. Through the process I kept an open mind and > became convinced REST was good direction for distributed computing. I > think I've kept an open mind while discussing things on this list. > > I hope many of you can keep an open mind with what we're trying to do. > > Bill > So this would be a directory of useful hypermedia, and extensions for HTTP? That could be a good resource - Mike
On Sep 16, 2009, at 12:17 PM, Bill Burke wrote:
> I hope many of you can keep an open mind with what we're trying to do.

Bill, if you want people to have an open mind about what you are trying to do, then the respectful thing would be to remove REST from the name of your site.

Quite frankly, this is the single dumbest attempt at one-sided "standardization" of anti-REST architecture that I have ever seen. It even manages to one-up the previous all-time idiocy of IBM when they renamed their CORBA toolkit "Web Services" in a deliberate attempt to confuse customers into thinking they had something to do with the Web.

Distributed transactions are an architectural component of non-REST interaction. Message queues are a common integration technique for non-REST architectures. To claim that either one is a component of "Pragmatic REST" is the equivalent of putting a giant Red Dunce Hat on your head and then parading around as if it were the latest fashion statement.

The idea that the community would welcome such a pack of marketing morons as the standards-bearers of REST is simply ridiculous. Just close the stupid site down.

Sincerely,

Roy T. Fielding <http://roy.gbiv.com/>
Chief Scientist, Day Software <http://www.day.com/>
Roy T. Fielding wrote: > > Bill, if you want people to have an open mind about what you are > trying to do, then the respectful thing would be to remove REST > from the name of your site. > I respectfully suggest DIRT-* (figure out your own meaning for the acronym). The concept is WS-* without SOAP, right? If a disciplined process leads to RESTful solutions, good. But not all problems are nails for the REST hammer to solve. Suggesting that the entire WS-* stack, including the screws, can just be hammered in like nails, without first inventing a hammer-driven screw (an actual RESTful 2PC protocol) is disrespectful. -Eric
Roy T. Fielding wrote:
> On Sep 16, 2009, at 12:17 PM, Bill Burke wrote:
>> I hope many of you can keep an open mind with what we're trying to do.
>
> Bill, if you want people to have an open mind about what you are
> trying to do, then the respectful thing would be to remove REST
> from the name of your site.
>
> Quite frankly, this is the single dumbest attempt at one-sided
> "standardization" of anti-REST architecture that I have ever seen.
> It even manages to one-up the previous all-time-idiocy of IBM
> when they renamed their CORBA toolkit "Web Services" in a
> deliberate attempt to confuse customers into thinking they
> had something to do with the Web.

Huh? I brought REST to JBoss because I believed in it. I thought it could be a way to fundamentally change how our users consume our projects, products, and services.

> Distributed transactions are an architectural component of
> non-REST interaction.

Distributed 2pc transactions are not RESTful. I don't think I or anyone at Red Hat ever claimed that. But why isn't a RESTful interface for a 2PC transaction manager useful? Cannot a transaction manager itself have a RESTful interface? The application that is using the transaction manager will not be RESTful, but the interaction with the service will be. But that's just 2pc. I think there is a lot of potential for compensation models (do/undo) in large integrations.

> Message queues are a common integration
> technique for non-REST architectures.

So you think services like Amazon SQS and S3 can't have RESTful interfaces? Are not useful for application developers? Can't be consumed RESTfully?

> To claim that either one
> is a component of "Pragmatic REST" is the equivalent of putting
> a giant Red Dunce Hat on your head and then parading around as
> if it were the latest fashion statement.
>
> The idea that the community would welcome such a pack of
> marketing morons as the standards-bearers of REST is simply
> ridiculous. Just close the stupid site down.
It took me a long time to get REST taken seriously internally at JBoss. After a hard-fought battle, I just want to say thanks for the moral support. :)

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Sep 16, 2009, at 2:54 PM, Bill Burke wrote:
> 2PC transaction manager useful? Cannot a transaction manager itself
> have a RESTful interface? The application that is using the transaction
> manager will not be RESTful, but the interaction with the service
> will be.

The transaction manager can have its own interface. But it still does not make sense to expose raw transaction managers to clients when you are building loosely coupled distributed systems (whether RESTful or not). The WS-Tx specs made this mistake, and we know how well they are adopted.

When a server has broken down its use cases to low-level operations like transactions and compensations, the server designer may have over-abstracted the interface for clients. In such cases, it is better to take a step back and see why you had to think of transactions in the first place. The same is true for a number of so-called hard problems in REST, such as partial updates, batches, copying, merging, etc. Most of these problems go away or become simplified when you think in terms of application-level use cases and not low-level operations.

Sincerely,

Subbu
Bill Burke wrote:
> Eric J. Bowman wrote:
>> Maybe the concept is sound, but calling it REST is not. The proposed
>> specs are describing an RPC-like system. What they are not
>> describing is an architecture whereby representations of
>> application state are transferred, but rather an architecture made
>> up of procedures invoked upon specified URIs.
>
> YES! The proposed specs are kind of RPCish. (They are at least
> conforming to the uniform interface). The point is to jumpstart
> things.

No, they don't begin to conform to the Uniform Interface; there is no concept of 'resource' to be found. The existing specs are wholly RPCish and are useful only as background information. They provide a blueprint for what you are trying to re-implement as REST. So start with a disciplined approach -- the first step in designing a RESTful protocol is to identify your resources (not assign URIs, just figure out what the resources are) and figure out how they relate to one another.

"P-URL/commit" is a case in point. This is not a REST resource. If it were, then a GET wouldn't return 400. You can't have a REST resource that claims it doesn't exist for one request method but not another. A POST to this non-existent resource triggers a procedure -- a Remote Procedure Call if I've ever seen one -- and no representation state is being transferred, as there's no query string or message body. This is not what is meant by self-descriptive messages. The interaction between components is specific to your application, not generic, because HTTP doesn't have a COMMIT method, tunneled over POST or not.

To begin to be RESTful, the transaction would need to be triggered by a POST to P-URL containing some sort of query string or message body to process, which triggers your stored COMMIT procedure opaquely. That's a generic interaction. But that still doesn't work, because P-URL wouldn't be your resource. Even if it is, /commit etc. aren't sub-resources; they're remote procedures to be called on P-URL.

In a Uniform Interface, HTTP method calls are invoked on P-URL. The query or entity content (i.e. hypermedia) combined with the request method is the driver of application state. Changing state by invoking a POST on some action URL is certainly doable in HTML, but it isn't HEAS, as there's no representation of a resource being transferred -- the component interactions are specific to your application, i.e. "If you want to manipulate resource A, POST nothing to URL B." The generic way is to manipulate resource A directly by passing a representation of the desired application state.

> To be fair....
>
> The two transaction specifications (compensation and 2pc) were
> written 8 years ago. The messaging one was a simple exercise I did
> to create a facade over an existing messaging implementation (JMS).

Right, there's absolutely no concept of REST resources in them. The challenge is to model these component interactions as transfers of resource representations. The first step is to identify your resources and figure out how they relate to one another.

> I think there are a huge amount of improvements we can make here.
> For instance instead of defining URL patterns, URLs can be made more
> opaque and instead the spec defines a set of resources, link
> relationships, and interactions with those relationships.

Which you seem to understand. However, without having embarked on this work, you haven't proven that a RESTful end result is possible. Clean-slate time. You'll either come up with a solution based on the REST design pattern and prove Roy wrong, or you'll wind up where everyone else has, which is "Huh, Roy's right." But you're nowhere near enough to boldly claim that problem solved, since you're defining URI patterns rather than describing what your resources are to begin with.

> I hope many of you can keep an open mind with what we're trying to do.

I do have an open mind to the possibility that someone might come up with a RESTful 2PC implementation. But until you've laid the basic groundwork of describing your resources, link relationships, and component interactions, you need to keep an open mind to the possibility that it can't be done. You can't know the outcome of applying the REST discipline to a problem space ahead of time. All you have is proof that WS-* can leverage HTTP as an alternative to SOAP. I'm sure you could model some WS specs as REST, but the ones you've chosen to start with are the ones I'm almost certain you can't.

-Eric
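Eric's contrast between an action URI and a uniform interface can be sketched in code. This is a hypothetical illustration, not from any REST-* draft: the resource and transition names are invented, and the "HTTP" here is just status-code-shaped return values. The point is that the client transfers a representation of the desired state rather than POSTing nothing to a /commit procedure.

```python
# Hypothetical sketch contrasting an RPC-style action endpoint with a
# uniform-interface design where the client transfers a representation
# of the desired state. All names and transitions are invented.

class TransactionResource:
    """A transaction modeled as a resource whose state is a representation."""

    VALID_TRANSITIONS = {
        "active": {"committed", "aborted"},
        "committed": set(),
        "aborted": set(),
    }

    def put(self, representation):
        """PUT /transactions/42 -- the representation drives the state change,
        instead of POSTing nothing to an action URI like /transactions/42/commit."""
        new_status = representation.get("status")
        if new_status not in self.VALID_TRANSITIONS[self.state["status"]]:
            return 409, self.state          # Conflict: illegal state transition
        self.state = {"status": new_status}
        return 200, self.state

    def get(self):
        """GET must work on any resource that exists -- no 400 depending on method."""
        return 200, self.state

    def __init__(self):
        self.state = {"status": "active"}


tx = TransactionResource()
assert tx.get() == (200, {"status": "active"})

# The client drives the state change by transferring a representation.
code, body = tx.put({"status": "committed"})
assert (code, body) == (200, {"status": "committed"})

# An abort after commit is rejected explicitly, not silently executed.
code, body = tx.put({"status": "aborted"})
assert code == 409
```

The self-descriptive message is the representation itself; the server never exposes a "commit procedure", only the resource and its legal state transitions.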
Comment below.

On Wed, Sep 16, 2009 at 5:13 PM, Subbu Allamaraju <subbu@...> wrote:
> On Sep 16, 2009, at 2:54 PM, Bill Burke wrote:
>
> > 2PC transaction manager useful? Cannot a transaction manager itself
> > have a RESTful interface? The application that is using the transaction
> > manager will not be RESTful, but the interaction with the service
> > will be.
>
> The transaction manager can have its own interface. But it still does
> not make sense to expose raw transaction managers to clients when you
> are building loosely coupled distributed systems (whether RESTful or
> not). WS-Tx specs have done this mistake, and we know how well they
> are adopted. When a server has broken down its use cases to low-level
> operations like transactions and compensations, the server designer
> may have over-abstracted the interface for clients. In such cases, it
> is better to take a step back, and see why you had to think of
> transactions in the place. The same is true for a number of so-called
> hard problems in REST such as partial updates, batches, copying,
> merging etc. Most of these problems go away or become simplified when
> you think in terms of application-level use cases and not low-level
> operations.

I somewhat disagree with Roy here, although I know how much good it will do me to disagree with Roy about anything regarding ReST. But I do agree with Subbu, which may seem contradictory. I do think 2-phase transactions can be ReSTful, but they do need to be redefined as application use cases. (And of course they need to abandon the Atomic rules...) And I think this is done all the time, but people are still thinking database transactions instead of application transactions. For example, a request for quotation followed by an order is actually a 2-phase transaction, but anything that looks like a transaction manager is buried behind the scenes.
Subbu Allamaraju wrote:
> On Sep 16, 2009, at 2:54 PM, Bill Burke wrote:
>
> > 2PC transaction manager useful? Cannot a transaction manager itself
> > have a RESTful interface? The application that is using the transaction
> > manager will not be RESTful, but the interaction with the service
> > will be.
>
> The transaction manager can have its own interface.

Thank you. And that's all I really wanted to hear.

> But it still does
> not make sense to expose raw transaction managers to clients when you
> are building loosely coupled distributed systems (whether RESTful or
> not). WS-Tx specs have done this mistake, and we know how well they
> are adopted. When a server has broken down its use cases to low-level
> operations like transactions and compensations, the server designer
> may have over-abstracted the interface for clients. In such cases, it
> is better to take a step back, and see why you had to think of
> transactions in the place. The same is true for a number of so-called
> hard problems in REST such as partial updates, batches, copying,
> merging etc. Most of these problems go away or become simplified when
> you think in terms of application-level use cases and not low-level
> operations.

You are preaching to the choir. I wrote this back in 2007: http://bill.burkecentral.com/2007/09/18/distributed-compensation-with-rest-and-jbpm/

Specifically: "Reading this fed more fuel to my idea that compensations really belonged as part of a process model."

The "Transaction" section probably needs to change to position 2pc transactions as an anti-pattern. But this idea that everything has to be handcrafted isn't right either. The right solution is somewhere in the middle (hence middleware). This is why I like REST better than the way CORBA and WS-* have gone about describing things. Concepts like HATEOAS change the game a bit and make things more flexible. This is why REST-* is so important to me.
How do these traditional middleware services change with the introduction of RESTful Web Services? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Sep 16, 2009, at 4:04 PM, Bill Burke wrote:
> Subbu Allamaraju wrote:
>> On Sep 16, 2009, at 2:54 PM, Bill Burke wrote:
>>> 2PC transaction manager useful? Cannot a transaction manager itself
>>> have a RESTful interface? The application that is using the
>>> transaction manager will not be RESTful, but the interaction with
>>> the service will be.
>>
>> The transaction manager can have its own interface.
>
> Thank you. And thats all I really wanted to hear.

But I did not say that it is a good idea to build one. It would taste and smell the same as any other RPC/SOAP distributed transaction interface.

> But, this idea that everything has to be handcrafted isn't right
> either. The right solution is somewhere in the middle (hence
> middleware). This is why I like REST better than the way CORBA and
> WS-* has gone about describing things. Concepts like HATEOAS change
> the game a bit and makes things more flexible.

The right solution belongs somewhere behind servers, not between clients and servers. HTTP isn't middleware; it is an application protocol. The problem with the above approach, as well as that of attempts like RETRO, is that they generalize (i.e. over-abstract) the problem space and thereby leak implementation details (viz. transactions or compensation) to clients. No matter whether the interface is uniform or not, this is bad design for distributed applications. Clients "cancel orders"; they don't "compensate for a business process". If a server is forcing the clients to do the latter, it has dug itself into a hole.

Sincerely,

Subbu
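Subbu's "clients cancel orders, they don't compensate for a business process" can be made concrete with a small sketch. This is an invented illustration, not any REST-* interface: the client-facing operation is an application-level "cancel", while whatever compensating actions are needed run entirely behind the server boundary.

```python
# Hypothetical illustration: the client sees one application-level
# operation ("cancel this order"); the compensation machinery never
# appears in the interface. All names here are invented.

def cancel_order(order, compensations):
    """Client-visible operation: mark the order cancelled.

    Internally the server runs its compensating actions (release stock,
    refund payment, ...); 'compensate' is never a client-facing verb."""
    if order["status"] != "placed":
        return {"status": order["status"], "cancelled": False}
    for undo in compensations:          # server-side compensation steps
        undo(order)
    order["status"] = "cancelled"
    return {"status": "cancelled", "cancelled": True}


log = []
order = {"id": 7, "status": "placed"}
result = cancel_order(order, [lambda o: log.append("refund"),
                              lambda o: log.append("restock")])
assert result == {"status": "cancelled", "cancelled": True}
assert log == ["refund", "restock"]     # compensation ran server-side only
```

If the server instead exposed "POST compensate" to clients, it would leak exactly the implementation detail Subbu warns about.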
On Wed, Sep 16, 2009 at 6:04 PM, Bill Burke <bburke@...> wrote:
> But, this idea that everything has to be handcrafted isn't right either.
> The right solution is somewhere in the middle (hence middleware).
> This is why I like REST better than the way CORBA and WS-* has gone
> about describing things. Concepts like HATEOAS change the game a bit
> and makes things more flexible.
>
> This is why REST-* is so important to me. How do these traditional
> middleware services change with the introduction of RESTful Web Services?

This seems to me the right question. So throw out the old ways of doing things and think about what you are actually talking about. What are you doing with a 2pc? That's technical. What are you actually doing? You are trying to take several actions at the same time, right? Do they *really* need to happen at the same time? Or can interim steps occur, with a final POST/PUT to a new resource with your desired end result? That's quite possibly the same end goal with much richer semantics.

As to the idea of a message queue being only a non-REST architectural pattern, I don't quite follow. (Of course, I'm still new to most of this, so I would generally just nod and accept.) What is Atom if not an excellent medium for providing a message queue? Why not POST/PUT the intermediary resources, display their state via Atom feeds, and, once everything is published, take your final action to the final state? You now have a transaction-ish process using fully RESTful *and* meaningful semantics with full audit support.

I like the example of buying a book or books in an online bookstore. I can add books (PotentialOrderItem) to my shopping cart (PotentialOrder), decide to buy some of them (PendingOrder + PendingOrderItems), and then add additional books later (PotentialOrderItem -> PendingOrderItem) before my order is processed (ProcessedOrder + ProcessedOrderItems).
Once I submit my payment info (PaymentRequest), and that has been approved (PaymentReceived), and the warehouse acknowledges everything is ready (PendingOrderPrepared), I could then know that my order (Order) would be completed and that no further actions may be processed against it, except such things as shipping notifications, returns, etc. My types (in parentheses) are quite clear, my state transitions are clear, and I can wait to take a final action once I know everything is in that needs to be. (Sorry if that is completely trivial to all of you, but that's the kind of thing I have been tossing about inside my head of late.)

Ryan Riley
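Ryan's order lifecycle reads naturally as an explicit state machine. The type names below come from his example; the transition table is one reading of it, not a spec, and `Shipped`/`Returned` are invented follow-up states for the "shipping notifications, returns" remark.

```python
# A sketch of the order lifecycle above as explicit state transitions.
# The transition table is an interpretation of the example, not a spec.

TRANSITIONS = {
    "PotentialOrder": {"PendingOrder"},
    "PendingOrder":   {"ProcessedOrder"},
    "ProcessedOrder": {"Order"},           # after payment + warehouse ack
    "Order":          {"Shipped", "Returned"},
}

def advance(state, target):
    """Move the order to `target` only if that transition is defined."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target


s = "PotentialOrder"
for nxt in ["PendingOrder", "ProcessedOrder", "Order"]:
    s = advance(s, nxt)
assert s == "Order"

# Once the order is final, only shipping/return follow-ups are allowed.
try:
    advance(s, "PendingOrder")
except ValueError:
    pass
else:
    raise AssertionError("expected illegal transition to fail")
```

Each state would map to its own resource with its own representation, so "waiting until everything is in" is just waiting for the links that enable the next transition to appear.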
This is all great feedback and what I was hoping to hear from people.

Eric J. Bowman wrote:
>> I think there are a huge amount of improvements we can make here.
>> For instance instead of defining URL patterns, URLs can be made more
>> opaque and instead the spec defines a set of resources, link
>> relationships, and interactions with those relationships.
>
> Which you seem to understand. However, without having embarked on this
> work, you haven't proven that a RESTful end result is possible. Clean
> slate time. You'll either come up with a solution based on the REST
> design pattern, and prove Roy wrong, or you'll wind up where everyone
> else has, which is "Huh, Roy's right." But you're nowhere near enough
> to boldly claim that problem solved, since you're defining URI patterns
> rather than describing what your resources are to begin with.

FYI, I really wasn't ready to announce today. Mark mentioned it as a bullet point in his keynote. I mentioned it briefly in my JAX-RS talk. Infoworld had an article today about REST-* and presented it *VERY POORLY*. Mark felt compelled to respond, Anne Manes got in on the act, and now we've snowballed into this... Now I'm forced to play catch-up... Here's hoping I'm not voted off the island.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Ryan Riley wrote: > As to the idea of a message queue being only a non-REST architectural > pattern, I don't quite follow. (Of course, I'm still new to most of this > so I would generally just nod and accept.) What is Atom if not an > excellent medium for providing a message queue? Why not POST/PUT the > intermediary resources, display their state via Atom feeds and, once > everything is published, take your final action to the final state? You > now have a transaction-ish process using fully RESTful /and/ meaningful > semantics with full audit support. > I think Roy means queue vs. pub/sub? Atom is pub/sub. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
What has Atom got to do with message queues or pub/sub? It is a format. It does not specify any protocol. On Sep 16, 2009, at 4:57 PM, Bill Burke wrote: > > > Ryan Riley wrote: >> As to the idea of a message queue being only a non-REST >> architectural pattern, I don't quite follow. (Of course, I'm still >> new to most of this so I would generally just nod and accept.) What >> is Atom if not an excellent medium for providing a message queue? >> Why not POST/PUT the intermediary resources, display their state >> via Atom feeds and, once everything is published, take your final >> action to the final state? You now have a transaction-ish process >> using fully RESTful /and/ meaningful semantics with full audit >> support. > > I think Roy means queue vs. pub/sub? Atom is pub/sub. > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
On Wed, Sep 16, 2009 at 7:03 PM, Subbu Allamaraju <subbu@...> wrote: > What has Atom got to do with message queues or pub/sub? It is a format. It > does not specify any protocol. Quite right. I am using it here as a format to provide a list of items that may be processed. It's not a true message queue, but neither is the example a true transaction. Different paradigms often make it difficult to explain a clear transition, which was really my point. :) Ryan Riley ryan.riley@... http://panesofglass.org/ http://wizardsofsmart.net/
http://www.ietf.org/rfc/rfc5023.txt "Atom Publishing Protocol" What, is calling AtomPub just Atom bad etiquette? Subbu Allamaraju wrote: > > > What has Atom got to do with message queues or pub/sub? It is a > format. It does not specify any protocol. > > On Sep 16, 2009, at 4:57 PM, Bill Burke wrote: > > > > > > > Ryan Riley wrote: > >> As to the idea of a message queue being only a non-REST > >> architectural pattern, I don't quite follow. (Of course, I'm still > >> new to most of this so I would generally just nod and accept.) What > >> is Atom if not an excellent medium for providing a message queue? > >> Why not POST/PUT the intermediary resources, display their state > >> via Atom feeds and, once everything is published, take your final > >> action to the final state? You now have a transaction-ish process > >> using fully RESTful /and/ meaningful semantics with full audit > >> support. > > > > I think Roy means queue vs. pub/sub? Atom is pub/sub. > > > > > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com <http://bill.burkecentral.com> > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Sep 16, 2009, at 5:12 PM, Bill Burke wrote: > http://www.ietf.org/rfc/rfc5023.txt > > "Atom Publishing Protocol" > > What, is calling AtomPub just Atom bad etiquette? Not bad etiquette, but completely incorrect. Even AtomPub has nothing to do with pub/sub or messaging. It is a just a specialized application protocol. Subbu
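Ryan's feed-as-queue idea from earlier in the thread can be sketched to show why it differs from a JMS-style queue. Everything below is invented for illustration; it is not AtomPub, just the shape of the idea: an append-only collection where each consumer tracks its own position, so the server holds no per-consumer session state.

```python
# Toy sketch of "expose work items as entries in a feed" rather than a
# pop-style queue. Names and structure are invented; this is not AtomPub.

import itertools

class FeedCollection:
    """An append-only collection; consumers track their own position."""

    _ids = itertools.count(1)

    def __init__(self):
        self.entries = []

    def post(self, payload):
        """POST a new entry; it becomes visible to every consumer."""
        entry = {"id": next(self._ids), "payload": payload}
        self.entries.append(entry)
        return entry

    def get_since(self, last_seen_id):
        """GET the feed; each consumer remembers where it left off, so the
        server keeps no per-consumer state (unlike a JMS session)."""
        return [e for e in self.entries if e["id"] > last_seen_id]


feed = FeedCollection()
a = feed.post("order 1 placed")
b = feed.post("order 2 placed")

# Any number of independent consumers read the same entries, so
# pub/sub-style fan-out falls out of the model for free, and the
# feed doubles as an audit log of what happened.
assert feed.get_since(0) == [a, b]
assert feed.get_since(b["id"]) == []
```

This is the design point behind using Atom-shaped feeds for messaging: destructive "take one message" semantics become client-side cursor movement over a cacheable, replayable resource.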
Eric J. Bowman wrote:
>> To be fair....
>>
>> The two transaction specifications (compensation and 2pc) were
>> written 8 years ago. The messaging one was a simple exercise I did
>> to create a facade over an existing messaging implementation (JMS).
>
> Right, there's absolutely no concept of REST resources in them. The
> challenge is to model these component interactions as transfers of
> resource representations. The first step is to identify your resources
> and figure out how they relate to one another.

I'm finally catching up and have put some of my ideas in writing for messaging. It's a bit more resource-focused, with opaque URLs and link relationships defined. It will need further revisions in how it's presented and formatted.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Bill Burke wrote: > > > > > Eric J. Bowman wrote: > >> To be fair.... > >> > >> The two transaction specifications (compensation and 2pc) were > >> written 8 years ago. The messaging one was a simple exercise I did > >> to create a facade over an existing messaging implementation (JMS). > >> > > > > Right, there's absolutely no concept of REST resources in them. The > > challenge is to model these component interactions as transfers of > > resource representations. The first step is to identify your resources > > and figure out how they relate to one another. > > > > I'm finally catching up and put some of my ideas in writing for > messaging its a bit more resource focused with opaque URLs and link > relationships defined. It will need further revisions on how its > presented and formatted. > Damn, the URLs are here: http://groups.google.com/group/reststar-messaging/web/submission-2-draft-restful-queue http://groups.google.com/group/reststar-messaging/web/submission-2-draft-restful-pub-sub I'll stop spamming unless somebody has more things to discuss. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill Burke wrote: > As I said on my blog, Red Hat (and JBoss) has a pretty good history of > running open communities and projects. When they've become popular > we've brought them to a standardization effort so that all can share in > the IP. With this effort we're going to be both the project and the > standards body. "Pretty good history of running open communities and projects"??? Thanks Bill, that was funny. There are few projects with a worse reputation when it comes to dealing with users (Marc's and Gavin's "approach" is infamous, not famous), spewing hyperbole left and right, faking grassroots support, trying to enforce draconian contracts on partners, and so on. Have you forgotten this already? I'm ashamed of having been a cofounder of such a deeply pathological and ponerized project. If I could do it all over again I would have stayed away, and hopefully JBoss would never have existed. C'est la vie. /Rickard
On Sep 16, 2009, at 6:06 PM, Eric J. Bowman wrote: > There are no shortcuts to gaining the benefits of RESTful > architecture. +1 What the community would really benefit from would be efforts of (quasi-)standardizing media types and link relations that address 80% of the typical intra-enterprise use cases. Jan
Bill, On Sep 16, 2009, at 9:17 PM, Bill Burke wrote: > I hope many of you can keep an open mind with what we're trying to do. Let's look at this from another angle: What do you think is missing from existing Web standards in order to successfully apply REST to 'the enterprise'? Jan
On Thu, Sep 17, 2009 at 1:33 AM, Jan Algermissen <algermissen1971@...>wrote: > > > > On Sep 16, 2009, at 6:06 PM, Eric J. Bowman wrote: > > > There are no shortcuts to gaining the benefits of RESTful > > architecture. > > +1 > > What the community would really benefit from would be efforts of > (quasi-)standardizing media types and link relations that address 80% > of the typical intra-enterprise use cases. > I completely agree. This is where the "work" of REST really lies. (and implicitly, the work of mapping domain models to these standardized media type representations). --peter keane > Jan > >
Hi Bill, I can't imagine a worse start for your ambitious "standardization" effort than hiding your plans from the creator of REST himself. Choosing to have JBoss lead and own such an effort in a JCP-style way doesn't inspire confidence either. In addition, the "REST-*" naming is so uninspiring that it must have been suggested by a REST opponent! It reminds me of the attempt to describe the JAX-RS API as the "REST API for Java"... REST defines architecture principles, not protocols, not media types, and doesn't even mandate the use of HTTP. The term "REST API" is already ambiguous but widely used as a shortcut for "RESTful [HTTP] API". Please don't add confusion. In addition, I think that the idea of "standardizing" such RESTful HTTP APIs at this stage is just wrong and will lead to results similar to "WS-*" due to politics and market timing issues. It doesn't mean that proposing RESTful APIs for such problems isn't useful or necessary. As you know, there are already existing efforts in those areas, and having more attempts at this stage sounds useful. For example, Google recently announced its PubSubHubbub effort: http://code.google.com/p/pubsubhubbub/ Then, let the most popular media types (such as Atom) or RESTful APIs (such as Amazon S3) win and be leveraged by developers and supported by tools. Just don't lead people into thinking that there is, or even needs to be, a consensus in the REST community on how to define such RESTful APIs, especially when you haven't even consulted it about your plans. My suggestion is that Roy quickly starts and leads an official REST project, maybe at the W3C or at the IETF, where important things engaging the REST community could be organized and decided. I'm afraid otherwise that such a wonderful adventure will get side-tracked or harmed. Best regards, Jerome Louvel -- Restlet ~ Founder and Lead developer ~ http://www.restlet.org Noelios Technologies ~ Co-founder ~ http://www.noelios.com Roy T. 
Fielding wrote : > > > On Sep 16, 2009, at 12:17 PM, Bill Burke wrote: > > I hope many of you can keep an open mind with what we're trying to do. > > > > Bill, if you want people to have an open mind about what you are > trying to do, then the respectful thing would be to remove REST > from the name of your site. > > Quite frankly, this is the single dumbest attempt at one-sided > "standardization" of anti-REST architecture that I have ever seen. > It even manages to one-up the previous all-time-idiocy of IBM > when they renamed their CORBA toolkit "Web Services" in a > deliberate attempt to confuse customers into thinking they > had something to do with the Web. > > Distributed transactions are an architectural component of > non-REST interaction. Message queues are a common integration > technique for non-REST architectures. To claim that either one > is a component of "Pragmatic REST" is the equivalent of putting > a giant Red Dunce Hat on your head and then parading around as > if it were the latest fashion statement. > > The idea that the community would welcome such a pack of > marketing morons as the standards-bearers of REST is simply > ridiculous. Just close the stupid site down. > > Sincerely, > > Roy T. Fielding <http://roy.gbiv.com/ <http://roy.gbiv.com/>> > Chief Scientist, Day Software <http://www.day.com/ <http://www.day.com/>>
On Sep 17, 2009, at 9:52 AM, Jerome Louvel wrote: > > Hi Bill, > > I can't imagine a worst start for your ambitious "standardization" > effort than hiding your plans from REST creator himself. Choosing to > have JBoss lead and own such an effort in a JCP-style way doesn't > inspire me good things either. > > In addition, the "REST-*" naming is so uninspiring that it must have > been suggested by a REST opponent! It reminds me of the attempt to > describe the JAX-RS API as the "REST API for Java"... > > REST defines architecture principles, not protocols, not media types > and > doesn't even mandate the use of HTTP. The term "REST API" is already > ambiguous but widely used as a shortcut for "RESTful [HTTP] API". > Please > don't add confusion. > > I have to agree with Jérome in practically every aspect. > In addition, I think that the idea of "standardizing" such RESTful > HTTP > APIs at this stage is just wrong and will lead to results similar to > "WS-*" due to politics and market timing issues. > > My suggestion would be to move this off the JBoss domain, change the name, and invite the community to define (or rewrite) the charter - i.e. turn this into a best practices site that might spawn standardization HTTP-related efforts at e.g. IETF (where they belong). > It doesn't mean that proposing RESTful APIs for such problems isn't > useful or necessary. As you know, there are already existing efforts > in > those areas and having more attempts at this stage sounds useful. For > example, Google recently announced its PubSubHubbub effort: > http://code.google.com/p/pubsubhubbub/ > > Then, let the most popular media types (such as Atom) or RESTful APIs > (such as Amazon S3) win and be leveraged by developers and supported > by > tools. Just don't lead people into thinking that there is or even need > to be a consensus in the REST community on how to define such RESTful > APIs, especially when you haven't even consulted it with your plans. 
> > My suggestion is that Roy quickly starts and leads an official REST > project, maybe at the W3C or at IETF where important things engaging > the > REST community could be organized and decided. I'm afraid otherwise > that > such a wonderful adventure will get side-tracked or harmed. > Here I have to disagree - I don't have any idea what an "official REST project" might be doing. Best, Stefan > > Best regards, > Jerome Louvel > -- > Restlet ~ Founder and Lead developer ~ http://www.restlet.org > Noelios Technologies ~ Co-founder ~ http://www.noelios.com > > Roy T. Fielding wrote : > > > > > > On Sep 16, 2009, at 12:17 PM, Bill Burke wrote: > > > I hope many of you can keep an open mind with what we're trying > to do. > > > > > > > Bill, if you want people to have an open mind about what you are > > trying to do, then the respectful thing would be to remove REST > > from the name of your site. > > > > Quite frankly, this is the single dumbest attempt at one-sided > > "standardization" of anti-REST architecture that I have ever seen. > > It even manages to one-up the previous all-time-idiocy of IBM > > when they renamed their CORBA toolkit "Web Services" in a > > deliberate attempt to confuse customers into thinking they > > had something to do with the Web. > > > > Distributed transactions are an architectural component of > > non-REST interaction. Message queues are a common integration > > technique for non-REST architectures. To claim that either one > > is a component of "Pragmatic REST" is the equivalent of putting > > a giant Red Dunce Hat on your head and then parading around as > > if it were the latest fashion statement. > > > > The idea that the community would welcome such a pack of > > marketing morons as the standards-bearers of REST is simply > > ridiculous. Just close the stupid site down. > > > > Sincerely, > > > > Roy T. 
Fielding <http://roy.gbiv.com/ <http://roy.gbiv.com/>> > > Chief Scientist, Day Software <http://www.day.com/ <http://www.day.com/ > >> > > >
Jan Algermissen wrote: > > > > On Sep 16, 2009, at 6:06 PM, Eric J. Bowman wrote: > > > There are no shortcuts to gaining the benefits of RESTful > > architecture. > > +1 > > What the community would really benefit from would be efforts of > (quasi-)standardizing media types and link relations that address 80% > of the typical intra-enterprise use cases. > I disagree on focusing so much on the media types. What I liked about REST was that the interactions can be predefined and you can snap on media types as needed by specific clients, letting HTTP conneg handle the needs of various clients. With a middleware service I'd want to focus on the resource and link relationship definition. Define a few example media types where appropriate, but don't require anything. This allows implementations a bit of flexibility to define their own format and use HTTP conneg. An example could be the management interface of a middleware service. These are really areas where you're not gonna get a lot of agreement on what the representation should be, and you really don't want something that is the lowest common denominator or bloated. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
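Bill's "snap on media types as needed, let HTTP conneg handle it" idea can be sketched as a small server-side Accept-header matcher. This is an illustrative sketch, not code from the thread; the media type names in the tests are invented, and the parsing covers only the common simple cases of the Accept syntax.

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, highest q first."""
    entries = []
    for part in header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        q = 1.0
        for param in pieces[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        entries.append((pieces[0], q))
    return sorted(entries, key=lambda e: -e[1])

def negotiate(accept_header, available):
    """Return the first available media type acceptable to the client, or None."""
    for mtype, q in parse_accept(accept_header):
        if q <= 0:
            continue
        if mtype == "*/*":
            return available[0]
        for offered in available:
            if mtype == offered:
                return offered
            # Wildcard subtype, e.g. "text/*" matches "text/csv".
            if mtype.endswith("/*") and offered.startswith(mtype.split("/")[0] + "/"):
                return offered
    return None
```

The point of the sketch: the interaction (the resource, the method, the link relation) stays fixed, and only the representation varies per client, e.g. `negotiate("application/json;q=0.9, application/xml", [...])` picks XML because its implicit q is 1.0.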
Hi Bill, > I disagree on focusing so much on the media types. What I liked about > REST was that the interactions can be predefined and you can snap on > media types as needed by specific clients. Like HTTP conneg handle > the > needs of various clients. That doesn't sound right to me - you seem to be implying that media types are just ways that consumers might want their information encoded: xml, json, csv, whatever. I think media types define contracts, and so aren't that interchangeable. Jim
Jim Webber wrote: > Hi Bill, > > >> I disagree on focusing so much on the media types. What I liked about >> REST was that the interactions can be predefined and you can snap on >> media types as needed by specific clients. Like HTTP conneg handle >> the >> needs of various clients. >> > > That doesn't sound right to me - you seem to be implying that media > types are just ways that consumers might want their information > encoded: xml, json, csv, whatever. I think media types define > contracts, and so aren't that interchangeable. > Is there any need to make a distinction between resource and representation, then?
Is there a reason a client shouldn't respect the origin server's cache-control if it's over SSL? I don't immediately see anything in HTTP or TLS that indicates I can't, but I came across Mark's cache tutorial[1] where he says, "If the request is authenticated or secure (i.e., HTTPS), it won’t be cached." and now I'm wondering if I've missed something. I'm hoping he's simply describing the way things happen to be inside browsers rather than implying the way things should be in service clients. Thanks, --tim [1] - http://www.mnot.net/cache_docs/
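Tim's reading looks right: HTTP's caching directives are defined over messages, not URI schemes, so a service client with its own private cache can honor Cache-Control on an https:// response the same way it would on http://. The sketch below is a hedged, simplified illustration of that point (it is not a complete cache, and the storability rule follows RFC 2616's directives only in rough form).

```python
def parse_cache_control(header):
    """Split a Cache-Control header into a {directive: value-or-True} dict."""
    directives = {}
    for part in header.split(","):
        part = part.strip()
        if "=" in part:
            name, _, value = part.partition("=")
            directives[name.strip().lower()] = value.strip().strip('"')
        elif part:
            directives[part.lower()] = True
    return directives

def private_cache_may_store(cache_control_header, scheme="https"):
    """May a *private* (single-client) cache store this response?

    The scheme argument is deliberately unused: the caching directives
    apply regardless of whether the hop was TLS-protected.
    """
    d = parse_cache_control(cache_control_header)
    if "no-store" in d:
        return False
    # 'private' forbids shared caches only; a client's own cache may still store.
    return "max-age" in d or "public" in d or "private" in d
```

So an origin that sends `Cache-Control: private, max-age=60` over HTTPS is explicitly inviting the client to cache; what Mark describes is browser behavior of the era, not a protocol requirement.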
Just to add my opinion, fwiw, I agree completely with Jerome. Jerome Louvel wrote: > > > > Hi Bill, > > I can't imagine a worst start for your ambitious "standardization" > effort than hiding your plans from REST creator himself. Choosing to > have JBoss lead and own such an effort in a JCP-style way doesn't > inspire me good things either. > > In addition, the "REST-*" naming is so uninspiring that it must have > been suggested by a REST opponent! It reminds me of the attempt to > describe the JAX-RS API as the "REST API for Java"... > > REST defines architecture principles, not protocols, not media types and > doesn't even mandate the use of HTTP. The term "REST API" is already > ambiguous but widely used as a shortcut for "RESTful [HTTP] API". Please > don't add confusion. > > In addition, I think that the idea of "standardizing" such RESTful HTTP > APIs at this stage is just wrong and will lead to results similar to > "WS-*" due to politics and market timing issues. > > It doesn't mean that proposing RESTful APIs for such problems isn't > useful or necessary. As you know, there are already existing efforts in > those areas and having more attempts at this stage sounds useful. For > example, Google recently announced its PubSubHubbub effort: > http://code.google.com/p/pubsubhubbub/ > <http://code.google.com/p/pubsubhubbub/> > > Then, let the most popular media types (such as Atom) or RESTful APIs > (such as Amazon S3) win and be leveraged by developers and supported by > tools. Just don't lead people into thinking that there is or even need > to be a consensus in the REST community on how to define such RESTful > APIs, especially when you haven't even consulted it with your plans. > > My suggestion is that Roy quickly starts and leads an official REST > project, maybe at the W3C or at IETF where important things engaging the > REST community could be organized and decided. I'm afraid otherwise that > such a wonderful adventure will get side-tracked or harmed. 
> > Best regards, > Jerome Louvel > -- > Restlet ~ Founder and Lead developer ~ http://www.restlet.org > <http://www.restlet.org> > Noelios Technologies ~ Co-founder ~ http://www.noelios.com > <http://www.noelios.com> > > Roy T. Fielding wrote : > > > > > > On Sep 16, 2009, at 12:17 PM, Bill Burke wrote: > > > I hope many of you can keep an open mind with what we're trying to do. > > > > > > > Bill, if you want people to have an open mind about what you are > > trying to do, then the respectful thing would be to remove REST > > from the name of your site. > > > > Quite frankly, this is the single dumbest attempt at one-sided > > "standardization" of anti-REST architecture that I have ever seen. > > It even manages to one-up the previous all-time-idiocy of IBM > > when they renamed their CORBA toolkit "Web Services" in a > > deliberate attempt to confuse customers into thinking they > > had something to do with the Web. > > > > Distributed transactions are an architectural component of > > non-REST interaction. Message queues are a common integration > > technique for non-REST architectures. To claim that either one > > is a component of "Pragmatic REST" is the equivalent of putting > > a giant Red Dunce Hat on your head and then parading around as > > if it were the latest fashion statement. > > > > The idea that the community would welcome such a pack of > > marketing morons as the standards-bearers of REST is simply > > ridiculous. Just close the stupid site down. > > > > Sincerely, > > > > Roy T. Fielding <http://roy.gbiv.com/ <http://roy.gbiv.com/> > <http://roy.gbiv.com/ <http://roy.gbiv.com/>>> > > Chief Scientist, Day Software <http://www.day.com/ > <http://www.day.com/> <http://www.day.com/ <http://www.day.com/>>> > >
--- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote: > Here I have to disagree - I don't have any idea what an "official REST > project" might be doing. > > Best, > Stefan Hi, I have to second Stefan. REST is the name of this architectural style, and it is well defined by Roy's thesis. Not much is left to do. We/you could further investigate architectures that are, so to speak, "RESTful" (follow the style) - but we cannot call them REST. Hence a "REST project" does not make sense. Regards, Nicolai
Hello Mike, > Is there any need to make a distinction between resource and > representation, then? I had to think about that for a couple of minutes, but the answer is "yes." Although the information model held by a service might be the same whether we interact with it through CSV or Atom, the protocol through which we have to interact with that service would be radically different, and it's the media type which defines (or not) that interaction protocol. Switching out CSV for XML is pretty isomorphic in plenty of cases. Switching out CSV for Atom (which has a more sophisticated processing model that includes hypermedia) isn't. Jim
Well, Roy Fielding said (wrote) several times that there were parts of his thesis that he had to drop due to time constraints, so if he wanted to expand those missing parts, that could be such a "REST Project". But that can't come from the community, it has to be his personal decision, of course. codeblogger@... wrote: > > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Stefan Tilkov > <stefan.tilkov@...> wrote: > > > Here I have to disagree - I don't have any idea what an "official REST > > project" might be doing. > > > > Best, > > Stefan > > Hi, > > I have to second Stefan. REST is the name for this architectural style > and with Roy's thesis well defined. Not much left to do. We/you could > further investigate architectures that are so to speak "RESTful" > (follow the style) - but we cannot call them REST. Hence a "REST > project" does not make sense. > > Regards, > Nicolai > >
Stefan Tilkov wrote: > On Sep 17, 2009, at 9:52 AM, Jerome Louvel wrote: > >> Hi Bill, >> >> I can't imagine a worst start for your ambitious "standardization" >> effort than hiding your plans from REST creator himself. This was a huge mistake by me. I was wrong. I should have at least warned Roy. I don't think he would have given any blessing though. >> Choosing to >> have JBoss lead and own such an effort in a JCP-style way doesn't >> inspire me good things either. >> We want to define middleware services. I think we're as good as anybody in developing middleware. We have a huge user base to tap for a feedback loop. We have large customers willing to work with us to help develop their RESTful applications. Our efforts in organizations like the JCP have been to make their processes more open, to bring popular user-driven technology to JCP specifications, and to fix technical problems within specifications. > My suggestion would be to move this off the JBoss domain, change the > name, and invite the community to define (or rewrite) the charter - > i.e. turn this into a best practices site that might spawn > standardization HTTP-related efforts at e.g. IETF (where they belong). > Due to technical issues, rest-star.org is a redirect to jboss.org. It has to do with the web authoring tools the graphic arts guys were using. The intention all along was to have it separate from Red Hat. It didn't make a lot of sense to do this work under something like the RESTEasy Java project, as we wanted it to be more than something consumable by Java developers. Also, I think on the website we stated that we were totally open to changing the charter. Maybe we shouldn't even have had a charter to begin with. Given the feedback here though, we may relaunch it as an open source specification effort instead of an "official" standards body. And then bring each individual thing to the W3C or IETF. >> It doesn't mean that proposing RESTful APIs for such problems isn't >> useful or necessary. 
As you know, there are already existing efforts >> in >> those areas and having more attempts at this stage sounds useful. For >> example, Google recently announced its PubSubHubbub effort: >> http://code.google.com/p/pubsubhubbub/ >> >> Then, let the most popular media types (such as Atom) or RESTful APIs >> (such as Amazon S3) win and be leveraged by developers and supported >> by >> tools. Just don't lead people into thinking that there is or even need >> to be a consensus in the REST community on how to define such RESTful >> APIs, especially when you haven't even consulted it with your plans. >> >> My suggestion is that Roy quickly starts and leads an official REST >> project, maybe at the W3C or at IETF where important things engaging >> the >> REST community could be organized and decided. I'm afraid otherwise >> that >> such a wonderful adventure will get side-tracked or harmed. >> > Here I have to disagree - I don't have any idea what an "official REST > project" might be doing. > I think we were pretty clear on rest-star.org that we weren't an "official REST project" or trying to define the meaning of REST or be "official REST". From the website: "REST-* is an organization dedicated to bringing the architecture of the Web to common patterns in middleware technology". The name REST-* was, on purpose, supposed to convey images of WS-*. Other than being tongue-in-cheek, and ignoring the negative connotations of WS-*, WS-* generally equates to defining middleware: "The REST-* community aims to introduce new REST-based standards for these traditional services" The key word was *traditional*. What I think I should have said was: "The REST-* community aims to morph these traditional services into a more RESTful approach through a set of open specifications." -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Sep 17, 2009, at 2:04 PM, Bill Burke wrote: > > > Jan Algermissen wrote: >> On Sep 16, 2009, at 6:06 PM, Eric J. Bowman wrote: >> > There are no shortcuts to gaining the benefits of RESTful >> > architecture. >> +1 >> What the community would really benefit from would be efforts of >> (quasi-)standardizing media types and link relations that address 80% >> of the typical intra-enterprise use cases. > > I disagree on focusing so much on the media types. What I liked > about REST was that the interactions can be predefined You cannot 'predefine' interactions in a RESTful system. You can only define link traversal semantics (qua media types or link relations). There is no need to (and you must not) standardize anything else. Note for example, that all client side expectations that constitute the Atom Publishing Protocol are essentially expressed as link/ hypermedia semantics along the lines of "If a client comes across some hypermedia that links to a resource in this and that way it may assume this and that about the effect that an HTTP method call will have". There is no contract besides link semantics. Jan > and you can snap on media types as needed by specific clients. Like > HTTP conneg handle the needs of various clients. > > With a middleware service I'd want to focus on the resource and link > relationship definition. Define a few example media types where > appropriate but don't require anything. This allows implementations > a bit of flexibility to define their own format and use HTTP > conneg. An example could be the management interface of a > middleware service. Really areas where you're not gonna get a lot of > agreement on what the representation should be and you really don't > want something that is the lowest-common-denominator or something > that is bloat. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
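Jan's claim that "there is no contract besides link semantics" can be made concrete with a tiny sketch: a client that decides which HTTP methods are meaningful purely from the `rel` values it finds in a representation, never from URI structure. The rel names below mirror AtomPub's "edit"/"edit-media" relations, but the table of licensed methods and the data structures are invented for illustration, not taken from any spec.

```python
# What a given link relation licenses the client to assume, per the
# media type spec it is defined in (AtomPub-like semantics, simplified).
REL_SEMANTICS = {
    "edit": {"GET", "PUT", "DELETE"},   # member entry: fetch/update/remove
    "edit-media": {"GET", "PUT"},       # media resource: fetch/replace
    "self": {"GET"},
}

def allowed_interactions(links):
    """Map each linked URI to the methods its link relations license."""
    allowed = {}
    for link in links:
        methods = REL_SEMANTICS.get(link["rel"], set())
        allowed.setdefault(link["href"], set()).update(methods)
    return allowed

# Links as a client might extract them from a hypermedia representation.
entry_links = [
    {"rel": "self", "href": "http://example.org/feed/1"},
    {"rel": "edit", "href": "http://example.org/feed/1"},
]
```

Note the client never parses the URI: an "edit" link pointing anywhere at all carries the same meaning, which is exactly the "link traversal semantics" Jan describes.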
Bill, On Sep 17, 2009, at 4:03 PM, Bill Burke wrote: > "REST-* is an organization dedicated to bringing the architecture of > the > Web to common patterns in middleware technology". this is, I think, where everybody chokes, because Web architecture is contrary to the common understanding of 'common patterns in middleware technology'. Better would be 'dedicated to overcoming the complexities and coupling of common patterns in middleware technology by replacing them with RESTful use of HTTP and other Web standards' or similar. Jan
Comments below. On Thu, Sep 17, 2009 at 9:03 AM, Bill Burke <bburke@redhat.com> wrote: > The name REST-* was, on purpose, supposed to convey images of WS-*. > Other than being tonque-and-cheek, and ignoring the negative > connotations of WS-*, WS-* generally equates to defining middleware: > > "The REST-* community aims to introduce new REST-based standards for > these traditional services" > > The key word was *traditional*. What I think I should have said was: > > "The REST-* community aims to morph these traditional services into a > more RESTful approach through a set of open specifications." Having once been a member of some WS-* groups on a previous job, I understand where the name and approach are coming from. What it means to me is that some of the previous WS-* proponents have seen the error of their ways, at least partly, and are climbing aboard the REST bandwagon. Which has its good and bad aspects, but I would celebrate it. But the WS-* distributed-object mindset is deeply embedded in enterprise development land, and is difficult to abandon completely in one swell foop. Don't know if I've completely abandoned it myself. So some tough love may be in order (i.e. Roy's response coupled with getting Bill to revise the plans to something more compatible).
On Sep 17, 2009, at 4:03 PM, Bill Burke wrote: > The name REST-* was, on purpose, supposed to convey images of WS-*. You are aware that on this list WS-* usually is used to convey explicitly the image of WS-death star, yes? "RESTifying the enterprise" would maybe convey in a more rest-discuss friendly way the image you had intended to convey :-) Jan
Jim Webber wrote: > > > Hello Mike, > > > Is there any need to make a distinction between resource and > > representation, then? > > I had to think about that for a couple of minutes, but the answer is > "yes." > > Although the information model held by a service might the the same > whether we interact with it through CSV or Atom, the protocol through > which we have to interact with that service would be radically > different, and it's the media type which defines (or not) that > interaction protocol. > I don't think the media type has to define interactions. This is where I like the potential of Link headers. Then the client can interact with the service regardless of the media type exchanged. A good example (sorry) is the message queue/topic service I'm trying to define at rest-star.org. The service itself really doesn't care what media types it is exchanging. Resources defined within a queue service don't really care (or know) what media types they are publishing. They do have a set of well-defined link relationships. Another good example is a storage service like Amazon's S3. It doesn't really matter what you are storing, but the link relationships defined for storage resources are still interesting. I do agree that using something like XML schema to enforce a contract might be an interesting idea. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
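The Link header Bill refers to (the draft that later became RFC 5988, "Web Linking") carries relations in HTTP headers, so a client can discover them without understanding the body's media type at all. A minimal parser sketch for the common simple case follows; the rel names and URIs in the example are invented.

```python
import re

# Matches one link-value of the form <URI>; rel="name" (quotes optional).
# Extension parameters beyond rel are ignored in this simplified sketch.
LINK_RE = re.compile(r'<([^>]*)>\s*;\s*rel="?([^",]+)"?')

def parse_link_header(value):
    """Return {rel: uri} from a Link header like '<...>; rel="next"'."""
    links = {}
    for target, rel in LINK_RE.findall(value):
        links[rel] = target
    return links
```

With this, a queue client could follow `rel="next"` from message to message while treating every message body as an opaque payload, which is exactly the decoupling Bill is after.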
Hey Bill, > I don't think the media type has to define interactions. This is > where I like the potential of Link headers. Then the client can > interact with the service regardless of the media type exchanged. Yeah, I can see that. Retrofit flat media types like XML with link headers and you've got a hypermedia package. Rhetorical question: I wonder why some folks prefer to build that into the format? I guess I'd fall into that category too - is the only advantage that my contract (media type spec) is atomic versus the any-old-format plus link headers model? [snip] > I do agree that using something like XML schema to enforce a > contract might be an interesting idea. Doh! I never meant to imply that. A media type might have a bunch of schemas associated with it, but it's the media type spec that declares the contract. Jim
Hey Jan, > You are aware that on this list WS-* usually is used to convey > explicitly the image of WS-death star, yes? "RESTifying the > enterprise" would maybe convey in a more rest-discuss friendly way the > image you had intended to convey :-) I support that notion of Web-ifying the enterprise. I think we all would really: what works on the big Web can be scaled down to work in less ambitious environments like enterprise systems. My stumbling block is that it's a platform vendor driving this, and I can't help but feel there's a potential for conflict of interest. Red Hat could do much more sensible and ambitious things than mooch around with enterprise middleware - they certainly have the talent and bandwidth to do something radical. If I wore a Red Hat, then I'd be pushing to build a kick-ass Web server and nice frameworks around it to deal with things like hypermedia formats and business protocols (i.e. not JAX-RS). Then I'd declare victory for the Web over middleware (mostly) and wrong-foot the opposition in the process. Lovely. Jim
Jim Webber wrote: > > I do agree that using something like XML schema to enforce a > > contract might be an interesting idea. > > Doh! I never meant to imply that. A media type might have a bunch of > schemas associated with it, but it's the media type spec that declares > the contract. > You don't think a media type spec could use a schema to define a contract? i.e. application/vnd.order+xml. It's the only ammunition/answer I've thought of when I get the question "Where's the WSDL?" Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
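Bill's `application/vnd.order+xml` idea can be sketched as follows: the media type spec names what a conforming payload must contain (perhaps by pointing at a full schema), and validating a payload against that is the contract check. Everything here is invented for illustration — the media type, the root element, and the required-element list are hypothetical, and a real spec would likely use XML Schema or RELAX NG rather than this hand-rolled check.

```python
import xml.etree.ElementTree as ET

# Elements the (made-up) application/vnd.order+xml spec requires as
# direct children of the <order> root.
REQUIRED_ELEMENTS = {"customer", "item", "quantity"}

def conforms_to_order_media_type(document):
    """True if the XML payload carries every element the spec requires."""
    try:
        root = ET.fromstring(document)
    except ET.ParseError:
        return False
    present = {child.tag for child in root}
    return root.tag == "order" and REQUIRED_ELEMENTS <= present
```

The answer to "Where's the WSDL?" then becomes: the contract lives in the media type spec (and any schema it references), not in a per-service interface description.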
I've finally caught up on the email discussions, and with the benefit of hindsight the only word that comes to mind when looking at how we (Red Hat) went about this is D'oh! In our defence I'll say that the aims behind what we're trying to do are genuinely sincere and community oriented. Furthermore, it was never our intention to alienate others in the REST community. Bill's been doing a great job of evangelizing REST in the Java world and this effort is meant to complement that and take it to the next level. Yes, we should have communicated this more widely beforehand, sought input to the processes, the aims etc, and for that I take the lion's share of the blame: the only excuse is over-enthusiasm, and for that I apologize to all those who feel affected negatively by this. I'll repeat: it wasn't our intention, and looking back on things I'd definitely do it differently. With that in mind I hope we can move forward in a constructive manner as a community. It would be a real shame if the main aims behind this effort were lost beneath our inability to manage the announcement correctly. There's been some good constructive criticism and suggestions during this discussion and I'm sure Bill will act on them. I hope that everyone can see past our (primarily my) cock up (for those non-Brits: http://www.thefreedictionary.com/cockup) and move on. Mark.
Jim Webber wrote: > > > Hey Jan, > > > You are aware that on this list WS-* usually is used to convey > > explicitly the image of WS-death star, yes? "RESTifying the > > enterprise" would maybe convey in a more rest-discuss friendly way the > > image you had intended to convey :-) > > I support that notion of Web-ifying the enterprise. I think we all > would really: what works on the big Web can be scaled down to work in > less ambitious environments like enterprise systems. > > My stumbling block is that it's a platform vendor driving this, and I > can't help but feel there's a potential for conflict of interest. > Yes and no. No, because a simpler, more scalable architecture like the Web would greatly reduce the amount of effort we currently have to invest in our projects and products, specifically around tooling and management. Yes, well, because we sell middleware. Then again, REST-* was about defining restful middleware services, not defining REST itself. > Red Hat could do much more sensible and ambitious things than mooch > around with enterprise middleware - they certainly have the talent and > bandwidth to do something radical. If I wore a Red Hat, then I'd be > pushing to build a kick-ass Web server and nice frameworks around it > to deal with things like hypermedia formats and business protocols > (i.e. not JAX-RS). JAX-RS was for me (and Red Hat) to get my feet wet. Until recently, I was the only person in our division evangelizing REST as a direction we should take. My efforts weren't taken seriously until a few large customers started using RESTEasy. Now I have a bit more political capital to steer us in a RESTful direction. REST-* was an effort to have people tell us what the direction should be. BTW, what do you have against JAX-RS? I've liked it so far because it doesn't get in the way of how you want to design your resources. It's just a simple bridge between the HTTP message and your Java code.
-- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Hi Bill, [snip] > BTW, what do you have against JAX-RS? I've liked it so far because > it doesn't get in the way with how you want to design your > resources. Its just a simple bridge between the HTTP message and > your Java code. Primarily that it has a misleading name, or at least fails to deliver on that name. It should have been called "Java API for interfacing to Web servers that's a bit nicer than servlets, and by the way why on Earth did the servlet designers not use the bloody return value anyway?" Obviously the bit after the comma is more artistic, and the appropriate JCP could choose to adopt it or not as it sees fit. Which resonates with something Mark Baker said the other day, that REST-* should have been called HTTP-*. Like REST-*, JAX-RS is all about HTTP, not really about REST. After all, if we consider JAX-RS to be a framework for RESTful services, then we should also apply that moniker to servlets, no? Jim
Jim Webber wrote: > > > Hi Bill, > > [snip] > > > BTW, what do you have against JAX-RS? I've liked it so far because > > it doesn't get in the way with how you want to design your > > resources. Its just a simple bridge between the HTTP message and > > your Java code. > > Primarily that it has a misleading name, or at least fails to deliver > on that name. You guys worry way too much about the semantics of a name. Naming it JAX-RS sets a direction. It starts developers on the path to exploring and investigating RESTful design. You need to understand that Java developers are ingrained in the RPC world. JAX-RS sets an initial direction for the Java community without trying to do too much too soon. > It should have been called "Java API for interfacing to > Web servers that's a bit nicer than servlets, and by the way why on > Earth did the servlet designers not use the bloody return value anyway?" > Not really. It does focus the developer a tiny bit more on thinking in representations. It also doesn't allow the developer to dispatch based on a query parameter. > Obviously the bit after the comma is more artistic, and the > appropriate JCP could choose to adopt it or not as it sees fit. > > Which resonates with something Mark Baker said the other day, that > REST-* should have been called HTTP-*. Yup, the initial submissions to REST-* were RPCish, I've already admitted to that. I think the 2nd draft of messaging I posted late last night is a bit more RESTful. I'm confident that with some constructive input it will become more RESTful as it progresses. I'm working on resubmitting a RESTful interface for transactions as well. When Mark Little first introduced me to the tx API, I thought it could be greatly improved by defining a set of link relationships and making URLs more opaque (Yes, I'm repeating myself.). > Like REST-*, JAX-RS is all > about HTTP, not really about REST.
After all, if we consider JAX-RS to > be a framework for RESTful services, then we should also apply that > moniker to servlets, no? > Yup. In my JAX-RS talk I do state that there's no reason you can't use servlets to implement RESTful web services. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Thu, Sep 17, 2009 at 10:04 AM, Bill Burke <bburke@...> wrote: > Its the only ammunition/answer I've thought of when I get the question > "Where's the WSDL?" > > I think this hits the biggest concern I have for real REST adoption. There is and should be no real equivalent to WSDL in a REST world. WSDL is great for building tightly-coupled, static implementations. The beauty of REST is very similar, imho, to the beauty of dynamic languages and dynamic UIs. I suppose the closest you could come would be to have a public starting-point URI which provides URIs to the next options, which provide their own next options, ad infinitum. You wouldn't build to a static spec; you allow the URIs to be presented to you as you come upon them. It's a different mindset requiring different patterns and tooling. So when you consider the media type as a contract, your contract could be implemented at many different URIs, possibly all at the same time (like http/ftp download mirrors) with the possibility that some are on/off as they are allowed. Why do we need such static contracts? Ryan Riley ryan.riley@... http://panesofglass.org/ http://wizardsofsmart.net/
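[Editor's note: Ryan's "starting-point URI whose responses present the next URIs" pattern can be sketched in a few lines. This is a hedged illustration only: the in-memory "server" map, the URIs, and the relation names are all invented, and a real client would parse links out of each response representation rather than read them from a map.]

```java
import java.util.List;
import java.util.Map;

// Sketch of a link-driven client: it knows one bookmark URI plus some link
// relation names, and discovers every other URI at runtime. It never
// constructs a URI itself.
class HypermediaClient {
    // representations: URI -> (link relation -> target URI)
    static String follow(Map<String, Map<String, String>> representations,
                         String startUri, List<String> rels) {
        String current = startUri;
        for (String rel : rels) {
            Map<String, String> links = representations.get(current);
            if (links == null || !links.containsKey(rel)) {
                throw new IllegalStateException("no link '" + rel + "' at " + current);
            }
            // The next URI comes from the server and stays opaque to the client.
            current = links.get(rel);
        }
        return current;
    }
}
```

The point of the sketch is that only the entry URI and the relation names are coupled into the client; every concrete URI is "presented to you as you come upon it," so the server can restructure its URI space freely.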
Jim, IMHO, your thoughts on naming should be addressed, but not by the JCP group. The term "REST" has been used to describe both the ReST architectural constraints and the specific architecture described in "RESTful Web Services." The rest-discuss group has a solid understanding of the differences between the two, but most "REST" practitioners don't. IMHO, this group, or a group like this one, needs to come up with a better, more official distinction, using "RESTful Web Services" to describe the evolving architecture that is currently espoused. IMHO, we need a quasi-official REST group that names and describes the constraints and architectures. There are quite a few big technology players and many more small players in the REST sphere now. That means that there's a lot of money involved. I don't think a 9-year-old, partially unfinished thesis; a handful of articles; a few blog entries; a Yahoo news group; and a couple of books are enough to capture the full meaning of REST at this point. -Solomon On Thu, Sep 17, 2009 at 11:33 AM, Jim Webber <jim@...> wrote: > > > Hi Bill, > > [snip] > > > > BTW, what do you have against JAX-RS? I've liked it so far because > > it doesn't get in the way with how you want to design your > > resources. Its just a simple bridge between the HTTP message and > > your Java code. > > Primarily that it has a misleading name, or at least fails to deliver > on that name. It should have been called "Java API for interfacing to > Web servers that's a bit nicer than servlets, and by the way why on > Earth did the servlet designers not use the bloody return value anyway?" > > Obviously the bit after the comma is more artistic, and the > appropriate JCP could choose to adopt it or not as it sees fit. > > Which resonates with something Mark Baker said the other day, that > REST-* should have been called HTTP-*. Like REST-*, JAX-RS is all > about HTTP, not really about REST.
After all, if we consider JAX-RS to > be a framework for RESTful services, then we should also apply that > moniker to servlets, no? > > Jim > > >
An official "REST project" could just say that "REST" is the original thesis and nothing more. At least it should help with the interpretation of the thesis, a sort of synthesis of the various explanations that Roy gave here in this list or on his blog. Maybe, it could go a bit beyond and collaborate with (or maybe lead) related efforts such as HTTP and "Waka" protocols. It could also identify and rank the RESTfulness of so-called RESTful APIs, collect and describe best practices. Best regards, Jerome Louvel -- Restlet ~ Founder and Lead developer ~ http://www.restlet.org Noelios Technologies ~ Co-founder ~ http://www.noelios.com António Mota wrote : > > > Well, Roy Fielding said (wrote) several times that there were parts of > his thesis that he had to drop due to time constraints, so if he wanted > to expand those missing parts, that could be a such a "REST Project". > But that can't come from the community, it has to be his personal > decision, of course. > > codeblogger@... <mailto:codeblogger%40ymail.com> wrote: > > > > > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com> > > <mailto:rest-discuss%40yahoogroups.com>, Stefan Tilkov > > <stefan.tilkov@...> wrote: > > > > > Here I have to disagree - I don't have any idea what an "official REST > > > project" might be doing. > > > > > > Best, > > > Stefan > > > > Hi, > > > > I have to second Stefan. REST is the name for this architectural style > > and with Roy's thesis well defined. Not much left to do. We/you could > > further investigate architectures that are so to speak "RESTful" > > (follow the style) - but we cannot call them REST. Hence a "REST > > project" does not make sense. > > > > Regards, > > Nicolai > > > >
On Thu, Sep 17, 2009 at 11:04 AM, Bill Burke <bburke@...> wrote: > > > Jim Webber wrote: >> > I do agree that using something like XML schema to enforce a >> > contract might be an interesting idea. >> >> Doh! I never meant to imply that. A media type might have a bunch of >> schemas associated with it, but it's the media type spec that declares >> the contract. >> > > You don't think a media type spec could use a schema to define a contract? > > i.e. > > application/vnd.order+xml > > Its the only ammunition/answer I've thought of when I get the question > "Where's the WSDL?" I actually wish there was an answer to that in the form of a media type spec for the "bookmark" url. Something simple that contains links to a doap file, atom feed for rel="status", a list of links to the "bookmark" url of its dependencies, and of course a list of the entry resources. I reckon it'd sorta be like a runtime version of a DOAP file. Maybe it doesn't make as much sense on the wild internet but inside an organization it'd let you do things like build dependency graphs, have a service status dashboard(amazon/GApps style), etc. --tim
Hi Bill, Well irrespective of the cause of all the activity on the list, I've learned a lot reading today so that's good. So: [snip] > When Mark Little first introduced me to the tx api, I thought it > could be greatly improved by defining a set of link relationships > and making URLs more opaque (Yes, I'm repeating myself.). There are two concepts that are commingled here. The first is whether you can write a (transaction) protocol in a RESTful manner. I think the answer is obviously yes. Transactions are protocol-tastic, and link relations (or hypermedia formats) are great at describing protocols. Then there's the other part: but do we need transactions? And I don't mean here the old 2pc sucks for widely distributed systems (cos it does!), but whether in fact the Web already is a coordination platform. After all, every time I interact with a resource I get a status code back which gives me a clue about whether or not my interaction was successful, and how it was/was not successful, so that I can choose to make forward (or backward) progress. Each interaction is a nice little unit of work, and hypermedia threads them together into business transactions. Still, I'd be interested to know how many folks out there are clamouring for inter-service transactions. When I built transactions middleware back in the day (also with Mark, spooky!) we didn't have gazillions of folks wanting to do this. In my day job now I don't see many folks wanting to do this either. Jim
On Thu, Sep 17, 2009 at 8:27 AM, Bill Burke <bburke@...> wrote: > No, because a simpler more scalable architecture like the Web would > greatly reduce the amount of effort we currently have to invest in our > projects and products, specifically around tooling and management. I think at this juncture, really, this concept of "REST-*" should be more of a "patterns" collection than some set of standards. A "REST Patterns" site I think would be very helpful to a lot of people as it would give solid examples of how some things could be done, even if it does not present the to-the-metal specifics of how it is done. It can promote the REST architecture and talk at a higher level, it could even leave HTTP completely out of the equation. REST is a new way of thinking for many, and having answers to "how do I..." questions, in a central place, with some nice diagrams at a high level, would be very valuable. The goal being not interoperability, per se, but rather best practices for REST in the enterprise, regardless of the actual protocols etc. used. Regards, Will Hartung
Jan Algermissen wrote: > > There is no contract besides link semantics. > Agreed. I think this notion of media type as contract has led to the plethora of REST claimants which assign PATCH semantics to PUT. -Eric
On Thu, Sep 17, 2009 at 12:16 PM, Jim Webber <jim@...> wrote: > Still I'd be interested to know how many folks out there are > clamouring for inter-service transactions. When I built transactions > middleware back in the day (also with Mark, spooky!) we didn't have > gazillions of folks wanting to do this. In my day job now I don't see > many folks wanting to do this either. People do this all the time; some are even 2PC; they just don't call them transactions. As Subbu suggested, they define them as application interactions. The example I keep using is request for quotation and then order. But as Mark just reminded me, there's also reservations.
On Thu, Sep 17, 2009 at 12:32 PM, Will Hartung <willh@...> wrote: > I think at this juncture, really, this concept of "REST-*" should be more > a "patterns" collection than some set of standards. A "REST > Patterns" site I think would be very helpful to a lot of people as it > would give solid examples of how some things could be done, even if > does not present the to the metal specifics of how it is done. > Agreed. Ryan Riley ryan.riley@... http://panesofglass.org/ http://wizardsofsmart.net/
On a related note: http://steve.vinoski.net/blog/2009/04/09/qcon-london-2008-presentation-video/ The answer is to implement the OPTIONS method. Every resource now has its interface 'defined'; I'd do it as HTML. Now, write a CGI script using wget and/or libcurl, recurse through the site calling OPTIONS and outputting a long <dl> with each <dt> containing the URI and the HTML from each OPTIONS call wrapped inside <dd> tags. This would be "a document listing all resource URIs, and for each one, the HTTP verbs that apply to it, the representations available from it, and what status codes to expect from invoking operations on it." Assign that CGI script a URL. -Eric Bill Burke wrote: > > > Jim Webber wrote: > > > I do agree that using something like XML schema to enforce a > > > contract might be an interesting idea. > > > > Doh! I never meant to imply that. A media type might have a bunch of > > schemas associated with it, but it's the media type spec that > > declares the contract. > > > > You don't think a media type spec could use a schema to define a > contract? > > i.e. > > application/vnd.order+xml > > Its the only ammunition/answer I've thought of when I get the > question "Where's the WSDL?" > > Bill > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
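[Editor's note: the rendering half of Eric's idea is easy to sketch. This is a hedged illustration: it assumes the crawl has already collected each resource's OPTIONS response into a map of URI to Allow-header value (the actual crawling with wget/libcurl or an HTTP client is omitted), and the URIs and method lists are invented.]

```java
import java.util.Map;
import java.util.TreeMap;

// Render the <dl> description document Eric describes from pre-collected
// OPTIONS results. Each <dt> is a URI, each <dd> the verbs it supports.
class OptionsIndex {
    static String toHtml(Map<String, String> allowByUri) {
        StringBuilder dl = new StringBuilder("<dl>");
        // TreeMap sorts by URI so the generated document is stable run to run.
        for (Map.Entry<String, String> e : new TreeMap<>(allowByUri).entrySet()) {
            dl.append("<dt>").append(e.getKey()).append("</dt>")
              .append("<dd>").append(e.getValue()).append("</dd>");
        }
        return dl.append("</dl>").toString();
    }
}
```

Serving this from a CGI script's URL gives a machine-crawlable, always-current description without freezing anything into a static contract.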
Will Hartung wrote: > On Thu, Sep 17, 2009 at 8:27 AM, Bill Burke <bburke@...> wrote: >> No, because a simpler more scalable architecture like the Web would >> greatly reduce the amount of effort we currently have to invest in our >> projects and products, specifically around tooling and management. > > I think at this juncture, really, this concept of "REST-*" should be > more a "patterns" collection than some set of standards. A "REST > Patterns" site I think would be very helpful to a lot of people as it > would give solid examples of how some things could be done, even if > does not present the to the metal specifics of how it is done. > This isn't something I think myself or Red Hat should do. I hate to bring Subbu into this conversation, but the upcoming O'Reilly RSW Cookbook seems really, really awesome (he showed me the outline). A website to back up this cookbook might be a nice add-on. Red Hat will stick to middleware services. > It can promote the REST architecture and talk at a higher level, it > could even leave HTTP completely out of the equation. > > REST is a new way of thinking for many, and having answers to "how do > I..." questions, in a central place, with some nice diagrams at a high > level, would be very valuable. > > The goal being not interoperability, per se, rather best practices for > REST in the enterprise, regardless of the actual protocols etc. used. > Many developers I talk to are *hungry* for true interoperability. Myself included. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
+1 WSDL provides machine readable documents to aid frameworks in code gen. Long story short, this is brittle and not really worth repeating. Since state can and will change after a request, you do yourself a service by assuming that links, or documents containing links, have changed. Any attempt to codify the link conventions for a service effectively kills the link semantics that hypertext relies on and introduces brittleness. -Noah On Thu, Sep 17, 2009 at 11:24 AM, Eric J. Bowman <eric@...>wrote: > > > On a related note: > > > http://steve.vinoski.net/blog/2009/04/09/qcon-london-2008-presentation-video/ > > The answer is to implement the OPTIONS method. Every resource now has > its interface 'defined', I'd do it as HTML. Now, write a CGI script > using wget and/or libcurl, recurse through the site calling OPTIONS and > outputting a long <dl> with each <dt> containing the URI and the HTML > from each OPTIONS call wrapped inside <dd> tags. This would be "a > document listing all resource URIs, and for each one, the HTTP verbs > that apply to it, the representations available from it, and what > status codes to expect from invoking operations on it." Assign that > CGI script an URL. > > -Eric > > > Bill Burke wrote: > > > > > > > Jim Webber wrote: > > > > I do agree that using something like XML schema to enforce a > > > > contract might be an interesting idea. > > > > > > Doh! I never meant to imply that. A media type might have a bunch of > > > schemas associated with it, but it's the media type spec that > > > declares the contract. > > > > > > > You don't think a media type spec could use a schema to define a > > contract? > > > > i.e. > > > > application/vnd.order+xml > > > > Its the only ammunition/answer I've thought of when I get the > > question "Where's the WSDL?" > > > > Bill > > -- > > Bill Burke > > JBoss, a division of Red Hat > > http://bill.burkecentral.com > > > > >
I received some off-list questions, thought I'd share one with the group: > > I am learning about REST. I am trying to understand how to do real > REST as opposed to HTTP RPC. > Good for you! Please allow me to indoctrinate you into the cult... ;-) REST is the only Uniform Interface defined for distributed hypermedia applications. REST is a unique hybrid of network and application architectural principles with a firm grounding in Computer Science, whereas WS-* ignores the principles of network architecture and has a firm grounding in corporate profit-seeking. In Computer Science, a Uniform Interface is one that is generic (see Principle of Generality), thus re-usable between applications. SQL defines a Uniform Interface for DBMSs. REST defines a Uniform Interface for distributed hypermedia applications. Applying the constraints of REST to interactions between components (clients, servers, caches etc.) in such an application, results in a Uniform Interface, the primary benefit being scalability. Not that other benefits like serendipitous re-use are to be taken lightly... > > This characteristic of Uniform Interface > seems to be the main issue. What makes an interface Uniform? In > wikipedia it says that one of the conditions is that the message is > self-descriptive, somehow the client will know what to do with a > response based on the mime media-type that describes the response. > Is that all? > Nope. Unless it's been dramatically edited recently, the Wikipedia REST page doesn't get much respect on rest-discuss because it's a WS-*-centric POV. Nothing more confusing, IMHO, than trying to discuss REST in terms of WS-*. Anyway, headers (not just Content-Type), request methods and status codes, are what make HTTP messages self-descriptive. Bear in mind that FTP messages are also self-descriptive, as REST is a protocol-neutral design pattern.
The terminology can be a bit confusing, as the Uniform Interface itself is considered a constraint in REST, applied on top of the constraints imposed by the architecture of the Web itself -- the culmination of which is expressed as the client-cache-stateless-server constraint. The Uniform Interface constraint is the culmination of four other constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and hypermedia as the engine of application state. So the challenge in applying REST to any problem area is to model it as a distributed hypermedia application -- a collection of resources linked together through hypermedia representations. Once that's accomplished, apply REST's Uniform Interface constraints to the interactions between components transferring and manipulating those representations. -Eric
What you describe is just description publishing and is very different than defining contract. I was thinking more of the schema requiring an XML document to define a specific set of link relationships. For example: Older client requests: Accept: application/vnd.order-entry+xml;version=1 The client is guaranteed that a set of link relationships will exist within the order-entry representation because of the schema backing the media type. Machine-based clients (at least those that are application driven) can't guess how to traverse links. They have to know ahead of time what to do. If you use this pattern and propagate it down the chain of relationships, versioning each relationship and its types, you have versioned interactions. Well, that was the idea anyways...Whether it would work in practice, I don't know. Eric J. Bowman wrote: > On a related note: > > http://steve.vinoski.net/blog/2009/04/09/qcon-london-2008-presentation-video/ > > The answer is to implement the OPTIONS method. Every resource now has > its interface 'defined', I'd do it as HTML. Now, write a CGI script > using wget and/or libcurl, recurse through the site calling OPTIONS and > outputting a long <dl> with each <dt> containing the URI and the HTML > from each OPTIONS call wrapped inside <dd> tags. This would be "a > document listing all resource URIs, and for each one, the HTTP verbs > that apply to it, the representations available from it, and what > status codes to expect from invoking operations on it." Assign that > CGI script an URL. > > -Eric > > Bill Burke wrote: > >> >> Jim Webber wrote: >>> > I do agree that using something like XML schema to enforce a >>> > contract might be an interesting idea. >>> >>> Doh! I never meant to imply that. A media type might have a bunch of >>> schemas associated with it, but it's the media type spec that >>> declares the contract. >>> >> You don't think a media type spec could use a schema to define a >> contract? >> >> i.e.
>> >> application/vnd.order+xml >> >> Its the only ammunition/answer I've thought of when I get the >> question "Where's the WSDL?" >> >> Bill >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com >> -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
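[Editor's note: Bill's versioned-media-type idea can be sketched as a small check. This is a hedged illustration only: the media type name follows his `application/vnd.order-entry+xml;version=1` example, but the version numbers and the required link-relation sets are invented, and real media type parameter parsing has more edge cases than this.]

```java
import java.util.Map;
import java.util.Set;

// The media type's version parameter selects a "contract": the set of link
// relationships a representation of that version promises to carry. A client
// (or a validator) can check a received document against that promise.
class MediaTypeContract {
    static final Map<Integer, Set<String>> REQUIRED_RELS = Map.of(
        1, Set.of("self", "payment"),
        2, Set.of("self", "payment", "cancel"));

    // contentType e.g. "application/vnd.order-entry+xml;version=2"
    static boolean satisfies(String contentType, Set<String> relsInDocument) {
        int version = 1; // assumed default when no version parameter is present
        for (String param : contentType.split(";")) {
            String p = param.trim();
            if (p.startsWith("version=")) {
                version = Integer.parseInt(p.substring("version=".length()));
            }
        }
        return relsInDocument.containsAll(REQUIRED_RELS.getOrDefault(version, Set.of()));
    }
}
```

The design choice here is that the guarantee lives in the media type spec (keyed by version), not in any particular URI, so the same contract can be served from many URIs at once.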
Bill Burke wrote: > > What you describe is just description publishing and is very > different than defining contract. > Exactly. I've never understood the infatuation with contracts, or media-type versioning either. HTML 2-4 are very different beasts, yet their media type is the same. Version the schema, just as DOCTYPEs indicate HTML versions. Just my two cents. -Eric > > I was thinking more of of the > schema requiring an XML document to define a specific set of link > relationships. For example: > > Older client requests: > > Accept: application/vnd.order-entry+xml;version=1 > > The client is guaranteed that a set of link relationships will exist > within the order-entry representation because of the schema backing > the media type. Machine-based clients (at least those that are > application driven) can't guess how to traverse links. They have to > know ahead of time what to do. > > If you use this pattern and propagate it down the chain of > relationships and their types and each relationship you have > versioned interactions. > > Well, that was the idea anyways...Whether it would work in practice, > I don't know. > > Eric J. Bowman wrote: > > On a related note: > > > > http://steve.vinoski.net/blog/2009/04/09/qcon-london-2008-presentation-video/ > > > > The answer is to implement the OPTIONS method. Every resource now > > has its interface 'defined', I'd do it as HTML. Now, write a CGI > > script using wget and/or libcurl, recurse through the site calling > > OPTIONS and outputting a long <dl> with each <dt> containing the > > URI and the HTML from each OPTIONS call wrapped inside <dd> tags. > > This would be "a document listing all resource URIs, and for each > > one, the HTTP verbs that apply to it, the representations available > > from it, and what status codes to expect from invoking operations > > on it." Assign that CGI script an URL. 
> > > > -Eric > > > > Bill Burke wrote: > > > >> > >> Jim Webber wrote: > >>> > I do agree that using something like XML schema to enforce a > >>> > contract might be an interesting idea. > >>> > >>> Doh! I never meant to imply that. A media type might have a bunch > >>> of schemas associated with it, but it's the media type spec that > >>> declares the contract. > >>> > >> You don't think a media type spec could use a schema to define a > >> contract? > >> > >> i.e. > >> > >> application/vnd.order+xml > >> > >> Its the only ammunition/answer I've thought of when I get the > >> question "Where's the WSDL?" > >> > >> Bill > >> -- > >> Bill Burke > >> JBoss, a division of Red Hat > >> http://bill.burkecentral.com > >> > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > I think we were pretty clear on rest-star.org that we weren't an > "official REST project" or trying to define the meaning of REST or be > "official REST". From the website: > > "REST-* is an organization dedicated to bringing the architecture of the > Web to common patterns in middleware technology". > > The name REST-* was, on purpose, supposed to convey images of WS-*. > Other than being tonque-and-cheek, and ignoring the negative > connotations of WS-*, WS-* generally equates to defining middleware: > > "The REST-* community aims to introduce new REST-based standards for > these traditional services" > > The key word was *traditional*. What I think I should have said was: > > "The REST-* community aims to morph these traditional services into a > more RESTful approach through a set of open specifications." > I'm in total slack-jawed awe that you guys consciously made this decision, as technical people. At least you could blame your marketing folks, and distance yourself from the situation. I think you are sort of missing the point of Stefan's feedback and Roy's criticisms. You shouldn't come up with some one-sentence catch-all slogan, either. Instead, you should probably look at your project analysis document and scope statement, understand what the heck it is you are trying to do, and explain what problems you are going to solve. To repeat, > "The REST-* community aims to morph these traditional services into a > more RESTful approach through a set of open specifications." doesn't communicate anything but a big fat :words: emoticon, so people will just give you a :tl;dr: emoticon back. Honestly, I don't even understand how you are pushing REST at JBoss. I would expect whatever persuasion you used there to be mirrored in your marketing material to external devs. But I don't see it. What I see is fuzzy thinking. You might be right, but it will be no thanks to fuzzy thinking.
I'm mostly frustrated myself, dealing with a lot of dumb "REST API" libraries that simultaneously claim to be "object-oriented", when they explicitly pass values down a call chain, creating a hierarchical dependency chain between callers and callees. This looks a lot like Structured Design to me, not OO. Also, I find vendors of "REST APIs" poorly document their own services. "It's self-describing, you figure out what went wrong." I guess REST also means to these people "ping service, random junk reply, you figure out what to do with it".
Noah Campbell wrote: > > > +1 > > WSDL provides machine readable documents to aid frameworks in code gen. > Long story short, this is brittle and not really worth repeating. > It's why RESTEasy doesn't support WADL. Although, if you look at WADL you'll see that you can define link semantics somewhat. At least with exchanged XML documents. I still wouldn't use WADL... > Since state can and will changes after a request, you do yourself a > service to assume that links, or documents containing links, have > changed. Any attempt to codify the link conventions for a service > effectively kill the link semantics that hypertext relies on and > introduce brittleness. > I see your point. When Craig M blogged about the benefits of links (in machine clients) he stated that one was that the resource can publish its viable, currently available state transitions, i.e. "on" or "off", but not publish both "on" and "off". I guess this is something you can't define within an XML schema. When I get asked "How do I define a contract?" I first say a searchable wiki or web page describing your published services and media types is the way to go. Humans have to code the servers and clients, but this never sits well with those entrenched in WSDL. So I throw up the possibility of defining the semantics within an exchanged schema. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
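[Editor's note: the point Bill attributes to Craig M -- a representation advertising only its currently viable transitions -- is easy to show concretely. This is a hedged sketch: the switch resource, its URIs, and the "turn-on"/"turn-off" relation names are all invented for illustration.]

```java
import java.util.Map;

// A representation computes its links from current state on every request,
// which is exactly what a static schema cannot express: an "on" switch
// offers only "turn-off", an "off" switch only "turn-on".
class LightSwitch {
    private final boolean on;

    LightSwitch(boolean on) { this.on = on; }

    // rel -> target URI, as would be serialized into the representation
    Map<String, String> links() {
        return on
            ? Map.of("self", "/switch", "turn-off", "/switch/off")
            : Map.of("self", "/switch", "turn-on", "/switch/on");
    }
}
```

A schema can at most say "a switch representation may contain turn-on and turn-off links"; it cannot say which one is present right now, which is precisely the state the hypermedia carries.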
Eric J. Bowman wrote: > Bill Burke wrote: > >> What you describe is just description publishing and is very >> different than defining contract. >> > > Exactly. I've never understood the infatuation with contracts, or > media-type versioning either. HTML 2-4 are very different beasts, yet > their media type is the same. Version the schema, just as DOCTYPEs > indicate HTML versions. Just my two cents. > I hope I didn't seem in favor of a WSDL-like document defining the contract. I'm also not in favor of them and would much prefer human-readable and searchable documentation to discover what the interactions mean. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
2009/9/17 Will Hartung <willh@...> > > I think at this juncture, really, this concept of "REST-*" should be > more a "patterns" collection than some set of standards. A "REST > Patterns" site I think would be very helpful to a lot of people as it > would give solid examples of how some things could be done, even if > it does not present the to-the-metal specifics of how it is done. > > It can promote the REST architecture and talk at a higher level, it > could even leave HTTP completely out of the equation. > > REST is a new way of thinking for many, and having answers to "how do > I..." questions, in a central place, with some nice diagrams at a high > level, would be very valuable. > > The goal being not interoperability, per se, rather best practices for > REST in the enterprise, regardless of the actual protocols etc. used. > > Regards, > > Will Hartung > As a member of a team that is implementing a quasi-REST infrastructure over several protocols I couldn't agree more. More than this, something with the unfortunate name of REST-DeadStar sure looks like a vendor lock-in project, and I say "looks" because its authors have already repeated that it's not. But sometimes it's not enough to "be"; it's also necessary to "look" like...
(resending this since I used a wrong from email address; sorry for the dup) On Sep 17, 2009, at 11:04 AM, Ryan Riley wrote: > On Thu, Sep 17, 2009 at 12:32 PM, Will Hartung <willh@...> > wrote: > I think at this juncture, really, this concept of "REST-*" should be > more a "patterns" collection than some set of standards. A "REST > Patterns" site I think would be very helpful to a lot of people as it > would give solid examples of how some things could be done, even if > does not present the to the metal specifics of how it is done. I started a REST/WOA patterns wiki site some time ago, but it didn't take root with the REST community: http://restpatterns.org/ - Steve -------------- Steve G. Bjorg http://mindtouch.com http://twitter.com/bjorg irc.freenode.net #mindtouch
(resending this since I used a wrong from email address; sorry for the dup) On Sep 17, 2009, at 11:37 AM, Eric J. Bowman wrote: > REST is the only Uniform Interface defined for distributed hypermedia > applications. REST is a unique hybrid of network and application > architectural principles with a firm grounding in Computer Science, > whereas WS-* ignores the principles of network architecture and has a > firm grounding in corporate profit-seeking. You may then find it ironic that the HTTP and SOAP specs have an author in common. :) - Steve -------------- Steve G. Bjorg http://mindtouch.com http://twitter.com/bjorg irc.freenode.net #mindtouch
Steve Bjorg wrote: > > You may then find it ironic that the HTTP and SOAP specs have an > author in common. :) > Hey, I said this was indoctrination... you clearly need a re-education camp! Didn't SOAP come along before all the WS-* madness, though? -Eric
Would I be wrong to add shopping carts to this list? On Thu, Sep 17, 2009 at 6:47 PM, Bob Haugen <bob.haugen@...> wrote: > > > On Thu, Sep 17, 2009 at 12:16 PM, Jim Webber <jim@...<jim%40webber.name>> > wrote: > > Still I'd be interested to know how many folks out there are > > clamouring for inter-service transactions. When I built transactions > > middleware back in the day (also with Mark, spooky!) we didn't have > > gazillions of folks wanting to do this. In my day job now I don't see > > many folks wanting to do this either. > > People do this all the time, some are even 2PC, they just don't call > them transactions. > > As Subbu suggested, they define them as application interactions. > > The example I keep using is request for quotation and then order. > > But as Mark just reminded me, there's also reservations. > >
johnzabroski wrote: > > > --- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > > > > I think we were pretty clear on rest-star.org that we weren't an > > "official REST project" or trying to define the meaning of REST or be > > "official REST". From the website: > > > > "REST-* is an organization dedicated to bringing the architecture of the > > Web to common patterns in middleware technology". > > > > The name REST-* was, on purpose, supposed to convey images of WS-*. > > Other than being tongue-in-cheek, and ignoring the negative > > connotations of WS-*, WS-* generally equates to defining middleware: > > > > "The REST-* community aims to introduce new REST-based standards for > > these traditional services" > > > > The key word was *traditional*. What I think I should have said was: > > > > "The REST-* community aims to morph these traditional services into a > > more RESTful approach through a set of open specifications." > > > > I'm in total slackjawed awe that you guys consciously made this decision, > as technical people. At least you could blame your marketing folks, and > distance yourself from the situation. > > I think you are sort of missing the point of Stefan's feedback and Roy's > criticisms. > > You shouldn't come up with some one sentence catch-all slogan, either. > Instead, you should probably look at your project analysis document and > scope statement, understand what the heck it is you are trying to do, > and explain what problems you are going to solve. To repeat, > > > "The REST-* community aims to morph these traditional services into a > > more RESTful approach through a set of open specifications." > > doesn't communicate anything but a big fat :words: emoticon, so people > will just give you a :tl;dr: emoticon back. > > Honestly, I don't even understand how you are pushing REST at JBoss. 
> I would expect whatever persuasion you used there to be mirrored in your > marketing material to external devs. But I don't see it. What I see is > fuzzy thinking. You might be right, but no thanks to fuzzy thinking. > > I'm mostly frustrated myself, dealing with a lot of dumb "REST API" > libraries that simultaneously claim to be "object-oriented", when they > explicitly pass values down a call chain, creating a hierarchical > dependency chain between callers and callees. This looks a lot like > Structured Design to me, not OO. Also, I find vendors of "REST APIs" > poorly document their own services. "It's self-describing, you figure > out what went wrong." I guess REST also means to these people "ping > service, random junk reply, you figure out what to do with it". > I just hope that when we take all this feedback into account and revise the site, definitions, community setup, message, and specifications, we'll get as much feedback. Just to let you know, this was going to be announced in stages to build feedback. 1. Red Hat internally 2. RESTEasy mailing list (did it a few weeks ago) 3. JBoss World mentioning (2 weeks ago) 4. blog 5. rest-discuss 6. Official PR I got through 1-3 over the past 3 weeks with some feedback. Never got to 4 and 5 because things snowballed yesterday. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Thu, Sep 17, 2009 at 2:29 PM, Alexandros Marinos <al3xgr@...> wrote: > Would I be wrong to add shopping carts to this list? Hadn't thought of that, but you are probably correct. Moreover, some shopping carts (like those at Amazon where the products are supplied by different vendors) get pretty complicated on the back end. >> On Thu, Sep 17, 2009 at 6:47 PM, Bob Haugen <bob.haugen@...> wrote: >>> >>> >>> >>> On Thu, Sep 17, 2009 at 12:16 PM, Jim Webber <jim@...> wrote: >>> > Still I'd be interested to know how many folks out there are >>> > clamouring for inter-service transactions. When I built transactions >>> > middleware back in the day (also with Mark, spooky!) we didn't have >>> > gazillions of folks wanting to do this. In my day job now I don't see >>> > many folks wanting to do this either. >>> >>> People do this all the time, some are even 2PC, they just don't call >>> them transactions. >>> >>> As Subbu suggested, they define them as application interactions. >>> >>> The example I keep using is request for quotation and then order. >>> >>> But as Mark just reminded me, there's also reservations.
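The reservation pattern mentioned above can be sketched as plain resources rather than a distributed transaction. This is a minimal illustration, not anyone's actual design; the class, method names, and TTL are all invented for the sketch: a reservation is created as its own resource, holds the item for a bounded time, and simply expires unless confirmed, so no two-phase commit protocol crosses the service boundary.

```python
import time
import uuid


class Reservations:
    """Toy server-side model of reservations-as-resources: a reserve
    step creates a time-limited resource, and a later confirm step
    turns it into an order -- or fails harmlessly if it expired."""

    def __init__(self, ttl=300):
        self.ttl = ttl          # seconds a reservation is held
        self._held = {}         # reservation id -> (item, expires_at)

    def reserve(self, item):
        """Create a reservation; the id would surface as a new URI."""
        rid = str(uuid.uuid4())
        self._held[rid] = (item, time.time() + self.ttl)
        return rid

    def confirm(self, rid):
        """Confirm a reservation, returning the reserved item, or
        None if the reservation is unknown or has already expired."""
        entry = self._held.pop(rid, None)
        if entry is None or entry[1] < time.time():
            return None
        return entry[0]
```

The "request for quotation and then order" example works the same way: the quotation is the time-limited resource, and placing the order is the confirming interaction.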
On Thu, Sep 17, 2009 at 10:11 AM, Jan Algermissen <algermissen1971@...> wrote: > > Bill, > > On Sep 17, 2009, at 4:03 PM, Bill Burke wrote: > > > "REST-* is an organization dedicated to bringing the architecture of > > the > > Web to common patterns in middleware technology". > > this is I think where everybody chokes, because Web architecture is > contrary to the common understanding of 'common patterns in middleware > technology'. Better would be > > 'dedicated to overcoming the complexities and coupling of common > patterns in middleware technology by replacing them with RESTful use > of HTTP and other Web standards' or similar. +1 For the umpteenth time, HTTP is an application protocol, NOT a middleware protocol (and definitely not a sub-protocol FOR a middleware protocol on top of it). I don't know why middleware folk (vendors, architects, developers) never get this. The fundamental mistake that middleware folk made with WS-* was to fail to see that HTTP is an application protocol, not a low level transport protocol. They thought middleware services needed to be put on TOP of HTTP. Fail! Putting middleware services (protocols) on top of an application protocol is like trying to use an appliance as if it were infrastructure, e.g. using an oven to drive your central heating system. If some middleware vendors want to try yet again to build a middleware stack on HTTP, feel free. Just don't claim it has ANYTHING to do with the Web or with REST. I said this at the W3C Workshop on Web of Services for Enterprise Computing <http://www.w3.org/2007/01/wos-papers/gall> 2 1/2 years ago and I'll say it again: The large set of WS-* specifications is almost entirely focused on recreating traditional middleware capabilities using XML as the syntax for the formal message structure and the formal interface description. 
For example, WS-Reliability and WS-ReliableMessaging simply apply the lessons learned by MOM vendors to reimplement their time tested algorithms for message acknowledgement, resend, etc. The same is true of WS-AtomicTransaction and WS-BusinessActivity and their relationship to traditional transactional middleware algorithms, e.g. two-phase commit. While there is nothing wrong with such work--in fact there is great value in having the major middleware vendors finally agree on middleware standards after so many years of proprietary middleware protocols--the problem is that such work has nothing to do with web architecture or the W3C. The only overlap between WS-* and the Web is a technological one: WS-* uses Web technologies such as XML and HTTP. If the middleware vendors want to embrace the web, they should help build more conformant webware (what Roy calls REST components in his thesis): web servers, proxies, gateways, caches, browsers, etc. Instead they keep trying to misuse these components as building blocks for yet more middleware software. I guess this is just another manifestation of the perennial dumb network <http://en.wikipedia.org/wiki/Dumb_network> vs smart network (end-to-end <http://en.wikipedia.org/wiki/End_to_end_principle>) debate: REST (dumb network) vs Enterprise Middleware (smart network). -- Nick Nick Gall Phone: +1.781.608.5871 AOL IM: Nicholas Gall Yahoo IM: nick_gall_1117 MSN IM: (same as email) Google Talk: (same as email) Email: nick.gall AT-SIGN gmail DOT com Weblog: http://ironick.typepad.com/ironick/
On Thu, Sep 17, 2009 at 7:18 AM, Tim Williams <williamstw@...> wrote: > Is there a reason a client shouldn't respect the origin server's > cache-control if it's over SSL? I don't immediately see anything in > HTTP or TLS that indicates I can't but I came across Mark's cache > tutorial[1] where he says, "If the request is authenticated or secure > (i.e., HTTPS), it won’t be cached." and now I'm wondering if I've > missed something. I'm hoping he's simply describing the way things > happen to be inside browsers rather than implying the way things should > be in service clients. Obviously, intermediaries are not going to be able to do any caching, but I cannot think of any reason a local cache in the client would not be allowed. I have implemented HTTP clients that use SSL and cache the responses. It would be very difficult to create reasonably performant and scalable web-arch based applications without caching. I think that caching SSL responses is a must from a pragmatic standpoint. -- Peter Williams http://barelyenough.org
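The client-side caching described above can be sketched minimally. This toy private cache honors only the Cache-Control max-age directive (a real cache also handles no-store, private, Expires, validation with ETags, and much more), and nothing in it is scheme-specific -- which is the point: a response fetched over https:// caches exactly like one fetched over http://. All names are illustrative.

```python
import time


class ClientCache:
    """Minimal private (client-side) HTTP response cache sketch:
    stores a body for as long as its Cache-Control max-age allows."""

    def __init__(self):
        self._store = {}  # url -> (expires_at, body)

    def put(self, url, body, cache_control):
        # Honor only the max-age directive; anything else (including
        # no-store) means we simply don't cache in this sketch.
        for part in cache_control.split(","):
            part = part.strip()
            if part.startswith("max-age="):
                ttl = int(part.split("=", 1)[1])
                self._store[url] = (time.time() + ttl, body)

    def get(self, url):
        """Return a still-fresh cached body, or None."""
        entry = self._store.get(url)
        if entry and entry[0] > time.time():
            return entry[1]
        return None
```

Because the cache sits above the transport, the "HTTPS won't be cached" behavior Tim quotes is a browser policy choice, not something the protocol layering forces on a service client.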
Btw, is there not an SSL mode where the data travels openly, but is nevertheless guaranteed to not have been tampered with? I am told this would have to use a null cipher, but with a hash for integrity. That mode might allow intermediate caching I suppose.... Henry On 17 Sep 2009, at 22:52, Peter Williams wrote: > On Thu, Sep 17, 2009 at 7:18 AM, Tim Williams <williamstw@gmail.com> > wrote: >> Is there a reason a client shouldn't respect the origin server's >> cache-control if it's over SSL? I don't immediately see anything in >> HTTP or TLS that indicates I can't but I came across Mark's cache >> tutorial[1] where he says, "If the request is authenticated or secure >> (i.e., HTTPS), it won’t be cached." and now I'm wondering if I've >> missed something. I'm hoping he's simply describing the way things >> happen to be inside browsers rather than implying the way thing >> should >> be in service clients. > > Obviously, intermediates are not going to be able to do any caching, > but I cannot think of any reason a local cache in the client would not > be allowed. I have implement HTTP clients that use SSL and cache the > responses. It would be very difficult to create reasonably performant > and scalable web-arch based applications without caching. I think > that caching SSL responses is a must from a pragmatic stand point. > > -- > Peter Williams > http://barelyenough.org
On Sep 17, 2009, at 4:47 PM, Mark Little wrote: > Bill's been doing a great job of evangelizing REST in the Java world > and this effort is meant to complement that and take it to the next > level. I agree with that (and just for the record, I think quite a few of the reactions on this list and elsewhere were way out of line). Stefan
Would it behoove the REST community to have a quasi-standards body? Do you think that this kind of initiative, if done right, would be a Good Thing (tm)? -Solomon
On Thu, Sep 17, 2009 at 12:53 PM, Eric J. Bowman <eric@...> wrote: > Exactly. I've never understood the infatuation with contracts, or > media-type versioning either. HTML 2-4 are very different beasts, yet > their media type is the same. Version the schema, just as DOCTYPEs > indicate HTML versions. Just my two cents. I dispute your assertion that HTML 2 is that different from 4 in this context. As Jan Algermissen pointed out earlier, link semantics pretty much are the contract in REST. I don't think the semantics of link traversal have ever changed in HTML (at least not in an incompatible way). -- Peter Williams http://barelyenough.org
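The "link semantics are the contract" point can be illustrated with a minimal sketch. The document shape below is an assumption made for illustration: the client's only coupling is to a link relation name ("next", "edit", ...), never to the structure of the target URI, which is exactly why link traversal semantics have survived unchanged across HTML versions.

```python
def find_link(representation, rel):
    """Return the href of the first link carrying the given relation,
    or None if the representation doesn't offer that transition.
    The client hardcodes only the rel name, not any URI pattern."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None
```

A client built this way keeps working when the server restructures its URI space, because the contract lives in the rel names and media type, not in the URIs.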
> I agree with that (and just for the record, I think quite a few of > the reactions on this list and elsewhere were way out of line). +1
What are the problems that such a group would be trying to solve? It's not like we don't already have plenty of standardization bodies. And it's not like the majority of them are completely opaque and closed to communities that don't pay a big fat check to get a seat on the board. Seb From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Solomon Duskis Sent: 17 September 2009 22:58 To: REST-Discuss Discussion Group Subject: [rest-discuss] Body @ REST? Would it behoove the REST community to have a quasi-standards body? Do you think that this kind of initiative, if done right, would be a Good Thing (tm)? -Solomon
On Thu, Sep 17, 2009 at 11:57 PM, Solomon Duskis <sduskis@...> wrote: > Would I be wrong to add shopping carts to this list? No. The world has more than enough standards bodies already, let alone "quasi-standards bodies". A better/faster/cheaper (and safer!) way to achieve our aims as a community is to leverage the existing infrastructure and promote the benefits of REST, educating existing practitioners as to how to apply the principles to their respective fields of endeavour. An obvious starting point for this would be the IETF (where some of us have already been contributing to initiatives like the Web Linking draft <http://tools.ietf.org/html/draft-nottingham-http-link-header>) and other similar standards efforts (like PubSubHubbub <http://code.google.com/p/pubsubhubbub/> and OCCI <http://www.occi-wg.org/>). > Do you think that this kind of initiative, if done right, would be a Good Thing (tm)? "If done right" are the key words here. It is far more likely to be done wrong, as evidenced by the REST-* debacle. The closest I've seen to being "done right" throughout this discussion is the existing (albeit apparently abandoned) REST Patterns wiki <http://restpatterns.org/> and the suggestion that Roy could run an official project. Getting some rough consensus on how to handle things like transactions, idempotency, etc. and then clearly documenting it would be a good place to begin, along with identifying some elegant examples of RESTful interfaces from which people can learn. Settling on some sensible terminology would undoubtedly help too - REST is useful in the abstract but my preference for the way it is usually applied today is Resource Oriented Architecture (ROA) - courtesy of the RESTful Web Services book. Sam
Hiya all,
I've taken a back-seat this time, luckily due to too much stuff going
on in the real world. My two bobs worth, and I'll just start with this
one (although my reply is not directed at Nick throughout) ;
On Fri, Sep 18, 2009 at 06:35, Nick Gall <nick.gall@...> wrote:
> For the umpteenth time, HTTP is a application protocol NOT a middleware protocol
We all have our definitions (in our heads, at best) of what thing is
called what and what that part of technology is categorised under. The
thing is, it's all really philosophically interchangeable in the end
(for a good example of what I'm talking about, see "Everything is
miscellaneous"; google it). I know we're all hardcore engineers here
and all, but if we take a step back and examine what we're doing here
in human terms, what the heck is all this bickering about?
I find this ("HTTP is a application protocol NOT a middleware
protocol") a bit of a strange assertion. First of all, the notion of
middleware is to glue software components and / or systems together,
and as such an application protocol *is* a middleware protocol. Are we
here just battling over semantics that actually are superfluous in
nature? Aren't we trying to say that RESTful design removes much of
the need for *traditional* middleware whose sole purpose is to glue
applications together through structures simulating OO (which REST
does for you in better ways through resources [mining from
identification management]), so why are you recreating the middleware
problem through something that already solves much of it? This
misguidance will of course be misinterpreted as commercial bias.
People whinge a lot that Red Hat has too much of a bias. Um, computer
scientists have a bias too, even people who have no bias have a basic
philosophical bias, and these semantics *will* affect (as opposed to
mere observatory notions) the work that is done. The argument of bias
is mostly moot (but not completely, of course). I personally get the
feeling that these REST-* moves were genuinely of good nature, the only
business bias that I can see is that they jumped on an idea and went
with it. That's more than has happened in the REST world in a long time, and
as such I *really* welcome it.
The name "REST-*" is not *so* bad. Of course, it reels up too much
crap baggage from the WS-* scene (and if we're unlucky, too much of
that crap will be adopted unscrutinized because of the traditional
middleware thinking), but in terms of "defining good scaffolding for
RESTful systems design" it's a good all-embracing term. I think the
first two drafts there indeed are HTTP-*, but if this is meant to be a
junction-point of various RESTful goodness, the name might still be a
good future name (except the WS-* associations themselves, which I
suspect were driven more by marketing than anything else). I'd like to
see more defined content-types, too, that are more enterprise in
nature, perhaps common practices for dealing with value systems,
different means of capturing state (as in, the messages themselves as
opposed to the state embedded in the hyperlinks), and so on.
Their first two drafts are possibly somewhat misguided in terms of
hardcore REST (and don't jump on me here; I am too a hardcore REST
proponent), but if the community got involved and worked in similar
ways to, say, the Atom work-groups (which I had the pleasure of
joining), it could turn into something beautiful and much needed. Just
because parts of a platform are misguided doesn't mean it can't be
fixed and needs to be shunned and laughed at from the get go (and yes,
I'm looking at you, Roy! :)
I kinda get the feeling that *both* sides of this divide have
forgotten, overlooked or don't see the importance of the REST state
system, which makes the notion of an API a bit moot. Yes, many have
riled against the API notion, but more for alternative reasons. And I
think it all comes down to this; if you do it RESTfully, you don't
really care about APIs, but resources. And when the API is gone, your
notion of middleware is quite different (it moves into the caching and
forwarding arena of smart web-servers), and *this* is the crux of
the current disagreements.
Semantics drift in and out of systems, and right now someone is trying
to put a square REST resource into a huge [death]star-shaped
middleware end-point. I suggest the RESTafarians come up with good
guidance and alternatives, and the middleware people gently shift
their thinking.
In the end, though, thanks, Red Hat, for daring to stir things up a
bit. It was getting a bit boring. :)
Regards,
Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
On Sep 17, 2009, at 4:36 PM, Alexander Johannesen wrote:
> Hiya all,
>
> I've taken a back-seat this time, luckily due to too much stuff going
> on in the real world. My two bobs worth, and I'll just start with this
> one (although my reply is not directed at Nick throughout) ;
>
> On Fri, Sep 18, 2009 at 06:35, Nick Gall <nick.gall@...> wrote:
> > For the umpteenth time, HTTP is a application protocol NOT a
> middleware protocol
>
> We all have our definitions (in our heads, at best) of what thing is
> called what and what that part of technology is categorised under. The
> thing is, it's all really philosophically interchangeable in the end
> (for a good example of what I'm talking about, see "Everything is
> miscellaneous"; google it). I know we're all hardcore engineers here
> and all, but if we take a step back and examine what we're doing here
> in human terms, what the heck is all this bickering about?
>
Yes, we are all a cloud of atoms -- what's the point of distinguishing
us into types, shapes, or names?
> I find this ("HTTP is a application protocol NOT a middleware
> protocol") a bit of a strange assertion. First of all, the notion of
> middleware is to glue software components and / or systems together,
> and as such an application protocol *is* a middleware protocol. Are we
> here just battling over semantics that actually are superfluous in
> nature? Aren't we trying to say that RESTful design removes much of
> the need for *traditional* middleware whos sole purpose is to glue
> applications together through structures simulating OO (which REST
> does for you in better ways through resources [mining from
> identification management]), so why are you recreating the middleware
> problem through something that already solves much of it? This
> misguidance will of course be misinterpreted as commercial bias.
>
No, that's not what middleware means (it certainly isn't tied to OO).
There is a lot of good middleware and a lot of good distributed
architectures that use various types of middleware for communication
according to various good architectural styles. There is plenty of
middleware active in the Web as well, like content distribution
networks,
name resolvers, load balancers, etc. Middleware are implementation
packages that may or may not fit a given architecture, and should be
used only when and where the fit is right. Likewise, there are
plenty of good RESTful architectures that end at the resource
interface, behind which is another architecture composed of
middleware services that are not (and should not be) RESTful.
> People whinge a lot that Red Hat has too much of a bias. Um, computer
> scientists have a bias too, even people who have no bias have a basic
> philosophical bias, and these semantics *will* affect (as oppose to
> mere observatory notions) the work that is done. The argument of bias
> is mostly moot (but not completely, of course). I personally get the
> feeling that these REST-* moves were genuinly of good nature, the only
> business bias that I can see is that they jumped on an idea and went
> with it. That's more happening in the REST world in a long time, and
> as such I *really* welcome it.
>
Would you welcome it if you went to a car dealer, asked to see a
new Camaro, and he insisted on showing you a Winnebago instead?
Would you welcome it if you went to MacDonald's and ordered a
filet-o-fish sandwich, and they gave you a Big Mac instead?
WTF is wrong with you people?
Red Hat (or, more accurately, JBoss) unilaterally decided to set
themselves up as the equivalent of Sun within a JCP-like standards
organization on a topic for which they not only had NOTHING to
do with creating and know NOTHING about, but for which they
actually sell products that are the exact opposite of RESTful
architecture. The organization claims that it will specify REST
standards, that such standards will be led by benevolent dictators
in the form of Spec Leads, that no overlapping proposals would be
allowed (IOW, the first Spec Lead owns the entire topic as BDFL),
and that Red Hat would be the only permanent member of the board
to ensure that never changed.
And, guess what -- the organization starts with a couple Spec Leads
owning the topics that JBoss wants to SELL YOU as REST, because
they know damn well that their current market is shrinking: their
entire product architecture is based on J2EE, the giant black
hole of Java that is sucking itself into oblivion.
But, no, this won't be the oh-too-formal version of REST that
is found in my dissertation. This will be the new, "Pragmatic",
AssHat version of REST, so that JBoss can tell their customers
that they are the new leaders in "REST" architecture and all
they need to do to get their systems to be RESTful 2.0-compliant
is to buy their J2EE application server, now with REST-beans.
Bill Burke is not evangelizing REST throughout the Java world.
What he is evangelizing is the same old architectures that he
evangelized as J2EE and WS-* -- the only difference is that now
he starts off calling them REST. All he is doing is making
people like you confused over what is REST and what is not, in
the hope of making a business out of clambering onto yet
another buzzword.
No, I see no reason to appreciate that nonsense. Not in the least.
I don't care if it was an "honest mistake" or a deliberate attempt
at fraud -- the result is the same. Only a complete idiot would
participate in such a forum.
If you want a place to collect REST ideas and patterns, then
use the wiki that Mark Baker established. He actually knows
what he is talking about, doesn't pretend to be a dictator, and
actually earned the role of editor long before most of you had
even heard of the term.
....Roy
In learning about, implementing, and evangelizing (or trying to) REST, the
most difficult task has always been getting from
kind-of-RESTish-but-not-really (focus on clean URLs, documented "simple" API
w/ custom XML, etc.) to a more true REST approach (focus on media-types &
link relations). "Kind-of-REST" is a real rut -- it's harder to get to real
REST from there than from pure RPC. A conversation with someone steeped in
RPC begins w/ talk of resources & representations, then talks about uniform
interface, then goes on to HATEOAS & link relations -- REST is a clear
paradigm shift. A conversation w/ a kind-of-REST person gets stuck on "I'm
pretty close, so I won't worry about the hard parts" and there's no real
leap into the REST paradigm. And it's still RPC just gussied up.
Having little experience in the J2EE 'middleware' world, I don't know if
that's what's happening here, but if so, I'd have to agree w/ Roy's
statements that this kinda, sorta thing is worse than nothing. I find this
comment particularly true/useful: "there are plenty of good RESTful
architectures that end at the resource interface, behind which is another
architecture composed of middleware services that are not (and should not
be) RESTful."
(sorry for the top-post. gmail is acting crazy for me just now).
--peter keane
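The paradigm shift described above -- a client that starts from a single entry URI and navigates purely by link relations, rather than constructing URIs from a documented "simple" API -- can be sketched briefly. The document shape, rel names, and fetcher signature here are invented for illustration:

```python
def follow(fetch, entry_uri, rels):
    """Traverse from one entry URI through a chain of link relations.
    `fetch(uri)` is assumed to return the parsed representation as a
    dict with a "links" list; the client knows rel names and media-type
    semantics, never URI templates."""
    doc = fetch(entry_uri)
    for rel in rels:
        href = next(l["href"] for l in doc["links"] if l["rel"] == rel)
        doc = fetch(href)
    return doc
```

The "kind-of-REST" client would instead build `/accounts/{id}` strings from documentation; this one would keep working unchanged if the server moved its accounts elsewhere, as long as the rel names survive.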
On Thu, Sep 17, 2009 at 10:33 PM, Roy T. Fielding <fielding@...> wrote:
>
>
> On Sep 17, 2009, at 4:36 PM, Alexander Johannesen wrote:
>
> > Hiya all,
> >
> > I've taken a back-seat this time, luckily due to too much stuff going
> > on in the real world. My two bobs worth, and I'll just start with this
> > one (although my reply is not directed at Nick throughout) ;
> >
> > On Fri, Sep 18, 2009 at 06:35, Nick Gall <nick.gall@...<nick.gall%40gmail.com>>
> wrote:
> > > For the umpteenth time, HTTP is a application protocol NOT a
> > middleware protocol
> >
> > We all have our definitions (in our heads, at best) of what thing is
> > called what and what that part of technology is categorised under. The
> > thing is, it's all really philosophically interchangeable in the end
> > (for a good example of what I'm talking about, see "Everything is
> > miscellaneous"; google it). I know we're all hardcore engineers here
> > and all, but if we take a step back and examine what we're doing here
> > in human terms, what the heck is all this bickering about?
> >
>
> Yes, we are all a cloud of atoms -- what's the point of distinguishing
> us into types, shapes, or names?
>
> > I find this ("HTTP is a application protocol NOT a middleware
> > protocol") a bit of a strange assertion. First of all, the notion of
> > middleware is to glue software components and / or systems together,
> > and as such an application protocol *is* a middleware protocol. Are we
> > here just battling over semantics that actually are superfluous in
> > nature? Aren't we trying to say that RESTful design removes much of
> > the need for *traditional* middleware whose sole purpose is to glue
> > applications together through structures simulating OO (which REST
> > does for you in better ways through resources [mining from
> > identification management]), so why are you recreating the middleware
> > problem through something that already solves much of it? This
> > misguidance will of course be misinterpreted as commercial bias.
> >
>
> No, that's not what middleware means (it certainly isn't tied to OO).
> There is a lot of good middleware and a lot of good distributed
> architectures that use various types of middleware for communication
> according to various good architectural styles. There is plenty of
> middleware active in the Web as well, like content distribution
> networks,
> name resolvers, load balancers, etc. Middleware are implementation
> packages that may or may not fit a given architecture, and should be
> used only when and where the fit is right. Likewise, there are
> plenty of good RESTful architectures that end at the resource
> interface, behind which is another architecture composed of
> middleware services that are not (and should not be) RESTful.
>
> > People whinge a lot that Red Hat has too much of a bias. Um, computer
> > scientists have a bias too, even people who have no bias have a basic
> > philosophical bias, and these semantics *will* affect (as opposed to
> > mere observatory notions) the work that is done. The argument of bias
> > is mostly moot (but not completely, of course). I personally get the
> > feeling that these REST-* moves were genuinely of good nature; the
> > only business bias that I can see is that they jumped on an idea and
> > went with it. That's more than has happened in the REST world in a
> > long time, and as such I *really* welcome it.
> >
>
> Would you welcome it if you went to a car dealer, asked to see a
> new Camaro, and he insisted on showing you a Winnebago instead?
>
> Would you welcome it if you went to MacDonald's and ordered a
> filet-o-fish sandwich, and they gave you a Big Mac instead?
>
> WTF is wrong with you people?
>
> Red Hat (or, more accurately, JBoss) unilaterally decided to set
> themselves up as the equivalent of Sun within a JCP-like standards
> organization on a topic for which they not only had NOTHING to
> do with creating and know NOTHING about, but for which they
> actually sell products that are the exact opposite of RESTful
> architecture. The organization claims that it will specify REST
> standards, that such standards will be led by benevolent dictators
> in the form of Spec Leads, that no overlapping proposals would be
> allowed (IOW, the first Spec Lead owns the entire topic as BDFL),
> and that Red Hat would be the only permanent member of the board
> to ensure that never changed.
>
> And, guess what -- the organization starts with a couple Spec Leads
> owning the topics that JBoss wants to SELL YOU as REST, because
> they know damn well that their current market is shrinking: their
> entire product architecture is based on J2EE, the giant black
> hole of Java that is sucking itself into oblivion.
>
> But, no, this won't be the oh-too-formal version of REST that
> is found in my dissertation. This will be the new, "Pragmatic",
> AssHat version of REST, so that JBoss can tell their customers
> that they are the new leaders in "REST" architecture and all
> they need to do to get their systems to be RESTful 2.0-compliant
> is to buy their J2EE application server, now with REST-beans.
>
>
> Bill Burke is not evangelizing REST throughout the Java world.
> What he is evangelizing is the same old architectures that he
> evangelized as J2EE and WS-* -- the only difference is that now
> he starts off calling them REST. All he is doing is making
> people like you confused over what is REST and what is not, in
> the hope of making a business out of clambering onto yet
> another buzzword.
>
> No, I see no reason to appreciate that nonsense. Not in the least.
> I don't care if it was an "honest mistake" or a deliberate attempt
> at fraud -- the result is the same. Only a complete idiot would
> participate in such a forum.
>
> If you want a place to collect REST ideas and patterns, then
> use the wiki that Mark Baker established. He actually knows
> what he is talking about, doesn't pretend to be a dictator, and
> actually earned the role of editor long before most of you had
> even heard of the term.
>
> ....Roy
>
>
G`day Roy,

On Fri, Sep 18, 2009 at 13:33, Roy T. Fielding <fielding@gbiv.com> wrote:
> Yes, we are all a cloud of atoms -- what's the point of distinguishing
> us into types, shapes, or names?

To point out that people make semantic mistakes all the time, that
everything is, indeed, miscellaneous, and that this should be a place
of collaboration rather than bickering. And to point out that
semantics, albeit small and puny, carries great significance and is
easy to get wrong. People do it all the time, in technical terms as in
other walks of life.

...

> No, that's not what middleware means (it certainly isn't tied to OO).

I said "traditional middleware", as in whatever ilk sprung out of the
enterprisey world, mostly. Yes, sure, that's not what middleware means
for the anal retentive, but I suspect that's what it means to Red Hat
and enterprise developers throughout the world, a rather large group of
people. And I was being a bit tongue-in-cheek in treating structures
(resources, hierarchies, categories, tags, whatever) as ways to
*simulate* OO more than anything, and certainly wasn't saying
middleware=OO. Sheesh.

...

>> People whinge a lot that Red Hat has too much of a bias. [...]
>> as such I *really* welcome it.
>
> Would you welcome it if you went to a car dealer, asked to see a
> new Camaro, and he insisted on showing you a Winnebago instead?
>
> Would you welcome it if you went to MacDonald's and ordered a
> filet-o-fish sandwich, and they gave you a Big Mac instead?
>
> WTF is wrong with you people?

I think you're jumping the gun and seeing opposition and weird shit
where there isn't any, well, not in my camp, anyway. Let's change the
allegory to fit better with my point: would you welcome it if you went
shopping and realized you had bought the wrong kind of salami?

Eh, screw the allegories. All I was saying is that even if they are
misguided, kicking them in the shin is no good way to convince them
otherwise. All you're doing is creating enemies, and as much as we all
have our buckets of patience with people grasping what REST is all
about filled up pretty high, don't let it overflow.

Why don't you trademark REST and just be done with it? Should be easy
to suss out any disputes if you really wanted to then? You know, just
for laughs? Linus did it, and so can you ...

> Red Hat (or, more accurately, JBoss) unilaterally decided to set
> themselves up as the equivalent of Sun within a JCP-like standards
> organization on a topic for which they not only had NOTHING to
> do with creating and know NOTHING about, but for which they
> actually sell products that are the exact opposite of RESTful
> architecture.

Yeah, I think most people here are in agreement with that. It was a
stupid move, especially without some anchoring with hardcore REST
people or organisations. Or, you know, you.

> The organization claims that it will specify REST
> standards, that such standards will be led by benevolent dictators
> in the form of Spec Leads, that no overlapping proposals would be
> allowed (IOW, the first Spec Lead owns the entire topic as BDFL),
> and that Red Hat would be the only permanent member of the board
> to ensure that never changed.

Bill has said they're open to changing it, and no matter how evil you
think RH is, anything has to start somewhere. Why not just see it as an
honest starting point? The truth will be flushed out *extremely*
quickly if we all jumped on it and made it truly RESTful, no?
(Assuming fixing the governing model, of course, which Bill says they
are happy to do. I don't know these people, and can only take them at
face value.)

> And, guess what -- the organization starts with a couple Spec Leads
> owning the topics that JBoss wants to SELL YOU as REST, because
> they know damn well that their current market is shrinking: their
> entire product architecture is based on J2EE, the giant black
> hole of Java that is sucking itself into oblivion.

Doesn't a lot of innovation happen this way, though? (And I'm not
stating their two specs are innovative; they're not.)

> But, no, this won't be the oh-too-formal version of REST that
> is found in my dissertation. This will be the new, "Pragmatic",
> AssHat version of REST

Well, yes, unless others get involved. They've thrown out the
invitation to us, too. Join it, and make sure it doesn't end up as
AssHatMiddleWare(TM) or something.

> Bill Burke is not evangelizing REST throughout the Java world.
> What he is evangelizing is the same old architectures that he
> evangelized as J2EE and WS-* -- the only difference is that now
> he starts off calling them REST. All he is doing is making
> people like you confused over what is REST and what is not, in
> the hope of making a business out of clambering onto yet
> another buzzword.

People like me confused about REST? I'm a RESTafarian of over 5 years,
so no, that's not it. I think you're mistaking my meeker approach to
this hubbub as embracing their ways. I'm not. I'm just saying, don't be
a jerk even if you think they are jerks. The RESTafarian movement needs
less jerking off, and more stability and acceptance, especially in the
enterprise area (IMHO). If this could be tweaked into such a channel,
then I'm all for it.

> No, I see no reason to appreciate that nonsense. Not in the least.
> I don't care if it was an "honest mistake" or a deliberate attempt
> at fraud -- the result is the same. Only a complete idiot would
> participate in such a forum.

Well, ouch, and ouch, indeed. So "honest mistake" is unforgivable and
unworkable in your world?

Regards,

Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
Hey Bill:

> Well, that was the idea anyways...Whether it would work in practice,
> I don't know.

It does, I have empirical evidence :-) Though I wouldn't get so hung up
on XML schemas - media types can define processing models for other
base formats too.

Jim
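Jim's point can be made concrete with a small sketch. The vendor media type names below echo the `application/vnd.bank.org.account+xml` example from earlier in the thread; the JSON variant and all function names are hypothetical. The idea is that the client dispatches on the media type, not on URI structure, and each media type carries its own processing model for extracting links:

```python
# Sketch: the media type selects the processing model. Nothing forces
# that model to be XML-schema based; a +json type works just as well.

import json
import xml.etree.ElementTree as ET

def links_from_xml(body):
    """Processing model for the +xml type: href attributes are links."""
    root = ET.fromstring(body)
    return {el.tag: el.get("href") for el in root.iter() if el.get("href")}

def links_from_json(body):
    """Processing model for the +json type: a top-level 'links' object."""
    return json.loads(body).get("links", {})

# Dispatch table keyed by media type, not by URI structure.
PROCESSORS = {
    "application/vnd.bank.org.account+xml": links_from_xml,
    "application/vnd.bank.org.account+json": links_from_json,
}

def extract_links(content_type, body):
    return PROCESSORS[content_type](body)

xml_doc = ('<person><account href="/accounts/010123101">'
           '010123101</account></person>')
json_doc = '{"links": {"account": "/accounts/010123101"}}'
```

Either representation yields the same link map, so the client's application state is driven by hypermedia regardless of the base format.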
Solomon Duskis wrote:
>
> Would it behoove the REST community to have a quasi-standards body? Do
> you think that this kind of initiative, if done right, would be a Good
> Thing (tm)?

What would this body do?
Alexander Johannesen wrote:

> > No, I see no reason to appreciate that nonsense. Not in the least.
> > I don't care if it was an "honest mistake" or a deliberate attempt
> > at fraud -- the result is the same. Only a complete idiot would
> > participate in such a forum.
>
> Well, ouch, and ouch, indeed. So "honest mistake" is unforgivable and
> unworkable in your world?

As someone already noted, SOAP appeared before the WS-madness, and at
the time it seemed a good thing (I thought that). Heck, it even had
"simple" in its name...

I think the way to try to standardize something will be to start at the
grass-roots, with what people trying to understand and apply REST could
deal with, and go up from there. And these things are clearly, imo,
patterns and best-practices. It's far easier and more logical to derive
standards from something that works in practice than to do it the other
way around. It's like the old question, "it works in practice, but will
it work in theory?" :)

Now I don't know of course if REST-* is an "honest mistake or a
deliberate attempt at fraud", or something in between, like an honest
try to promote a standard in favour of their own agenda (nothing wrong
with that, we have to live in a capitalist economy after all), but, as
they say, "the road to hell is paved with good intentions" and it would
be a pity to see REST follow the same road as WS-hell.
On Fri, Sep 18, 2009 at 21:14, Sam Johnston <samj@...> wrote:
> I'm 100% with Roy on this one.

It looks more and more that I'm convinced, too. Shame, really.

Alex
--
Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ ----------------------------------------------
------------------ http://www.google.com/profiles/alexander.johannesen ---
Alexander,

I'm 100% with Roy on this one.

I pointed out
<https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000138.html>
a week or two ago that something was fishy with the REST-* governance,
with a view to having it changed so myself and others could get
involved:

> FWIW REST-* sounds like something I could get behind but to be
> completely candid (as always) I'm disappointed to see similar
> governance shenanigans to those that undermined the WS-I: "Red Hat,
> as the founder of REST-*, gets a permanent seat on the board. All
> other board members must be elected by the overall membership once a
> year". If it's not too late then please reconsider this position.

Bill Burke (Chief Architect at JBoss, Inc. last I checked) said
<https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000145.html>
they'd consider changes if it was a showstopper:

> What should it be changed to? So far the governance policies seem
> pretty liberal to me (since I wrote it). RHT as a permanent member of
> the board seems reasonable to me considering we started it and will
> be doing most of the work initially. *Then again, if its a show
> stopper it will be removed.*

Mark Little (CTO at JBoss, Inc.) chimed in
<https://fedorahosted.org/pipermail/deltacloud-devel/2009-September/000142.html>
the next morning saying Bill was on vacation and making it quite clear
that no such changes would be necessary:

> Bill's on vacation this week so he may have a different perspective,
> but I don't have a problem with the statement as it's made, i.e., Red
> Hat having a permanent position on the board. I've been involved with
> standards for 20+ years through OMG, OASIS, W3C, WS-I, GGF, JCP and
> others. What's proposed in the whole effort around REST-* is far less
> Evil Empire and far more Benign Coordinator. *If we have problems
> with the approach in the future of course we can re-examine it, but
> at this stage I think it's fine.*

That's all well and good to say but once Red Hat are burnt into the
woodwork it can be very difficult (if not impossible) to remove them.
What justification is there for them enjoying a privileged position
over their competitors, and the creator himself for that matter? Bill
sees this as their reward for "start[ing] it and [...] doing most of
the work initially" but why not just self-appoint for the interim and
then let the community vote you in based on your contributions?

Procedural issues aside I think it's safe to say that REST-* does not
(and will not) enjoy the support of the greater community so they
should "just close the stupid site down". There are plenty of safe and
effective ways we can promote REST without resorting to going down the
WS-* path by creating structures that unfairly reward some
participants at the cost of others. Mark Baker's wiki
<http://rest.blueoxen.net/cgi-bin/wiki.pl?FrontPage> could do with a
shot in the arm in terms of speed, branding, etc. (things the REST
Patterns wiki <http://restpatterns.org/> appears to have under control)
but these are things we can easily fix (and should imo). Where we need
more rigorous standardisation we can use existing processes like IETF
Internet-Drafts (as has been done before for e.g. POE
<http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt>).

As a point of reference, Red Hat are doing similar things in the cloud
space with the recent launch of Deltacloud
<http://press.redhat.com/2009/09/03/introducing-deltacloud/>.
A quick look at the libcloud-list archives
<https://www.redhat.com/archives/libcloud-list/2009-July/thread.html>
shows a swarm of @redhat.com names working together and either
completely ignoring
<https://www.redhat.com/archives/libcloud-list/2009-July/msg00028.html>
or dismissing
<https://www.redhat.com/archives/libcloud-list/2009-August/msg00003.html>
invitations for external collaboration. That's not to say that [L]GPL
contributions of specs and code aren't a good thing however they come
to be, but nothing I've seen so far suggests that Red Hat are genuinely
interested in working with the community at large. It's up to them to
prove me wrong *first*.

Sam

On Fri, Sep 18, 2009 at 7:00 AM, Alexander Johannesen
<alexander.johannesen@...> wrote:
> [full quote of Alexander's reply to Roy snipped; see earlier in the
> thread]
Isn't this the same discussion as REST-*? Isn't that what the guys at
REST-* are trying to do?

Jon Hanna wrote:
>
> Solomon Duskis wrote:
> >
> > Would it behoove the REST community to have a quasi-standards body?
> > Do you think that this kind of initiative, if done right, would be
> > a Good Thing (tm)?
>
> What would this body do?
Can anyone tell me what exactly is wrong with RFCs? I mean, except for
the fact that you need consensus with a community that didn't pay
massive amounts of money to suck up to other vendors and put a
marketing stamp on their new products...

I'm on the Microsoft side of the world, and we suffer vastly from a
large amount of misinformation and REST branding. That really hasn't
helped us. Anything driven by marketing requirements or market pressure
rather than good practice documentation and community consensus is
doomed to f*ck us painfully for years to come.

S.

> -----Original Message-----
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@...m] On
> Behalf Of António Mota
> Sent: 18 September 2009 12:25
> To: Jon Hanna
> Cc: REST-Discuss Discussion Group
> Subject: Re: [rest-discuss] Body @ REST?
>
> Isn't this the same discussion as REST-*? Isn't what the guys at
> REST-* are trying to do?
> [rest of quoted message snipped]
Sebastien Lambla wrote: > Can anyone tell me what exactly is wrong with RFCs? Yeah, pretty much my thinking. If you're working on a standard that could use or at least enable REST, then work so that it uses or at least enables REST. I can't see why we would want or need a "REST standard". For analogy; OO people get by fine without having an OO standard - there are plenty of standards for things which use or enable OO, but not for OO itself.
Jon Hanna wrote: > Sebastien Lambla wrote: >> Can anyone tell me what exactly is wrong with RFCs? > > Yeah, pretty much my thinking. > > If you're working on a standard that could use or at least enable > REST, then work so that it uses or at least enables REST. I can't see > why we would want or need a "REST standard". > > For analogy; OO people get by fine without having an OO standard - > there are plenty of standards for things which use or enable OO, but > not for OO itself. > Well, in the "other" discussion I said about the standards that > I think the way to try to standardize something will be to start at > the grass-roots, with what people trying to understand and apply REST > could deal with, and go from there up. And these things are clearly, > imo, patterns and best-practices. So I don't think a Body is needed for that, but if eventually such a community around "patterns and best-practices" would decide to create one (after a critical mass of patterns and best-practices were defined) I don't think that will bring evil to the world...
Hi Sam. Just a minor modification to what you've said here. I did go on in a subsequent email to you to state that: "As the community grows, if there are problems in the way in which it is managed then I would fully expect us to look at them and revisit any decisions. As I said above, this is a community effort. It's meant to be open. We don't want to fall into the same situation as, say, Sun with the JCP." That may have been too subtle a statement to make, but it was meant to convey the notion that I wouldn't necessarily expect a permanent position for anyone eventually (which is not the case for the JCP at present). As to the references concerning Cloud, I'll look into them. Red Hat is committed to working with open source communities. Thanks for bringing that to my attention. Mark. On 18 Sep 2009, at 12:14, Sam Johnston wrote: > > > Alexander, > > I'm 100% with Roy on this one. > > I pointed out a week or two ago that something was fishy with the > REST-* governance, with a view to having it changed so myself and > others could get involved: > > FWIW REST-* sounds like something I could get behind but to be > completely candid (as always) I'm disappointed to see similar > governance shenanigans to those that undermined the WS-I: "Red Hat, > as the founder of REST-*, gets a permanent seat on the board. All > other board members must be elected by the overall membership once a > year". If it's not too late then please reconsider this position. > > Bill Burke (Chief Architect at JBoss, Inc. last I checked) said > they'd consider changes if it was a showstopper: > > What should it be changed to? So far the governance policies seem > pretty liberal to me (since I wrote it). RHT as a permanent member > of the board seems reasonable to me considering we started it and > will be doing most of the work initially. Then again, if its a show > stopper it will be removed. > > Mark Little (CTO at JBoss, Inc.) 
chimed in the next morning saying > Bill was on vacation and making it quite clear that no such changes > would be necessary: > > Bill's on vacation this week so he may have a different perspective, > but I don't have a problem with the statement as it's made, i.e., > Red Hat having a permanent position on the board. I've been involved > with standards for 20+ years through OMG, OASIS, W3C, WS-I, GGF, JCP > and others. What's proposed in the whole effort around REST-* is far > less Evil Empire and far more Benign Coordinator. If we have > problems with the approach in the future of course we can re-examine > it, but at this stage I think it's fine. > > That's all well and good to say but once Red Hat are burnt into the > woodwork it can be very difficult (if not impossible) to remove > them. What justification is there for them enjoying a privileged > position over their competitors, and the creator himself for that > matter? Bill sees this as their reward for "start[ing] it and [...] > doing most of the work initially" but why not just self-appoint for > the interim and then let the community vote you in based on your > contributions? > > Procedural issues aside I think it's safe to say that REST-* does > not (and will not) enjoy the support of the greater community so > they should "just close the stupid site down". There are plenty of > safe and effective ways we can promote REST without resorting to > going down the WS-* path by creating structures that unfairly reward > some participants at the cost of others. Mark Baker's wiki could do > with a shot in the arm in terms of speed, branding, etc. (things the > REST Patterns wiki appears to have under control) but these are > things we can easily fix (and should imo). Where we need more > rigorous standardisation we can use existing process like IETF > Internet-Drafts (as has been done before for e.g. POE). 
> > As a point of reference, Red Hat are doing similar things in the > cloud space with the recent launch of Deltacloud. A quick look at > the libcloud-list archives shows a swarm of @... names > working together and either completely ignoring or dismissing > invitations for external collaboration. That's not to say that > [L]GPL contributions of specs and code aren't a good thing however > they come to be, but nothing I've seen so far suggests that Red Hat > are genuinely interested in working with the community at large. > It's up to them to prove me wrong first. > > Sam > > On Fri, Sep 18, 2009 at 7:00 AM, Alexander Johannesen <alexander.johannesen@... > > wrote: > > G`day Roy, > > > > On Fri, Sep 18, 2009 at 13:33, Roy T. Fielding <fielding@...> > wrote: > > Yes, we are all a cloud of atoms -- what's the point of > distinguishing > > us into types, shapes, or names? > > To point out that people make semantic mistakes all the time, that > everything is, indeed, miscellaneous, and that that should be a place > of collaboration rather than bickering. And to point out that > semantics, albeit small and puny and carries great significance, is > easy to get wrong. People do it all the time, in technical terms as in > other walks of life. > > ... > > > > No, that's not what middleware means (it certainly isn't tied to > OO). > > I said "traditional middleware", as in whatever ilk sprung out of the > enterprisey world, mostly. Yes, sure, that's not what middleware means > for the anal retentive, but I suspect that's what it means to Red Hat > et and enterprise developers throughout the world, a rather large > group of people. And I was being a bit tounge-in-cheek assessing > structures (resources, hierarchies, categories, tags, whatever) to > *simulate* OO more than anything, and certainly wasn't saying > middleware=OO. Sheesh. > > ... > > > >> People whinge a lot that Red Hat has too much of a bias. 
Um, > computer > >> scientists have a bias too, even people who have no bias have a > basic > >> philosophical bias, and these semantics *will* affect (as opposed to > >> mere observatory notions) the work that is done. The argument of > bias > >> is mostly moot (but not completely, of course). I personally get > the > >> feeling that these REST-* moves were genuinely of good nature, the > only > >> business bias that I can see is that they jumped on an idea and > went > >> with it. That's more than has happened in the REST world in a long > time, and > >> as such I *really* welcome it. > > > > Would you welcome it if you went to a car dealer, asked to see a > > new Camaro, and he insisted on showing you a Winnebago instead? > > > > Would you welcome it if you went to MacDonald's and ordered a > > filet-o-fish sandwich, and they gave you a Big Mac instead? > > > > WTF is wrong with you people? > > I think you're jumping the gun and seeing opposition and weird shit > where there isn't any, well, not in my camp, anyway. Let's change the > allegory to fit better with my point: Would you welcome it if you went > shopping and realized you had bought the wrong kind of salami? > > Eh, screw the allegories. All I was saying is that even if they are > misguided, kicking them in the shin is no good way to convince them > otherwise. All you're doing is creating enemies, and as much as we all > have our buckets of patience with people grasping what REST is all > about filled up pretty high, don't let it overflow. > > Why don't you trademark REST and just be done with it? Should be easy > to suss out any disputes if you really wanted to then? You know, just > for laughs? Linus did it, and so can you ... 
> > > > Red Hat (or, more accurately, JBoss) unilaterally decided to set > > themselves up as the equivalent of Sun within a JCP-like standards > > organization on a topic for which they not only had NOTHING to > > do with creating and know NOTHING about, but for which they > > actually sell products that are the exact opposite of RESTful > > architecture. > > Yeah, I think most people here are in agreement with that. It was a > stupid move, especially without some anchoring with hardcore REST > people or organisations. Or, you know, you. > > > > The organization claims that it will specify REST > > standards, that such standards will be led by benevolent dictators > > in the form of Spec Leads, that no overlapping proposals would be > > allowed (IOW, the first Spec Lead owns the entire topic as BDFL), > > and that Red Hat would be the only permanent member of the board > > to ensure that never changed. > > Bill has said they're open to change it, and no matter how evil you > think RH is, anything has to start somewhere. Why not just see it as an > honest starting point? The truth will be flushed out *extremely* quickly > if we all jumped on it and made it truly RESTful, no? (Assuming fixing > the governing model, of course, which Bill says they are happy to do. > I don't know these people, and can only take them at face value) > > > > And, guess what -- the organization starts with a couple Spec Leads > > owning the topics that JBoss wants to SELL YOU as REST, because > > they know damn well that their current market is shrinking: their > > entire product architecture is based on J2EE, the giant black > > hole of Java that is sucking itself into oblivion. > > Doesn't a lot of innovation happen this way, though? (And I'm not > stating their two specs are innovative; they're not) > > > > But, no, this won't be the oh-too-formal version of REST that > > is found in my dissertation. 
This will be the new, "Pragmatic", > > AssHat version of REST > > Well, yes, unless others get involved. They've thrown out the > invitation to us, too. Join it, and make sure it doesn't end up as > AssHatMiddleWare(TM) or something. > > > > Bill Burke is not evangelizing REST throughout the Java world. > > What he is evangelizing is the same old architectures that he > > evangelized as J2EE and WS-* -- the only difference is that now > > he starts off calling them REST. All he is doing is making > > people like you confused over what is REST and what is not, in > > the hope of making a business out of clambering onto yet > > another buzzword. > > People like me confused about REST? I'm a RESTafarian of over 5 years, > so no, that's not it. I think you're mistaking my meeker approach to > this hubbub as an embrace of their ways. I'm not. I'm just saying, > don't be a jerk even if you think they are jerks. The RESTafarian > movement needs less jerking off, and more stability and acceptance, > especially in the enterprise area (IMHO). If this could be tweaked > into such a channel, then I'm all for it. > > > > No, I see no reason to appreciate that nonsense. Not in the least. > > I don't care if it was an "honest mistake" or a deliberate attempt > > at fraud -- the result is the same. Only a complete idiot would > > participate in such a forum. > > Well, ouch, and ouch, indeed. So "honest mistake" is unforgivable and > unworkable in your world? > > > Regards, > > Alex > -- > Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic > Maps > --- http://shelter.nu/blog/ > ------------------ http://www.google.com/profiles/alexander.johannesen ---
2009/9/18 António Mota <amsmota@...> > > Isn't this the same discussion as REST-*? Isn't this what the guys at REST-* > are trying to do? Only the guys at REST-* know what the guys at REST-* are trying to do, and that is half of the problem... The only data points we have include their insistence on having a permanent position on the board (do we even need a board?), their disingenuous redirecting of rest-star.org to a site under the jboss.org domain ( http://www.jboss.org/reststar) and pressing on with complete disregard for [mostly] constructive criticism, including a "respectful" request from Roy to "remove REST from the name of [their] site". As for the specs themselves, it's been said that <http://apsblog.burtongroup.com/2009/09/rest-ive-got-a-bad-feeling-about-this.html> "the spec for REST-* Messaging is nothing more than a RESTful facade over JMS" which, while unsurprising given REST-*'s JBoss lineage, is unhelpful. I'd like for them to have given us reason to assume good faith but words without action are just words. As Antonio said: "*if eventually such a community around "patterns and best-practices" would decide to create one (after a critical mass of patterns and best-practices were defined) I don't think that will bring evil to the world*". I tend to agree with that and hope we can all focus on building a repository somewhere neutral and take further steps if and when necessary. Sam
TBD. If this Body@REST were created, this group would have to decide what it does. This group has had quite a few discussions on what a body like this should do in the context of the REST-* discussions. Here are some ideas from the thread: 1. Define what REST is, including appropriate literature 2. Rank RESTfulness 3. Define current patterns and best practices. 4. Collaborate with appropriate protocol committees I'd personally like an official description of specific REST architectures, such as ROA, but I'm not tied to that. -Solomon On Fri, Sep 18, 2009 at 6:24 AM, Jon Hanna <jon@...> wrote: > > > Solomon Duskis wrote: > > > > > > Would it behoove the REST community to have a quasi-standards body? Do > > you think that this kind of initiative, if done right, would be a Good > > Thing (tm)? > > What would this body do? > > >
I'm gleaning a general feeling here that the root problem is that a Vendor has started a proposed standards body whose standards, while they may approach RESTful principles, are not really REST, while other pattern-based approaches exist from people who *do* get REST. In other words, since said Vendor has provided evidence that they do not really get it, the general community doesn't like the idea of that Vendor having a role in leading any such body. I think that Red Hat/JBoss would do better to contribute to the existing wikis and continue to learn from the community before trying to start something that will most likely lead to a fight over the term REST and to another schism that will create another new acronym. We already have WS-* and even MEST as available terms for new RPC approaches. Why do we have to corrupt REST, too? I agree with Sebastien in the other thread: "what's wrong with RFC's?" Ryan Riley ryan.riley@... http://panesofglass.org/ http://wizardsofsmart.net/
On Fri, Sep 18, 2009 at 12:54 AM, Jim Webber <jim@...> wrote: > > > Well, that was the idea anyways...Whether it would work in practice, > > I don't know. > > It does, I have empirical evidence :-) Though I wouldn't get so hung > up on XML schemas - media types can define processing models for other > base formats too. I too have experienced positive results using media types to define the processing models (or link semantics; or service contracts; those are all the same thing in my mind). My experiences even include the introduction of non-backwards-compatible changes to applications using this model. In practice media types work well for this use case. -- Peter Williams http://barelyenough.org
I would put 3. as 1. It's better to go from practice to theory, from the particular to the general, from the concrete to the abstract. Actually, I think that was what Roy Fielding did in his thesis, he started from what already existed and "theorized" about it in order to put it in "formal" terms... Solomon Duskis wrote: > > TBD. If this Body@REST would be created, this group would have to > decide what it does. This group had quite a few discussions on the > "what's" a body like this should do in the context of the REST-* > discussions. > > Here are some ideas from the thread: > > 1. Define what REST is, including appropriate literature > 2. Rank RESTfulness > 3. Define current patterns and best practices. > 4. Collaborate with appropriate protocol committees > > I'd personally would like an official description of specific REST > architectures, such as ROA, but I'm not tied to that. > > -Solomon > > On Fri, Sep 18, 2009 at 6:24 AM, Jon Hanna <jon@... > <mailto:jon@...>> wrote: > > > Solomon Duskis wrote: > > > > > > Would it behoove the REST community to have a quasi-standards > body? Do > > you think that this kind of initiative, if done right, would be > a Good > > Thing (tm)? > > What would this body do? > > >
__Message Change__ * It is now an open source project. * We will be publishing the final content on IETF as a set of RFCs. * We're still focusing on middleware and middleware services. "REST-* is an open source project dedicated to bringing the architecture of the web to traditional middleware services." "REST has the potential to re-define how application developers interact with traditional middleware services. The REST-* community aims to re-examine which of these traditional services fit within the REST model by defining new standards, guidelines, and specifications. Where appropriate, any end product will be published at the IETF." __Governance changes__ * No more trying to be a better JCP. We'll let the IETF RFC process govern us when we're ready to submit something. * An open source contributor agreement similar to what Apache, Eclipse or JBoss have to protect users and contributors. (FYI we already required ASL, open source processes, NO-field-of-use restrictions, etc...) If you have any other suggestions, let me know: http://www.jboss.org/reststar/community/gov2.html __RESTful Interfaces for Un-RESTful Services__ Many traditional middleware services do not fit into the RESTful style of development. An example is 2PC transaction management. Still, these services can benefit from having their distributed interface defined RESTfully. The nomenclature will be RESTful Service vs. RESTful Interface. * 2PC transactions would be considered a RESTful Interface under REST-*.org, meaning using it makes your stuff less RESTful, but at least the service has a RESTful interface. * Messaging, compensations, and workflow services would be considered "RESTful Services" that fit in the model. __GUIDELINES SECTION__ This is where I want to talk about how existing patterns, RFCs and such fit in with the rest of what we're doing. An example here could be Security. What authentication models are good when? When should you use OAuth and OpenID? 
How could something like OAuth interact with middleware services? Some of this stuff is already up on the website. (You may have to reload it to see it due to cache-control policies.) Finally, apologies for the jboss.org redirection. It is a problem with our infrastructure. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Sep 18, 2009, at 4:27 PM, Bill Burke wrote: > > If you have any other suggestions, let me know: > I think the term "REST-*" is really conveying the wrong image. Any chance to change that? (Or has it already been branded? ;-) -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Jan Algermissen wrote: > > On Sep 18, 2009, at 4:27 PM, Bill Burke wrote: > >> >> If you have any other suggestions, let me know: >> > > I think the term "REST-*" is really conveying the wrong image. Any > chance to change that? > Are the changes what you wanted? Or was the name all you were hung up on? Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Thu, Sep 17, 2009 at 2:44 PM, Bill Burke <bburke@...> wrote: > > The client is guaranteed that a set of link relationships will exist > within the order-entry representation because of the schema backing the > media type. Machine-based clients (at least those that are application > driven) can't guess how to traverse links. They have to know ahead of > time what to do. > Often you do not want to guarantee that all of the links will exist. The client needs to adapt to the availability of the links that it knows about. In an order processing scenario you would not want to provide a link to "process" the order until all of the required information is entered, and you would not want a "cancel" link until after the order has been submitted. Defining a contract in order to "guarantee that a set of link relationships will exist" causes you to lose what I perceive as a significant benefit of HATEOAS. Darrel
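[Editorial aside] Darrel's point can be made concrete with a small sketch (not from the thread; the order fields and URI layout are invented for illustration): the server decides which link relations to emit from the order's current state, so clients react to whichever links are present instead of relying on a contract that guarantees all of them.

```python
# Hypothetical sketch of state-dependent link generation, as Darrel
# describes: no "process" link until the order is complete, and no
# "cancel" link until the order has been submitted.

def order_links(order):
    """Return the link relations valid for this order's state."""
    base = "/orders/%s" % order["id"]   # illustrative URI layout
    links = {"self": base}
    if order["complete"] and order["state"] == "draft":
        links["process"] = base + "/process"
    if order["state"] == "submitted":
        links["cancel"] = base + "/cancel"
    return links
```

A client written against this style inspects the returned relations rather than assuming them, which is the adaptability Darrel argues a schema-backed guarantee would throw away.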
Darrel Miller wrote: > On Thu, Sep 17, 2009 at 2:44 PM, Bill Burke <bburke@...> wrote: >> The client is guaranteed that a set of link relationships will exist >> within the order-entry representation because of the schema backing the >> media type. Machine-based clients (at least those that are application >> driven) can't guess how to traverse links. They have to know ahead of >> time what to do. >> > > Often you do not want to guarantee that all of the links will exist. > The client needs to adapt to the availability of the links that it > knows about. In an order processing scenario you would not want to > provide a link to "process" the order until all of the required > information is entered, and you would not want a "cancel" link until > after the order has been submitted. > +1 see my other response...It was just an idea anyways... Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill, On Sep 18, 2009, at 7:20 PM, Bill Burke wrote: > > > Jan Algermissen wrote: >> On Sep 18, 2009, at 4:27 PM, Bill Burke wrote: >>> >>> If you have any other suggestions, let me know: >>> >> I think the term "REST-*" is really conveying the wrong image. Any >> chance to change that? > > Are the changes what you wanted? I have not taken an analytical look at the new wording, but I think it is quite impressive that you were able to come up with these (based on my short look) rather radical changes in a very short time! > Or was it all you were hung up on was the name? > No, not just the name. But the name is really too close to WS-*. Jan > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
I would be tempted to replace a standards body with a Wiki promoting best practices ...Having a reputable site where one could see restful design patterns laid out might be beneficial to many trying to understand restful design. On Thu, Sep 17, 2009 at 5:57 PM, Solomon Duskis <sduskis@...> wrote: > > > Would it behoove the REST community to have a quasi-standards body? Do you > think that this kind of initiative, if done right, would be a Good Thing > (tm)? > > -Solomon > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
The following statement is on the REST-* architectural goals page: "Whenever possible, avoid envelope formats. Examples of envelope formats are SOAP and Atom. Envelope formats encourage tunneling over HTTP instead of leveraging HTTP. They also require additional complexities on both the client and the server." Is this elaborated on somewhere? I don't think I've ever heard the argument made before and I'm not sure I get why an envelope format is intrinsically good or bad in a protocol. It seems orthogonal to whether something is RESTful or not. --Chuck
--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > __Message Change__ > * It is now an open source project. > * We will be publishing the final content on IETF as a set of RFCs. > * We're still focusing on middleware and middleware services. > Most notably, however, is what part of the message you are unwilling to change: The name REST-* itself. As Roy Fielding has adequately warned you, with his poignant example of CORBA as "Web Services", this will only confuse customers. It is basically a shame Roy doesn't own some trademark, to block such confusion. At least you've heard his voice.
I believe that the argument comes from SOAP masking capabilities that are found in the HTTP protocol, effectively neutering the leverage you can get with HTTP (Caching, Routing, etc.). WS-Addressing is a good example of a standard that was created to basically replicate URL semantics through HTTP. I'm not sure how Atom got lumped in there except for the fact that content can be stuffed into the entire document feed instead of relying on links to the content. -Noah On Fri, Sep 18, 2009 at 12:15 PM, Chuck Hinson <chuck.hinson@...> wrote: > The following statement is on the REST-* architectural goals page: > > "Whenever possible, avoid envelope formats. Examples of envelope > formats are SOAP and Atom. Envelope formats encourage tunneling over > HTTP instead of leveraging HTTP. They also require additional > complexities on both the client and the server. > > Is this elaborated on somewhere? I don't think I've ever heard the > argument made before and I'm not sure I get why an envelope format is > intrinsically good or bad in a protocol. It seems orthogonal to > whether something is RESTful or not. > > > --Chuck > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Envelope formats, if not designed and used carefully, can reduce the visibility of the uniform interface. An example is an application encoding some "application/foobar" within atom:content. When used like this, the protocol aspects become less useful, which is the same as tunneling. HTTP does include an envelope format, although it is rarely described as such. HTTP messages use a MIME-like format "containing metainformation about the data transferred and modifiers on the request/response semantics" (sec 1.1, RFC-2616). This format is visible and extensible. When you start to design representations based on this characteristic, you may find that there is no need for any other payload format. Subbu On Sep 18, 2009, at 12:15 PM, Chuck Hinson wrote: > The following statement is on the REST-* architectural goals page: > > "Whenever possible, avoid envelope formats. Examples of envelope > formats are SOAP and Atom. Envelope formats encourage tunneling over > HTTP instead of leveraging HTTP. They also require additional > complexities on both the client and the server. > > Is this elaborated on somewhere? I don't think I've ever heard the > argument made before and I'm not sure I get why an envelope format is > intrinsically good or bad in a protocol. It seems orthogonal to > whether something is RESTful or not. > > --Chuck >
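[Editorial aside] Subbu's point that HTTP itself is a visible envelope can be illustrated with a hypothetical sketch (the message shapes and the `Message-Id` extension header are invented for illustration): metadata buried inside a payload envelope is opaque to caches and other intermediaries, while the same metadata promoted into HTTP headers is visible to them.

```python
# Invented example: the same message with its metadata tunneled inside
# an envelope payload, and a helper that promotes that metadata into
# HTTP headers, leaving a bare payload intermediaries can reason about.

enveloped = {
    "headers": {"Content-Type": "application/soap+xml"},
    "body": {
        "meta": {"expires": "Fri, 18 Sep 2009 00:00:00 GMT",
                 "msg-id": "abc-123"},
        "data": "<order>42</order>",
    },
}

def promote_metadata(message):
    """Lift envelope metadata into HTTP headers (sketch, not a library)."""
    meta = message["body"]["meta"]
    headers = dict(message["headers"])
    headers["Expires"] = meta["expires"]     # standard HTTP header
    headers["Message-Id"] = meta["msg-id"]   # assumed extension header
    headers["Content-Type"] = "application/xml"
    return {"headers": headers, "body": message["body"]["data"]}
```

In the enveloped form a cache sees only `application/soap+xml`; in the promoted form it can act on `Expires` directly, which is the visibility Subbu describes.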
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > Or was it all you were hung up on was the name? > > > > No, not just the name. But the name is really too close to WS-*. > > Jan It's not just too close to WS-*. It is too close to the platonic ideal of REST itself. It implies REST is about standardization of data exchange formats within and across industries. It is also not a small leap for people then to think that the data exchange formats are what REST is all about. The name is just bad taste, and confusing. The content of your message is improving, but your logo is still a puzzling point. So it's not as simple as "_all_ you were hung up on was the name". The bad name suggested, I think, to most of us that you were coercing REST for marketing purposes. We're responding with a fair level of consumer fear, uncertainty and doubt (as opposed to corporate FUD). You're basically now burdening us with explaining to our COO why the "REST-*" he heard about in Delta Airlines In-flight Magazine isn't REST.
On Sep 18, 2009, at 9:36 PM, Noah Campbell wrote: > I'm not sure how Atom got lumped in there except for the fact that > content can be stuffed into the entire document feed instead of > relying on links to the content. I used to see Atom as *the* means to bundle documents and links into a single message. Now that the Link header has been revived the need for using Atom as an envelope format has declined. (Not questioning the usefulness of Atom itself here) Bottom line: if you can put your meta data into the HTTP header think hard before using an envelope format. Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Noah Campbell wrote: > I'm not sure how Atom got lumped in there except for the fact that > content can be stuffed into the entire document feed instead of relying > on links to the content. > Atom was lumped in because I see people using it to exchange messages between applications for no reason other than the hype of the protocol itself. I did prototype a few things with Atom when I added support for it within RESTEasy. For doing the types of applications I'm used to doing, Atom just got in the way. It made more sense to leverage HTTP. Even with links within Atom, you end up screaming "I just want the bleepin message!". Yeah, sure, you could have a framework that hides the fact that you're sending Atom around to make things easier for you, but that is an anti-pattern in and of itself. Also, I didn't make this decision lightly. By way of analogy: I was very skeptical of REST at first, but the more I read about it the more I was convinced it was the right approach for many things. I've had quite the opposite experience with Atom. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Jan Algermissen wrote: > > > > On Sep 18, 2009, at 9:36 PM, Noah Campbell wrote: > > > I'm not sure how Atom got lumped in there except for the fact that > > content can be stuffed into the entire document feed instead of > > relying on links to the content. > > I used to see Atom as *the* means to bundle documents and links into a > single message. Now that the Link header has been revived the need for > using Atom as an envelope format has declined. (Not questioning the > usefulness of Atom itself here) > Yes, and considering the revival of Link headers as you say, multipart/* becomes an even nicer format to "bundle documents and links into a single message", as it was designed to support and transfer formats other than text. > Bottom line: if you can put your meta data into the HTTP header think > hard before using an envelope format. > I can't agree more, but then again, you already knew that... -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
But envelopes, from a business process point of view, do sometimes make sense, and then we have to use them, no? We are using SyncML as an envelope in an inter-application app where we only control one side of it; we *have* to use SOAP for the outgoing messages and we *choose* to use MQ to receive them, the receiving part being a JMS connector that connects (!) to our REST infrastructure. No HTTP, no headers other than the ones that are part of the transport layer (we use a lot of the HTTP headers in all the connectors), and for sure no headers that are related to the *business* process. And no, this is not tunneling HTTP over JMS, we just defined our Uniform Interface based on the HTTP uniform interface and use it on other protocols as well. So, a distinction between these two types of envelopes has to be made. _______________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota mobile: +353(0)877718363 mailto: amsmota@... mailto: antonio.mota@meridianglobalservices.com skype: amsmota msn: antoniomsmota@... profile: http://www.linkedin.com/in/amsmota cv: http://docs.google.com/View?id=ddghngm7_24fdw5hmc7 _______________________________________________ 2009/9/18 Jan Algermissen <algermissen1971@...> > > > > On Sep 18, 2009, at 9:36 PM, Noah Campbell wrote: > > > I'm not sure how Atom got lumped in there except for the fact that > > content can be stuffed into the entire document feed instead of > > relying on links to the content. > > I used to see Atom as *the* means to bundle documents and links into a > single message. Now that the Link header has been revived the need for > using Atom as an envelope format has declined. (Not questioning the > usefulness of Atom itself here) > > Bottom line: if you can put your meta data into the HTTP header think > hard before using an envelope format. 
> > Jan > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@acm.org <algermissen%40acm.org> > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > >
> Now that the Link header has been revived I'm curious to learn more...can you point the way to more info. -Noah On Fri, Sep 18, 2009 at 12:48 PM, Jan Algermissen <algermissen1971@...>wrote: > > On Sep 18, 2009, at 9:36 PM, Noah Campbell wrote: > > I'm not sure how Atom got lumped in there except for the fact that content >> can be stuffed into the entire document feed instead of relying on links to >> the content. >> > > > I used to see Atom as *the* means to bundle documents and links into a > single message. Now that the Link header has been revived the need for using > Atom as an envelope format has declined. (Not questioning the usefullness of > Atom itself here) > > Bottom line: if you can put your meta data into the HTTP header think hard > before using an envelope format. > > Jan > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > >
On Sep 18, 2009, at 10:31 PM, Noah Campbell wrote: > > > > Now that the Link header has been revived > > I'm curious to learn more...can you point the way to more info. http://tools.ietf.org/html/draft-nottingham-http-link-header-06 Jan > > -Noah > > On Fri, Sep 18, 2009 at 12:48 PM, Jan Algermissen <algermissen1971@... > > wrote: > > On Sep 18, 2009, at 9:36 PM, Noah Campbell wrote: > > I'm not sure how Atom got lumped in there except for the fact that > content can be stuffed into the entire document feed instead of > relying on links to the content. > > > I used to see Atom as *the* means to bundle documents and links into > a single message. Now that the Link header has been revived the need > for using Atom as an envelope format has declined. (Not questioning > the usefullness of Atom itself here) > > Bottom line: if you can put your meta data into the HTTP header > think hard before using an envelope format. > > Jan > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > > > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
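[Editorial aside] For readers following the draft Jan points to, a Link header carries typed links as message metadata, e.g. `Link: </page/2>; rel="next", </page/1>; rel="prev"`. A minimal parser sketch (it handles only the simple single-parameter case; a full implementation must also deal with quoted strings containing commas, multiple parameters, and extension attributes):

```python
import re

def parse_link_header(value):
    """Map rel -> target URI for a simple Link header value.

    Sketch only: assumes one rel parameter per link and no commas
    inside quoted strings.
    """
    links = {}
    for part in value.split(","):
        # <URI-Reference>; rel="relation"  (quotes around rel optional)
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links
```

A client can then follow `links["next"]` without ever parsing the entity body, which is exactly why the revived header reduces the need for Atom as an envelope.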
I'd suggest that you give http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ another read (it being addressed to you, in fact). That a site purporting to be about REST recommends *against* Atom is simply ludicrous. Of course there are "wrong" ways to use Atom (as with anything), but the AtomPub protocol is the best, most concise and well-considered example we have of a RESTful protocol. Misunderstanding AtomPub means you misunderstand REST. I'm with the folks who suggest that this effort not include "REST" in the name. --peter keane On Fri, Sep 18, 2009 at 2:56 PM, Bill Burke <bburke@...> wrote: > > > > > Noah Campbell wrote: > > I'm not sure how Atom got lumped in there except for the fact that > > content can be stuffed into the entire document feed instead of relying > > on links to the content. > > > > Atom was lumped in because I see people using it to exchange messages > between applications for no other reason other than the hype of the > protocol itself. > > I did prototype a few things with Atom when I added support for it > within RESTEasy. For doing the types of applications I'm used to doing, > Atom just got in the way. It made more sense to leverage HTTP. > > Even with links within Atom, you end up screaming "I just want the > bleepin message!". Yeah, sure you could have a framework that hides > that you're sending Atom around to make things easier for you, but that > is an anti-pattern in and of itself. > > Also, I didn't make this decision lightly. For an analogy I was very > skeptical of REST at first, the more I read about it the more I was > convinced it was the right approach for many things. I've had quite the > opposite experience with Atom. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > >
I think it is a stretch to say that AtomPub is the "best, most concise and well-considered" example of a RESTful protocol. It is just a profile of HTTP for a particular class of resources and use cases. In any case, Bill's comment below as well as the link from the other Bill's blog are about the Atom format. Subbu On Sep 18, 2009, at 1:59 PM, Peter Keane wrote: > I'd suggest that you give http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ > another read (being address to you, in fact). > That a site purporting to be about REST recommends *against* Atom is > simply ludicrous. Of course there are "wrong" ways to use Atom (as > with anything), but AtomPub protocol is the best, most concise and > well-considered example we have of a RESTful protocol. > Misunderstandinding AtomPub means you misunderstand REST. I'm with > the folks who suggest that this effort not include "REST" in the name. > > --peter keane > > > > On Fri, Sep 18, 2009 at 2:56 PM, Bill Burke <bburke@...> wrote: > > > > > Noah Campbell wrote: > > I'm not sure how Atom got lumped in there except for the fact that > > content can be stuffed into the entire document feed instead of > relying > > on links to the content. > > > > Atom was lumped in because I see people using it to exchange messages > between applications for no other reason other than the hype of the > protocol itself. > > I did prototype a few things with Atom when I added support for it > within RESTEasy. For doing the types of applications I'm used to > doing, > Atom just got in the way. It made more sense to leverage HTTP. > > Even with links within Atom, you end up screaming "I just want the > bleepin message!". Yeah, sure you could have a framework that hides > that you're sending Atom around to make things easier for you, but > that > is an anti-pattern in and of itself. > > Also, I didn't make this decision lightly. 
For an analogy I was very > skeptical of REST at first, the more I read about it the more I was > convinced it was the right approach for many things. I've had quite > the > opposite experience with Atom. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > >
Hi guys, I was wondering how to do a PUT/POST/DELETE call on a resource specifying the precondition that it doesn't exist in the first place. Should non-existing resources (those which return a 404 upon GET/HEAD) specify some ETag as well then use it on the former calls? Moreover, upon a successful DELETE, should I issue another ETag as well? As a side question, is it ok to use DELETE when it 'clears a list'. Say, I want to clear my shopping cart, so I delete it. However, when I GET it later on, it simply says that it's empty rather than it doesn't exist. The reason why I don't use PUT is that I don't want to allow direct modification as a result of some assertion to the state of the resource other than clearing it. Jan Vincent Liwanag jvliwanag@...
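For what it's worth, HTTP answers the first question with conditional request headers rather than with ETags for resources that don't exist yet: `If-None-Match: *` makes a PUT succeed only when no current representation exists (RFC 2616, sec. 14.26), and `If-Match` guards updates against lost writes. A minimal sketch of that precondition logic — the helper name and the dict-shaped headers are made up for illustration:

```python
def check_put_preconditions(headers, current_etag):
    """Return None to proceed with the PUT, or 412 (Precondition Failed).

    current_etag is the stored ETag of the resource, or None when the
    resource does not exist yet (i.e. a GET would return 404).
    """
    if headers.get("If-None-Match") == "*":
        # Create-only semantics: fail if any representation already exists.
        return 412 if current_etag is not None else None
    if "If-Match" in headers:
        # Update-only semantics: fail unless the stored ETag still matches.
        return None if headers["If-Match"] == current_etag else 412
    return None  # unconditional PUT


# Creating a resource that must not already exist:
assert check_put_preconditions({"If-None-Match": "*"}, None) is None
# The same request against an existing resource is rejected:
assert check_put_preconditions({"If-None-Match": "*"}, '"v1"') == 412
```

On this reading, no new ETag needs to be issued after a successful DELETE: the next conditional create can simply send `If-None-Match: *` again.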
Indeed. We have message and entity headers. It's like a big elephant in the room that some crowds pretend is not there because they're headers. If it doesn't fit in an HTTP header, you're probably doing it wrong. > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Subbu Allamaraju > Sent: 18 September 2009 20:37 > To: Chuck Hinson > Cc: Rest List > Subject: Re: [rest-discuss] Avoid envelope formats > > Envelope formats, if not designed and used carefully, can reduce the > visibility of the uniform interface. An example is an application > encoding some "application/foobar" within atom:content. When used like > this, the protocol aspects become less useful, which is the same as > tunneling. > > HTTP does include an envelope format, although it is rarely described > as such. HTTP messages use a MIME-like format "containing > metainformation about the data transferred and modifiers on the > request/response semantics" (sec 1.1, RFC-2616). This format is > visible and extensible. When you start to design representations based > on this characteristic, you may find that there is no need for any > other payload format. > > Subbu > > On Sep 18, 2009, at 12:15 PM, Chuck Hinson wrote: > > > The following statement is on the REST-* architectural goals page: > > > > "Whenever possible, avoid envelope formats. Examples of envelope > > formats are SOAP and Atom. Envelope formats encourage tunneling over > > HTTP instead of leveraging HTTP. They also require additional > > complexities on both the client and the server. > > > > Is this elaborated on somewhere? I don't think I've ever heard the > > argument made before and I'm not sure I get why an envelope format is > > intrinsically good or bad in a protocol. It seems orthogonal to > > whether something is RESTful or not. > > > > --Chuck > > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
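To make Subbu's point concrete: the metadata that envelope formats carry inside the payload can usually ride in the message headers HTTP already defines, including typed links. A sketch of serializing such a response — the function and the Link-header convention shown here are illustrative, not taken from any particular framework:

```python
def serialize_response(body: bytes, media_type: str, links: dict) -> str:
    """Build an HTTP response whose metadata lives in visible headers
    rather than inside an envelope wrapped around the body."""
    headers = [
        ("Content-Type", media_type),
        ("Content-Length", str(len(body))),
    ]
    # Typed links as Link headers instead of atom:link elements in the body.
    for rel, href in links.items():
        headers.append(("Link", '<%s>; rel="%s"' % (href, rel)))
    head = "\r\n".join("%s: %s" % (name, value) for name, value in headers)
    return "HTTP/1.1 200 OK\r\n" + head + "\r\n\r\n" + body.decode("utf-8")


resp = serialize_response(
    b'{"id": 333, "status": "open"}',
    "application/json",
    {"self": "http://example.com/orders/333"},
)
```

The payload stays "just the bleepin message" while intermediaries can still see and act on the metadata.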
But using AtomPub as the next trendy way to encapsulate any kind of data is ludicrous. AtomPub is good for document exchange, where mapping between a document and well known semantics of such a document in a UA is worth it, but using it as I see it these days, to fetch data access, contact sync etc, is ludicrous. The people that are pushing AtomPub as the answer to all of our problems are, unsurprisingly, the same guys that said soap envelopes would solve world hunger. It'd be great if people stopped using specialized app protocols in the name of "framework reusability". That's exactly what got us into the SOAP mess, and exactly where some vendors (*cough* Microsoft *cough*) are getting to. Sad because yet again, we're hitting the architects' intellectual masturbation of framework reuse. From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Peter Keane Sent: 18 September 2009 22:00 To: Bill Burke Cc: Noah Campbell; Chuck Hinson; Rest List Subject: Re: [rest-discuss] Avoid envelope formats I'd suggest that you give http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ another read (being addressed to you, in fact). That a site purporting to be about REST recommends *against* Atom is simply ludicrous. Of course there are "wrong" ways to use Atom (as with anything), but AtomPub protocol is the best, most concise and well-considered example we have of a RESTful protocol. Misunderstanding AtomPub means you misunderstand REST. I'm with the folks who suggest that this effort not include "REST" in the name. --peter keane On Fri, Sep 18, 2009 at 2:56 PM, Bill Burke <bburke@...> wrote: Noah Campbell wrote: > I'm not sure how Atom got lumped in there except for the fact that > content can be stuffed into the entire document feed instead of relying > on links to the content. 
> Atom was lumped in because I see people using it to exchange messages between applications for no other reason other than the hype of the protocol itself. I did prototype a few things with Atom when I added support for it within RESTEasy. For doing the types of applications I'm used to doing, Atom just got in the way. It made more sense to leverage HTTP. Even with links within Atom, you end up screaming "I just want the bleepin message!". Yeah, sure you could have a framework that hides that you're sending Atom around to make things easier for you, but that is an anti-pattern in and of itself. Also, I didn't make this decision lightly. For an analogy I was very skeptical of REST at first, the more I read about it the more I was convinced it was the right approach for many things. I've had quite the opposite experience with Atom. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Peter Keane wrote: > I'd suggest that you give > http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ > another read (being address to you, in fact). > That a site purporting to be about REST recommends *against* Atom is > simply ludicrous. Of course there are "wrong" ways to use Atom (as with > anything), but AtomPub protocol is the best, most concise and > well-considered example we have of a RESTful protocol. > Misunderstandinding AtomPub means you misunderstand REST. I'm with the > folks who suggest that this effort not include "REST" in the name. > FWIW, it was Anne Thomas Manes who declared that Bill Burke thought Atom was unRESTful, not Bill Burke himself. Even a year after de Hora's blog I still think Atom is overkill for use cases other than what it was designed for. But unRESTful? Please don't put words into my mouth. Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Fri, Sep 18, 2009 at 7:25 PM, Sebastien Lambla <seb@...> wrote:
> But using AtomPub as the next trendy way to encapsulate any kind of data
> is ludicrous. AtomPub is good for document exchange, where mapping between a
> document and well known semantics of such a document in a UA is worth it,
> but using it as I see it these days, to fetch data access, contact sync etc,
> is ludicrous.
>
>
>
Sorry -- that's unfair. There are numerous successful implementations of
AtomPub used in all sorts of non-blogging contexts for which it is perfectly
suitable. There are some who refuse to admit that it has any use outside of
updating a blog, but I'd wholeheartedly disagree. (Again, read Bill
DeHora's piece). And since when is AtomPub trendy?? ("Trendy" seems to be
the trendy put down du jour).
I don't know where you "see it these days" that you find ludicrous. As I
said, there are plenty of bad ways to use Atom. But in my experience Atom
is *way* underused -- folks preferring simple "custom xml"
(non-standardized), or impenetrable, impossible-to-validate JSON in cases
when Atom would be perfectly suitable.
> The people that are pushing AtomPub as the answer to all of our problems
> are, unsurprisingly, the same guys that said soap envelopes would solve
> world hunger.
>
I'm pushing AtomPub, for sure (I've never had even the slightest interest in
SOAP). We've had incredibly good luck with it (we use it as the interface
to a large, widely-used Digital Object repository at UT Austin). It's
allowed us to grow and maintain our system with a very small staff, train
student developers with a modicum of programming experience to build
incredibly media-rich web sites, and give a stable back-end for contract
programmers building higher-end content management applications.
The thought occurs -- if you are referring to the CMIS effort (which uses
Atom/AtomPub), I'd agree wholeheartedly. Roy F's frank take on that effort
is at http://roy.gbiv.com/untangled/tag/cmis . It was, last I looked, a
really, really poor (mis)use of Atom (and betrays a half-hearted-at-best
attempt to be RESTful). To suggest in any way that this is a failing of
Atom/AtomPub is way off base.
> It'd be great if people stopped using specialized app protocols in the name
> of "framework reusability". That's exactly what got us into the SOAP mess,
> and exactly where some vendors (*cough* Microsoft **cough**) are getting
> to.
>
>
>
Sorry, I've lost your point here. What I would suggest is that anyone
interested in understanding REST could do much worse than starting with RFC
5023 http://bitworking.org/projects/atom/rfc5023.html (Atom Publishing
Protocol). I'm *not* saying that you go and use Atom for every need (in
fact, if you grok the spec you'd be much less likely to do that). But as I
said before, if you can't or won't try to understand AtomPub, you probably
can't or won't really understand REST.
--peter keane
> Sad because yet again, we're hitting the architects' intellectual
> masturbation of framework reuse.
>
>
>
>
>
>
>
> *From:* rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
> *On Behalf Of *Peter Keane
> *Sent:* 18 September 2009 22:00
> *To:* Bill Burke
> *Cc:* Noah Campbell; Chuck Hinson; Rest List
> *Subject:* Re: [rest-discuss] Avoid envelope formats
>
>
>
>
>
> I'd suggest that you give
> http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ another read (being addressed to you, in fact).
> That a site purporting to be about REST recommends *against* Atom is simply
> ludicrous. Of course there are "wrong" ways to use Atom (as with anything),
> but AtomPub protocol is the best, most concise and well-considered example
> we have of a RESTful protocol. Misunderstanding AtomPub means you
> misunderstand REST. I'm with the folks who suggest that this effort not
> include "REST" in the name.
>
> --peter keane
>
> On Fri, Sep 18, 2009 at 2:56 PM, Bill Burke <bburke@...> wrote:
>
>
>
>
>
> Noah Campbell wrote:
> > I'm not sure how Atom got lumped in there except for the fact that
> > content can be stuffed into the entire document feed instead of relying
> > on links to the content.
> >
>
> Atom was lumped in because I see people using it to exchange messages
> between applications for no other reason other than the hype of the
> protocol itself.
>
> I did prototype a few things with Atom when I added support for it
> within RESTEasy. For doing the types of applications I'm used to doing,
> Atom just got in the way. It made more sense to leverage HTTP.
>
> Even with links within Atom, you end up screaming "I just want the
> bleepin message!". Yeah, sure you could have a framework that hides
> that you're sending Atom around to make things easier for you, but that
> is an anti-pattern in and of itself.
>
> Also, I didn't make this decision lightly. For an analogy I was very
> skeptical of REST at first, the more I read about it the more I was
> convinced it was the right approach for many things. I've had quite the
> opposite experience with Atom.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
>
>
>
>
>
>
On Fri, Sep 18, 2009 at 7:55 PM, Bill Burke <bburke@...> wrote: > Peter Keane wrote: > >> I'd suggest that you give >> http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/ another read (being addressed to you, in fact). >> That a site purporting to be about REST recommends *against* Atom is >> simply ludicrous. Of course there are "wrong" ways to use Atom (as with >> anything), but AtomPub protocol is the best, most concise and >> well-considered example we have of a RESTful protocol. Misunderstanding >> AtomPub means you misunderstand REST. I'm with the folks who suggest that >> this effort not include "REST" in the name. >> >> > FWIW, it was Anne Thomas Manes who declared that Bill Burke thought Atom > was unRESTful, not Bill Burke himself. > > Even a year after de Hora's blog I still think Atom is overkill for use > cases other than what it was designed for. > > But unRESTful? Please don't put words into my mouth. > Bill- Please don't put words in *my* mouth. I didn't say you said AtomPub was unRESTful -- I said your web site recommended against using it. This is all starting to sound like architectural astronautics (by me, too) -- broad generalizations and value judgements thrown around utterly out of specific contexts. This is exactly what the REST world does *not* need, and I fear what an effort like REST-* is bound to lead us towards. I'll take instruction on How to get a cup of Coffee http://www.infoq.com/articles/webber-rest-workflow, How to explain REST to a manager http://tomayko.com/writings/rest-to-my-wife, or how to do what needs doing http://www.restful-webservices-cookbook.org/ (struggles & tough design decisions and all) any day over a *marketing* effort. --peter keane > Bill > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
johnzabroski wrote: > > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Jan Algermissen > <algermissen1971@...> wrote: > > > > Or was it all you were hung up on was the name? > > > > > > > No, not just the name. But the name is really too close to WS-*. > > > > Jan > > It's not just too close to WS-*. It is too close to the platonic ideal > of REST itself. It implies REST is about standardization of data > exchange formats within and across industries. It also is not a small > leap for people then to think that the data exchange formats are what > REST is all about. > > The name is just bad taste, and confusing. > > Your content in your message is improving, but your logo is still a > puzzling point. So it's not as simple as "_all_ you were hung up on was > the name". The bad name suggested, I think, to most of us that you were > coercing REST for marketing purposes. We're responding with a fair level > of consumer fear, uncertainty and doubt (as opposed to corporate FUD). > You're basically now burdening us with explaining to our COO why > "REST-*" he heard about in Delta Airlines In-flight Magazine isn't REST. > I am perfectly fine with consumer FUD. But if you do not want us to promote something that is not RESTful, then we're going to have to be engaged on a technical level either here on rest-discuss or on the google groups we've created at REST-*.org. We want our specs to be RESTful, but we're REST-noobs, we will get things wrong. We want our specs to be architecturally sound. FUDing us just because we're a big bad vendor, because we sell middleware, or because we're JBoss just isn't very constructive. We're not going away. Furthermore we launched an effort, not a product. We admitted from the beginning that what we had was old, raw, and unfinished. Our goal is to define RESTful middleware, not to define REST itself. Maybe not clearly stated at first, but I've at least refined the message on the website. 
Whether or not REST-* continues to be the name is still debatable, but whatever name it ends up being will have "REST" within it. You guys are just going to have to deal with it. If REST is positioned as a paradigm, as an idea, you can't say any one person, company, or organization cannot use it to promote whatever they are doing however good or bad. Imagine if the same tack was taken with the coined phrase Object-Oriented-Programming? Or even worse, as you say, it was trademarked? If it had, OOP wouldn't have been called OOP, it would have been called entirely something else. Roy's role is a good one. IMO, it is Roy's (and others') job to keep everybody focused. One could question his tactics, but personally I prefer abrasiveness and bluntness. Even though I was pretty demoralized by his initial comments, somebody, especially the creator, has to hold the banner for RESTful purity and bash people into submission as much as possible. But to say you cannot use REST the name (or REST the brand) if we are not pure, or even worse, because one or two of you don't agree with what somebody is doing is just completely unproductive. You have to let individuals, companies, and organizations come to terms with REST in their own way and at their own pace. As REST goes from the early adopters to mainstream, there is going to be confusion. People will get it completely and utterly wrong, but IMO, this is part of the process. For myself, I am completely open to learning what the "right" way is, but you're just not going to convince me there is no place for middleware or middleware services within REST. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Fri, 08 May 2009 18:09:32 -0400
Bill Burke <bburke@...> wrote:
> Let's say I have an Order resource in an ecommerce Order Entry system.
> How would I implement my service so that I can cancel an order rather
> than delete it? One is to have the cancel state as part of the
> order. Then I can just put a new representation with the cancelled
> state set to true:
>
> PUT /orders/333
> content-type: application/xml
>
> <order id="333">
> <cancelled>false</cancelled>
> ...
> </order>
>
> Seems kinda heavy to me.
>
> Would it still be restful to define a "cancelled" URI that you could
> put or post to to change the state?
>
Absolutely not. When you want to change the state of a resource, you
manipulate that resource -- you don't assign the operation to some
other URL. The proper way of doing this is what you started with, as
myself and others told you. But you are more concerned with calling
what you're doing REST by playing semantics with the terminology, than
you are with learning REST -- this is my opinion from dealing with you
on rest-discuss, and observing your responses to others on rest-discuss.
>
> /orders/333/cancelled
>
> or
>
> /orders/333?cancel=true
>
> You don't even need to send data to change the state in this
> scenario. But the problem with this from a pure RESTful standpoint
> is, isn't this a mini-RPC? My thought at first is YES IT IS....
>
Yes, it is RPC, and your first clue should be, "you don't even need to
send data to change the state..." In REST, state is changed by
manipulating representations of resources. Sending a POST to some
action-URL with no content in it, is the epitome of a non-RESTful
interaction.
>
> .... But, consider if you have cancelling as part of a HATEOAS
>
> <order id="333">
> <atom:link rel="CANCEL"
> href="http://example.com/orders/333/cancelled"/> ...
> </order>
>
>
> Now, I have a CANCEL link that if I follow changes the state of my
> resource. Doesn't seem so RPCish now that I've embedded it as a
> link. Maybe the answer is /orders/333/cancelled isn't very RESTful by
> itself, but when combined with HATEOAS it is?
>
No, providing a link to a resource doesn't make that resource RESTful.
Pretend I've just posted a link to a butt-ugly URL that obviously has
nothing to do with REST. Following the link to get to that butt-ugly
non-RESTful URL doesn't make that URL RESTful.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
On Sun, 10 May 2009 19:24:47 -0400
Bill Burke <bburke@...> wrote:
>
>
> Eric J. Bowman wrote:
> > Bill Burke wrote:
> >
> >> Seems kinda heavy to me.
> >>
> >
> > But that's the way it's done. ;-)
> >
> >> Now, I have a CANCEL link that if I follow changes the state of my
> >> resource. Doesn't seem so RPCish now that I've embedded it as a
> >> link. Maybe the answer is /orders/333/cancelled isn't very RESTful
> >> by itself, but when combined with HATEOAS it is?
> >>
> >
> > Linking to a procedure call, doesn't make that procedure call a REST
> > resource. What happens if you GET /cancelled?
>
> /orders/{id}/cancelled is a thing. It is a state. It either exists
> or doesn't exist. So, if you do a GET and the state exists:
>
> HTTP/1.1 204, No Content
>
> or even
>
> HTTP/1.1 405, Method Not Allowed
> Allow: PUT, DELETE
>
> If it doesn't exist:
>
> HTTP/1.1 404, Not Found
>
> or even
>
> HTTP/1.1 410, Gone
>
If you do a GET and the resource is a REST resource, you'll receive a
representation of the state of that resource. If you get nothing, you
aren't really using REST. It is certainly confusing to be told that a
resource doesn't exist or is gone via a 404 or a 410 response, when
that resource obviously must exist in order to accept a POST.
>
> > What is it a
> > representation of? The resource? Or some action, i.e. remote
> > procedure? If you aren't transferring representations of resources
> > in order to change their state, then you aren't using REST.
> >
>
> So you're saying a thing can't merely exist? It needs to have a
> representation? I don't think so.
>
No, I'm not saying that. I'm saying that in REST, resources have
representations. That's the whole point. Instead of listening or
asking questions, your response is defensive and argumentative -- this
is not a good path to go down to learn REST.
>
> I think I've just convinced my self that even without the <link> this
> is pretty restful.
>
And therein lies the problem. Myself and others were pointing out what
you were doing wrong, in a polite fashion by listing some criteria you
weren't meeting. Your response was to play semantics with that and
convince yourself that you know better about REST than others trying to
teach it to you. But as I think you've seen from the REST-* debacle,
you haven't convinced _others_ that what you come up with is RESTful.
>
> Damn the URLS are here:
>
> http://groups.google.com/group/reststar-messaging/web/submission-2-draft-restful-queue
> http://groups.google.com/group/reststar-messaging/web/submission-2-draft-restful-pub-sub
>
You're still not even close. How many more people need to tell you,
and how emphatically should they tell you, that your design patterns
are RPC, and not REST by any stretch?
>
> Since the state of the queue changes when reading this message, a
> POST should be performed.
>
No. There can be nothing more clear than the semantics of GET. To
read a message, GET the message. POST is not GET. GETting one
resource may very well cause another resource to be changed, no
biggie. To retrieve a representation of a resource in REST, the verb
is GET. Want a message from a queue? GET the message, don't POST to
the queue.
>
> Send--->
> POST /queues/myqueue/pollers
>
This is an RPC endpoint, not a REST resource. Once again, the issue is
what happens when you GET this resource? Does it return a
representation? No? Then you're not using REST.
>
> <---Response:
> HTTP/1.1 200 Ok
> Content-Location: /queues/myqueue/messages/3332222
> Content-Type: application/json
>
> <the consumed json message>
>
> If the response is successfully delivered to the client, then the
> message will be removed from the queue.
>
Why don't you leave that up to the client? Once the client GETs a
message, the client should verify that it was received intact by
comparing the message body to the Content-Md5 header. If it's intact,
then the client can DELETE the message or POST some sort of
representation to the queue, indicating which message may be removed --
the interaction I describe is driven by hypertext, i.e. HEAS.
Your interactions only work with foreknowledge of specific actions to
take on specific URIs as derived from a spec. Not through HEAS.
-Eric
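The verification step Eric describes can be sketched with the standard library; per RFC 2616 sec. 14.15, the Content-MD5 header carries the base64 encoding of the MD5 digest of the entity-body. The function name is made up, and the queue URI in the comment is the hypothetical one from this thread:

```python
import base64
import hashlib


def body_is_intact(body: bytes, content_md5: str) -> bool:
    """Compare the received entity-body against its Content-MD5 header
    (base64-encoded 128-bit MD5 digest, per RFC 2616 sec. 14.15)."""
    digest = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
    return digest == content_md5


body = b'{"msg": "the consumed json message"}'
header = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

assert body_is_intact(body, header)
assert not body_is_intact(body + b"corrupted", header)
# Only once the check passes would the client issue
# DELETE /queues/myqueue/messages/3332222 to acknowledge consumption.
```

The point of the GET-verify-DELETE shape is that removal becomes an explicit client decision, driven by the links and headers in the response, rather than a side effect of retrieval.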
Eric J. Bowman wrote: > On Fri, 08 May 2009 18:09:32 -0400 > Bill Burke <bburke@...> wrote: > >> Let's say I have an Order resource in a ecommerce Order Entry system. >> How would I implement my service so that I can cancel an order rather >> than delete it? One is to have the cancel state as part of the >> order. THen I can just put a new representation with the cancelled >> state set to true: >> >> PUT /orders/333 >> content-type: application/xml >> >> <order id="333"> >> <cancelled>false</cancelled> >> ... >> </order> >> >> Seems kinda heavy to me. >> >> Would it still be restful to define a "cancelled" URI that you could >> put or post to to change the state? >> > > Absolutely not. When you want to change the state of a resource, you > manipulate that resource -- you don't assign the operation to some > other URL. The proper way of doing this is what you started with, as > myself and others told you. But you are more concerned with calling > what you're doing REST by playing semantics with the terminology, than > you are with learning REST -- this is my opinion from dealing with you > on rest-discuss, and observing your responses to others on rest-discuss. > If you want, I can send you hundreds of other emails I've sent that you can use to discredit me. One particularly juicy one I sent a few years ago is where I described REST as "pretty" URLs. Ping me offline if you're interested. Seriously though, considering the plethora of different responses to this thread, a lot of people are performing similar thought exercises. >> /orders/333/cancelled >> >> or >> >> /orders/333?cancel=true >> >> You don't even need to send data to change the state in this >> scenario. But the problem with this from a pure RESTful standpoint >> is, isn't this a mini-RPC? My thought at first is YES IT IS.... >> > > Yes, it is RPC, and your first clue should be, "you don't even need to > send data to change the state..." 
In REST, state is changed by > manipulating representations of resources. Sending a POST to some > action-URL with no content in it, is the epitome of a non-RESTful > interaction. > >> .... But, consider if you have cancelling as part of a HATEOAS >> >> <order id="333"> >> <atom:link rel="CANCEL" >> href="http://example.com/orders/333/cancelled"/> ... >> </order> >> >> >> Now, I have a CANCEL link that if I follow changes the state of my >> resource. Doesn't seem so RPCish now that I've embedded it as a >> link. Maybe the answer is /orders/333/cancelled isn't very RESTful by >> itself, but when combined with HATEOAS it is? >> > > No, providing a link to a resource doesn't make that resource RESTful. > Pretend I've just posted a link to a butt-ugly URL that obviously has > nothing to do with REST. Following the link to get to that butt-ugly > non-RESTful URL doesn't make that URL RESTful. > Well, consider a different scenario that I had posted earlier. You are modeling a data cache. Adding and retrieving data and representations from the cache is pretty easy to model restfully. But consider the act of purging a cache or running some kind of eviction policy. While purging a cache changes the state of the cache, it is in and of itself *not* state of the cache. Pure operations do exist within applications. Furthermore, if links can't be mechanisms to modify the state of a resource, you've pretty much discounted half of the Web itself. I guarantee you 90% of Web applications out there that are accepting any kind of input are receiving that input via a Form and through an action URL. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Between the options discussed in this thread below, neither is wrong but both options are simplistic. > > PUT /orders/333 > > content-type: application/xml > > > > <order id="333"> > > <cancelled>false</cancelled> > > ... > > </order> > > > > /orders/333/cancelled > > > > or > > > > /orders/333?cancel=true In reality, canceling an order by simply flipping a switch to true or false via (PUT + resource URI) or a (POST + some other URI) does not cut it. There may be some complex business process that may govern order cancellation. Using POST gives the server an opportunity to provide a decent abstraction for the order cancellation process provided that the body of the representation for POST includes information about the cancellation request. It is okay to argue that the first is RESTful while the second is not, but it does not help very much. There are two competing requirements here. One is visibility, and the other is separation of concerns. Using PUT maintains visibility, but will most likely force the client to know who/when it is valid to flip the flag to true or false. Using POST reduces visibility, but maintains a cleaner separation of concerns. A lot of problems being discussed on this thread often come down to this point. You can have absolute visibility with poor separation of concerns, or partial visibility with better separation of concerns. Since these are networked/distributed systems, I would go with the latter. Subbu
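The trade-off can be made concrete with two hypothetical server-side handlers (all field names and business rules below are invented for illustration): with PUT the client replaces the whole representation and must itself know which transitions are legal; with POST the client submits a cancellation request and the server applies the rules behind one abstraction.

```python
def handle_put(order: dict, representation: dict) -> dict:
    """PUT /orders/333 -- fully visible: the new state is the message.
    The client carries the burden of knowing which transitions are legal."""
    if representation.get("cancelled") and order.get("status") == "shipped":
        raise ValueError("409 Conflict: a shipped order cannot be cancelled")
    return dict(representation)


def handle_cancellation(order: dict, request: dict) -> dict:
    """POST a cancellation request to /orders/333 -- less visible, but
    refund/restock/notification rules stay behind the server boundary."""
    if order.get("status") == "shipped":
        raise ValueError("409 Conflict: a shipped order cannot be cancelled")
    updated = dict(order)
    updated["status"] = "cancelled"
    updated["reason"] = request.get("reason", "")
    return updated


order = {"id": 333, "status": "open"}
result = handle_cancellation(order, {"reason": "ordered twice"})
```

Either way the constraint lives on the server; the question Subbu raises is only how much of the transition logic the client is forced to replicate.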
Al-
Thanks -- I'm very glad to hear that. I will definitely take another look.
--peter keane
On Sat, Sep 19, 2009 at 11:35 AM, Al Brown <albertcbrown@...> wrote:
> Roy's article was based on initial contribution before oasis.
>
> In the past year the spec has changed a lot with help from feedback from
> Roy, the atom lists as well as the tc members.
>
> cmis is preparing for public review so if you still feel some concerns are
> still valid, please express them to the cmis tc.
>
> Al
> Sent from BlackBerry.
> ------------------------------
>
> * From: *Peter Keane [pkeane@...exas.edu]
> * Sent: *09/18/2009 08:16 PM EST
> * To: *Sebastien Lambla <seb@...>
> * Cc: *Bill Burke <bburke@...>; Noah Campbell <
> noahcampbell@...>; Chuck Hinson <chuck.hinson@...>; Rest List
> <rest-discuss@yahoogroups.com>
>
> * Subject: *Re: [rest-discuss] Avoid envelope formats
>
>
>
>
>
> On Fri, Sep 18, 2009 at 7:25 PM, Sebastien Lambla <seb@...>wrote:
>
>> But using AtomPub as the next trendy way to encapsulate any kind of data
>> is ludicrous. AtomPub is good for document exchange, where mapping between a
>> document and well known semantics of such a document in a UA is worth it,
>> but using it as I see it these days, to fetch data access, contact sync etc,
>> is ludicrous.
>>
>>
>>
> Sorry -- that's unfair. There are numerous successful implementations of
> AtomPub used in all sorts on non-blogging contexts for which it is perfectly
> suitable. There are some who refuse to admit that it has any use outside of
> updating a blog, but I'd wholeheartedly disagree. (Again, read Bill
> DeHora's piece). And since when is AtomPub trendy?? ("Trendy" seems to be
> the trendy put down du jour).
>
> I don't know where you "see it these days" that you find ludicrous. As I
> said, there are plenty of bad ways to use Atom. But in my experience Atom
> is *way* underused -- folks prefering simple "custom xml"
> (non-standardized) , or impenetrable, impossible-to-validate JSON in cases
> when Atom would be perfectly suitable.
>
>
>> The people that are pushing AtomPub as the answer to all of our problems
>> are, unsurprisingly, the same guys that said soap envelopes would solve
>> world hunger.
>>
>
> I'm pushing AtomPub, for sure (I've never had even the slightest interest
> in SOAP). We've had incredible good luck with it (we use is as the
> interface to a large, widely-used Digital Object repository at UT Austin).
> It's allowed us to grow and maintain our system with a very small staff,
> train student developers with a modicum of programming experience to build
> incredibly media-rich web sites, and give a stable back-end for contract
> programmers building higher-end content management applications.
>
> The though occurs -- if you are referring to the CMIS effort (which uses
> Atom/AtomPub), I'd agree wholeheartedly. Roy F's frank take on that effort
> is at http://roy.gbiv.com/untangled/tag/cmis . It was, last I looked, a
> really, really poor (mis)use of Atom (and betrays a half-hearted-at-best
> attempt to be RESTful). To suggest in any way that this is a failing of
> Atom/AtomPub is way off base.
>
>> It'd be great if people stopped using specialized app protocols in the
>> name of "framework reusability". That's exactly what got us into the SOAP
>> mess, and exactly where some vendors (*cough* Microsoft **cough**) are
>> getting to.
>>
>>
>>
> Sorry, I've lost your point here. What I would suggest is that anyone
> interested in understanding REST could do much worse than starting with RFC
> 5023 http://bitworking.org/projects/atom/rfc5023.html (Atom Publishing
> Protocol). I'm *not* saying that you go and use Atom for every need (in
> fact, if you grok the spec you'd be much less likely to do that). But as I
> said before, if you can't or won't try to understand AtomPub, you probably
> can't or won't really understand REST.
>
> --peter keane
>
>
>> Sad because yet again, we're hitting the architects' intellectual
>> masturbation of framework reuse.
>>
>>
>>
>>
>>
>>
>>
>> *From:* rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
>> *On Behalf Of *Peter Keane
>> *Sent:* 18 September 2009 22:00
>> *To:* Bill Burke
>> *Cc:* Noah Campbell; Chuck Hinson; Rest List
>> *Subject:* Re: [rest-discuss] Avoid envelope formats
>>
>>
>>
>>
>>
>> I'd suggest that you give
>> http://www.dehora.net/journal/2008/10/07/magnificent-seven-the-value-of-atom/
>> another read (it being addressed to you, in fact).
>> That a site purporting to be about REST recommends *against* Atom is
>> simply ludicrous. Of course there are "wrong" ways to use Atom (as with
>> anything), but AtomPub protocol is the best, most concise and
>> well-considered example we have of a RESTful protocol. Misunderstanding
>> AtomPub means you misunderstand REST. I'm with the folks who suggest that
>> this effort not include "REST" in the name.
>>
>> --peter keane
>>
>> On Fri, Sep 18, 2009 at 2:56 PM, Bill Burke <bburke@...> wrote:
>>
>>
>>
>>
>>
>> Noah Campbell wrote:
>> > I'm not sure how Atom got lumped in there except for the fact that
>> > content can be stuffed into the entire document feed instead of relying
>> > on links to the content.
>> >
>>
>> Atom was lumped in because I see people using it to exchange messages
>> between applications for no other reason other than the hype of the
>> protocol itself.
>>
>> I did prototype a few things with Atom when I added support for it
>> within RESTEasy. For doing the types of applications I'm used to doing,
>> Atom just got in the way. It made more sense to leverage HTTP.
>>
>> Even with links within Atom, you end up screaming "I just want the
>> bleepin message!". Yeah, sure you could have a framework that hides
>> that you're sending Atom around to make things easier for you, but that
>> is an anti-pattern in and of itself.
>>
>> Also, I didn't make this decision lightly. For an analogy I was very
>> skeptical of REST at first, the more I read about it the more I was
>> convinced it was the right approach for many things. I've had quite the
>> opposite experience with Atom.
>>
>> --
>> Bill Burke
>> JBoss, a division of Red Hat
>> http://bill.burkecentral.com
>>
>>
>>
>>
>>
>>
>
>
Subbu Allamaraju wrote:
>
> In reality, canceling an order by simply flipping a switch to true
> or false via (PUT + resource URI) or a (POST + some other URI) does
> not cut it. There may be some complex business process that may
> govern order cancellation. Using POST gives the server an opportunity
> to provide a decent abstraction for the order cancellation process
> provided that the body of the representation for POST includes
> information about the cancellation request.
>

The key point being, "provided that the body of the representation for
POST includes information about the cancellation request."

>
> It is okay to argue that the first is RESTful while the second is
> not, but it does not help very much. There are two competing
> requirements here. One is visibility, and the other is separation of
> concerns. Using PUT maintains visibility, but will most likely force
> the client to know who/when it is valid to flip the flag to true or
> false. Using POST reduces visibility, but maintains a cleaner
> separation of concerns.
>

I think it helps a lot, for the very reason you mentioned. The second
option is just plain wrong because there's no HEAS involved. Which is
not to say that the first option is the only way to go -- there are
some ideas in this thread that could be turned into a RESTful
interaction, like /cancelled/333, but they aren't fleshed out.

>
> A lot of problems being discussed on this thread often come down to
> this point. You can have absolute visibility with poor separation of
> concerns, or partial visibility with better separation of concerns.
> Since these are networked/distributed systems, I would go with the
> latter.
>

I don't follow, Subbu. How does the first approach fail re: separation
of concerns? Also, why isn't the full visibility of a RESTful approach
favorable in a distributed system?

-Eric
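[A sketch of Subbu's preferred option, added for illustration: a POST whose body carries information about the cancellation request. The URI and XML vocabulary below are made up for the example, not taken from the thread.]

```http
POST /orders/333/cancellation HTTP/1.1
Host: example.org
Content-Type: application/xml

<cancellation-request>
  <reason>ordered by mistake</reason>
</cancellation-request>
```

The trade-off Subbu describes is visible here: an intermediary can no longer tell from the method and URI alone what changed about the order, but the server is free to run whatever cancellation process it likes behind that one request.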
Bill Burke wrote:
>
> Eric J. Bowman wrote:
> > On Fri, 08 May 2009 18:09:32 -0400
> > Bill Burke <bburke@...> wrote:
> >
> >> Let's say I have an Order resource in an ecommerce Order Entry
> >> system. How would I implement my service so that I can cancel an
> >> order rather than delete it? One is to have the cancel state as
> >> part of the order. Then I can just put a new representation with
> >> the cancelled state set to true:
> >>
> >> PUT /orders/333
> >> content-type: application/xml
> >>
> >> <order id="333">
> >> <cancelled>false</cancelled>
> >> ...
> >> </order>
> >>
> >> Seems kinda heavy to me.
> >>
> >> Would it still be restful to define a "cancelled" URI that you
> >> could put or post to to change the state?
> >>
> >
> > Absolutely not. When you want to change the state of a resource,
> > you manipulate that resource -- you don't assign the operation to
> > some other URL. The proper way of doing this is what you started
> > with, as myself and others told you. But you are more concerned
> > with calling what you're doing REST by playing semantics with the
> > terminology, than you are with learning REST -- this is my opinion
> > from dealing with you on rest-discuss, and observing your responses
> > to others on rest-discuss.
> >
>
> If you want, I can send you hundreds of other emails I've sent that
> you can use to discredit me. One particularly juicy one I sent a few
> years ago is where I described REST as "pretty" URLs. Ping me
> offline if you're interested.
>

I'm not interested in discrediting you. I dredged up this thread
because you're still making the same fundamental mistake to this day --
not because I'm looking to embarrass you. Everyone here but Roy has
posted stuff that they're embarrassed by, in retrospect, after learning
better, I'm sure.

I've been very polite in pointing out that your solutions are RPC, not
REST, in an effort to help you learn -- because it was obvious you
needed some help.
Your response was argumentative and defensive, so I let this thread
drop, figuring it didn't matter that you weren't interested in learning
from your mistakes. Well, my attitude there changed with the
introduction of REST-*. Now it does matter.

I'm here to learn what I don't know, and teach what I do know. Is your
purpose here only to sow unREST? Your other message this morning seems
to indicate that you're going to go on calling what you do REST,
whether it is or not. Bumping this thread is a precursor to my
response to that other post, precipitated by that other post.

>
> Seriously though, considering the plethora of different responses to
> this thread, a lot of people are performing similar thought exercises.
>

Yes, there certainly is a lot of confusion out there surrounding REST.
The fact that some people agreed with your approach doesn't amount to
some sort of consensus. REST isn't so abstract that there's no wrong
answer. If you're here to learn, then you need to accept (and question)
what you're being told, not keep insisting that you're right.

>
> > No, providing a link to a resource doesn't make that resource
> > RESTful. Pretend I've just posted a link to a butt-ugly URL that
> > obviously has nothing to do with REST. Following the link to get
> > to that butt-ugly non-RESTful URL doesn't make that URL RESTful.
> >
>
> Well consider a different scenario that I had posted earlier. You
> are modeling a data cache. Adding and retrieving data and
> representations from the cache is pretty easy to model restfully.
> But consider the act of purging a cache or running some kind of
> eviction policy. While purging a cache changes the state of the
> cache, it is in and of itself *not* state of the cache. Pure
> operations do exist within applications.
>

Sure they do. But in REST, these operations are abstracted to fit the
Uniform Interface, not turned into an endpoint for RPC operations.
My assessment of your character would improve considerably if you could
accept that just because a URI is linked to, doesn't make it a REST
resource. This is a fundamental truth, not a matter open for debate.

>
> Furthermore, if links can't be mechanisms to modify the state of a
> resource, you've pretty much discounted half of the Web itself. I
> guarantee you 90% of Web applications out there that are accepting
> any kind of input are receiving that input via a Form and through an
> action URL.
>

Who said links can't be such a mechanism? Also, who said that 90% of
Web applications are RESTful? Or that using HTML forms is unRESTful?
Why is your response to my magnanimous assistance, to put words in my
mouth as a means of arguing against what I have to say instead of
trying to learn from it, or asking me to elaborate? You're the noob,
I'm the guy who's been learning and applying REST for 11 years. You
might consider the possibility that I actually know a good deal about
what I'm talking about. Folks might be more comfortable with REST-* if
you were to show an interest in accepting the help you've been offered
here, instead of turning every issue into some semantic explanation of
why your way is right.

Anyway,

If I perform a GET against a URI, and receive a representation with a
form allowing various parts of the resource to be changed, change some
data, and submit the form as a POST to the same URI, then I've just
modified the state of a resource, driven by hypermedia, by transferring
a representation of the resource I received back to the server. The
submitted data may be sent either as an entity of some media-type, or
as a query string of name-value pairs as
'application/x-www-form-urlencoded'. Perfectly RESTful, nothing wrong
with query strings despite what many people say.

But, if that POST contains no content and/or is directed at some other
resource instead of the one I did a GET on, then it may well be
hypermedia-driven, but it isn't REST, as REST has more constraints than
just HEAS.

-Eric
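[A wire-level sketch of the form-driven interaction Eric describes, added for illustration; the URI and the minimal form markup are hypothetical. The client GETs the order, receives a form whose action is the same URI, and POSTs the changed data back:]

```http
GET /orders/333 HTTP/1.1
Host: example.org

HTTP/1.1 200 OK
Content-Type: application/xhtml+xml

<form method="post" action="/orders/333">
  <input name="cancelled" value="false"/>
  ...
</form>

POST /orders/333 HTTP/1.1
Host: example.org
Content-Type: application/x-www-form-urlencoded

cancelled=true
```

Because the POST carries a representation and targets the resource the client just retrieved, it meets the constraints Eric lists; an empty POST aimed at some other URI would not.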
On Sat, Sep 19, 2009 at 1:39 PM, Eric J. Bowman <eric@...> wrote: > > > > Bill Burke wrote: > > > > > Eric J. Bowman wrote: > > > On Fri, 08 May 2009 18:09:32 -0400 > > > Bill Burke <bburke@...> wrote: > > > > > >> Let's say I have an Order resource in a ecommerce Order Entry > > >> system. How would I implement my service so that I can cancel an > > >> order rather than delete it? One is to have the cancel state as > > >> part of the order. THen I can just put a new representation with > > >> the cancelled state set to true: > > >> > > >> PUT /orders/333 > > >> content-type: application/xml > > >> > > >> <order id="333"> > > >> <cancelled>false</cancelled> > > >> ... > > >> </order> > > >> > > >> Seems kinda heavy to me. > > >> > > >> Would it still be restful to define a "cancelled" URI that you > > >> could put or post to to change the state? > > >> > > > > > > Absolutely not. When you want to change the state of a resource, > > > you manipulate that resource -- you don't assign the operation to > > > some other URL. The proper way of doing this is what you started > > > with, as myself and others told you. But you are more concerned > > > with calling what you're doing REST by playing semantics with the > > > terminology, than you are with learning REST -- this is my opinion > > > from dealing with you on rest-discuss, and observing your responses > > > to others on rest-discuss. > > > > > > > If you want, I can send you hundreds of other emails I've sent that > > you can use to discredit me. One particularly juicy one I sent a few > > years ago is where I described REST as "pretty" URLs. Ping me > > offline if you're interested. > > > > I'm not interested in discrediting you. I dredged up this thread > because you're still making the same fundamental mistake to this day -- > not because I'm looking to embarass you. Everyone here but Roy has > posted stuff that they're embarassed by, in retrospect, after learning > better, I'm sure. 
>
> I've been very polite in pointing out that your solutions are RPC, not
> REST, in an effort to help you learn -- because it was obvious you
> needed some help. Your response was argumentative and defensive, so I
> let this thread drop, figuring it didn't matter that you weren't
> interested in learning from your mistakes. Well, my attitude there
> changed with the introduction of REST-*. Now it does matter.
>
> I'm here to learn what I don't know, and teach what I do know. Is your
> purpose here only to sow unREST? Your other message this morning seems
> to indicate that you're going to go on calling what you do REST,
> whether it is or not. Bumping this thread is a precursor to my
> response to that other post, precipitated by that other post.
>
> >
> > Seriously though, considering the plethora of different responses to
> > this thread, a lot of people are performing similar thought exercises.
> >
>
> Yes, there certainly is a lot of confusion out there surrounding REST.
> The fact that some people agreed with your approach doesn't amount to
> some sort of consensus. REST isn't so abstract that there's no wrong
> answer. If you're here to learn, then you need to accept (and question)
> what you're being told, not keep insisting that you're right.

I agree w/ Eric. In case it's of some use, I find this particularly
useful:

http://rajith.2rlabs.com/2007/11/14/the-value-of-principled-design-rest-is-just-one-example/

REST is a *specific* set of constraints -- just because some alternate
approach is good or useful or practical does *not* mean it is still
REST, and you simply should go around calling RPC interactions
"REST-*." In addition, REST is imbued with "principled design"
(illustrated perfectly by Subbu & Eric's exchange on this thread) --
balancing specific constraints with specific resulting
characteristics. That's what makes REST and the community around it
so compelling to me.
It also makes it particularly difficult to "market," which is what the
REST-* effort seems to be about.

-- peter keane

> >
> > > No, providing a link to a resource doesn't make that resource
> > > RESTful. Pretend I've just posted a link to a butt-ugly URL that
> > > obviously has nothing to do with REST. Following the link to get
> > > to that butt-ugly non-RESTful URL doesn't make that URL RESTful.
> > >
> >
> > Well consider a different scenario that I had posted earlier. You
> > are modeling a data cache. Adding and retrieving data and
> > representations from the cache is pretty easy to model restfully.
> > But consider the act of purging a cache or running some kind of
> > eviction policy. While purging a cache changes the state of the
> > cache, it is in and of itself *not* state of the cache. Pure
> > operations do exist within applications.
> >
>
> Sure they do. But in REST, these operations are abstracted to fit the
> Uniform Interface, not turned into an endpoint for RPC operations. My
> assessment of your character would improve considerably if you could
> accept that just because a URI is linked to, doesn't make it a REST
> resource. This is a fundamental truth, not a matter open for debate.
>
> >
> > Furthermore, if links can't be mechanisms to modify the state of a
> > resource, you've pretty much discounted half of the Web itself. I
> > guarantee you 90% of Web applications out there that are accepting
> > any kind of input are receiving that input via a Form and through an
> > action URL.
> >
>
> Who said links can't be such a mechanism? Also, who said that 90% of
> Web applications are RESTful? Or that using HTML forms is unRESTful?
> Why is your response to my magnanimous assistance, to put words in my
> mouth as a means of arguing against what I have to say instead of
> trying to learn from it, or asking me to elaborate? You're the noob,
> I'm the guy who's been learning and applying REST for 11 years. You
> might consider the possibility that I actually know a good deal about
> what I'm talking about. Folks might be more comfortable with REST-* if
> you were to show an interest in accepting the help you've been offered
> here, instead of turning every issue into some semantic explanation of
> why your way is right.
>
> Anyway,
>
> If I perform a GET against a URI, and receive a representation with a
> form allowing various parts of the resource to be changed, change some
> data, and submit the form as a POST to the same URI, then I've just
> modified the state of a resource, driven by hypermedia, by transferring
> a representation of the resource I received back to the server. The
> submitted data may be sent either as an entity of some media-type, or
> as a query string of name-value pairs as
> 'application/x-www-form-urlencoded'. Perfectly RESTful, nothing wrong
> with query strings despite what many people say.
>
> But, if that POST contains no content and/or is directed at some other
> resource instead of the one I did a GET on, then it may well be
> hypermedia-driven, but it isn't REST, as REST has more constraints than
> just HEAS.
>
> -Eric
>
On Sat, Sep 19, 2009 at 2:14 PM, Peter Keane <pkeane@...> wrote:
> On Sat, Sep 19, 2009 at 1:39 PM, Eric J. Bowman <eric@...> wrote:
>>
>> Bill Burke wrote:
>> >
>> > Eric J. Bowman wrote:
>> > > On Fri, 08 May 2009 18:09:32 -0400
>> > > Bill Burke <bburke@...> wrote:
>> > >
>> > >> Let's say I have an Order resource in an ecommerce Order Entry
>> > >> system. How would I implement my service so that I can cancel an
>> > >> order rather than delete it? One is to have the cancel state as
>> > >> part of the order. Then I can just put a new representation with
>> > >> the cancelled state set to true:
>> > >>
>> > >> PUT /orders/333
>> > >> content-type: application/xml
>> > >>
>> > >> <order id="333">
>> > >> <cancelled>false</cancelled>
>> > >> ...
>> > >> </order>
>> > >>
>> > >> Seems kinda heavy to me.
>> > >>
>> > >> Would it still be restful to define a "cancelled" URI that you
>> > >> could put or post to to change the state?
>> > >>
>> > >
>> > > Absolutely not. When you want to change the state of a resource,
>> > > you manipulate that resource -- you don't assign the operation to
>> > > some other URL. The proper way of doing this is what you started
>> > > with, as myself and others told you. But you are more concerned
>> > > with calling what you're doing REST by playing semantics with the
>> > > terminology, than you are with learning REST -- this is my opinion
>> > > from dealing with you on rest-discuss, and observing your responses
>> > > to others on rest-discuss.
>> > >
>> >
>> > If you want, I can send you hundreds of other emails I've sent that
>> > you can use to discredit me. One particularly juicy one I sent a few
>> > years ago is where I described REST as "pretty" URLs. Ping me
>> > offline if you're interested.
>> >
>>
>> I'm not interested in discrediting you. I dredged up this thread
>> because you're still making the same fundamental mistake to this day --
>> not because I'm looking to embarrass you. Everyone here but Roy has
>> posted stuff that they're embarrassed by, in retrospect, after learning
>> better, I'm sure.
>>
>> I've been very polite in pointing out that your solutions are RPC, not
>> REST, in an effort to help you learn -- because it was obvious you
>> needed some help. Your response was argumentative and defensive, so I
>> let this thread drop, figuring it didn't matter that you weren't
>> interested in learning from your mistakes. Well, my attitude there
>> changed with the introduction of REST-*. Now it does matter.
>>
>> I'm here to learn what I don't know, and teach what I do know. Is your
>> purpose here only to sow unREST? Your other message this morning seems
>> to indicate that you're going to go on calling what you do REST,
>> whether it is or not. Bumping this thread is a precursor to my
>> response to that other post, precipitated by that other post.
>>
>> >
>> > Seriously though, considering the plethora of different responses to
>> > this thread, a lot of people are performing similar thought exercises.
>> >
>>
>> Yes, there certainly is a lot of confusion out there surrounding REST.
>> The fact that some people agreed with your approach doesn't amount to
>> some sort of consensus. REST isn't so abstract that there's no wrong
>> answer. If you're here to learn, then you need to accept (and question)
>> what you're being told, not keep insisting that you're right.
>
> I agree w/ Eric. In case it's of some use, I find this particularly useful:
>
> http://rajith.2rlabs.com/2007/11/14/the-value-of-principled-design-rest-is-just-one-example/
>
> REST is a *specific* set of constraints -- just because some alternate
> approach is good or useful or practical does *not* mean it is still
> REST, and you simply should go around calling RPC interactions
> "REST-*."

(obviously, should be: "you simply should *not* go around calling RPC
interactions REST.")

> In addition, REST is imbued with "principled design"
> (illustrated perfectly by Subbu & Eric's exchange on this thread) --
> balancing specific constraints with specific resulting
> characteristics. That's what makes REST and the community around it
> so compelling to me. It also makes it particularly difficult to
> "market," which is what the REST-* effort seems to be about.
>
> -- peter keane
>
>>
>> >
>> > > No, providing a link to a resource doesn't make that resource
>> > > RESTful. Pretend I've just posted a link to a butt-ugly URL that
>> > > obviously has nothing to do with REST. Following the link to get
>> > > to that butt-ugly non-RESTful URL doesn't make that URL RESTful.
>> > >
>> >
>> > Well consider a different scenario that I had posted earlier. You
>> > are modeling a data cache. Adding and retrieving data and
>> > representations from the cache is pretty easy to model restfully.
>> > But consider the act of purging a cache or running some kind of
>> > eviction policy. While purging a cache changes the state of the
>> > cache, it is in and of itself *not* state of the cache. Pure
>> > operations do exist within applications.
>> >
>>
>> Sure they do. But in REST, these operations are abstracted to fit the
>> Uniform Interface, not turned into an endpoint for RPC operations. My
>> assessment of your character would improve considerably if you could
>> accept that just because a URI is linked to, doesn't make it a REST
>> resource. This is a fundamental truth, not a matter open for debate.
>>
>> >
>> > Furthermore, if links can't be mechanisms to modify the state of a
>> > resource, you've pretty much discounted half of the Web itself. I
>> > guarantee you 90% of Web applications out there that are accepting
>> > any kind of input are receiving that input via a Form and through an
>> > action URL.
>> >
>>
>> Who said links can't be such a mechanism? Also, who said that 90% of
>> Web applications are RESTful? Or that using HTML forms is unRESTful?
>> Why is your response to my magnanimous assistance, to put words in my
>> mouth as a means of arguing against what I have to say instead of
>> trying to learn from it, or asking me to elaborate? You're the noob,
>> I'm the guy who's been learning and applying REST for 11 years. You
>> might consider the possibility that I actually know a good deal about
>> what I'm talking about. Folks might be more comfortable with REST-* if
>> you were to show an interest in accepting the help you've been offered
>> here, instead of turning every issue into some semantic explanation of
>> why your way is right.
>>
>> Anyway,
>>
>> If I perform a GET against a URI, and receive a representation with a
>> form allowing various parts of the resource to be changed, change some
>> data, and submit the form as a POST to the same URI, then I've just
>> modified the state of a resource, driven by hypermedia, by transferring
>> a representation of the resource I received back to the server. The
>> submitted data may be sent either as an entity of some media-type, or
>> as a query string of name-value pairs as
>> 'application/x-www-form-urlencoded'. Perfectly RESTful, nothing wrong
>> with query strings despite what many people say.
>>
>> But, if that POST contains no content and/or is directed at some other
>> resource instead of the one I did a GET on, then it may well be
>> hypermedia-driven, but it isn't REST, as REST has more constraints than
>> just HEAS.
>>
>> -Eric
>>
* Jan Vincent <jvliwanag@...> [2009-09-19 00:35]:
> I was wondering how to do a PUT/POST/DELETE call on a resource
> specifying the precondition that it doesn't exist in the first
> place.

I can't think of a mechanism in HTTP to do that, right now.

> Should non-existing resources (those which return a 404 upon
> GET/HEAD) specify some ETag as well then use it on the former
> calls?

An entity tag for an entity that doesn't exist doesn't seem to make
much sense.

> Moreover, upon a successful DELETE, should I issue another ETag
> as well?

There's nothing there to which the ETag could apply.

> As a side question, is it ok to use DELETE when it 'clears
> a list'. Say, I want to clear my shopping cart, so I delete it.
> However, when I GET it later on, it simply says that it's empty
> rather than it doesn't exist.

No. Don't overload the meaning of verbs to use them for "something
similar". DELETE has narrow semantics; if your use does not preserve
them, then you should not be using DELETE.

> The reason why I don't use PUT is that I don't want to allow
> direct modification as a result of some assertion to the state
> of the resource other than clearing it.

There is no reason you have to accept all PUTs. HTTP is not a
filesystem. Off the top of my head I would say you could implement
this as a PUT that returns 409 for (at least semantically) non-empty
bodies (except the ones that would be no-ops, possibly).

----

All in all it sounds to me like you are trying to model your problem
in a way that goes somewhat against the grain of HTTP. What is it you
are actually trying to do?

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Sep 19, 2009, at 3:17 PM, Bill Burke wrote:

>
> Furthermore we launched an effort, not a product. We admitted from the
> beginning that what we had was old, raw, and unfinished. Our goal is to
> define RESTful middleware,

I am still wondering what it is you think needs to be defined as part
of your effort?

"RESTful middleware" is a contradiction in itself; if you'd build a
system using Web architecture you simply need no middleware because it
is already globally deployed in the form of TCP/IP, DNS,
Intermediaries...

Or would you say that middleware is needed when you perform a shopping
session at Amazon?

To repeat the question (so it does not get lost): What is it you think
is missing in order to apply REST to enterprise IT (and therefore
should be defined as part of your effort)?

Jan

> not to define REST itself. Maybe not clearly
> stated at first, but I've at least refined the message on the website.
>
> Whether or not REST-* continues to be the name is still debatable, but
> whatever name it ends up being will have "REST" within it. You guys are
> just going to have to deal with it. If REST is positioned as a
> paradigm, as an idea, you can't say any one person, company, or
> organization cannot use it to promote whatever they are doing however
> good or bad. Imagine if the same tact was taken with the coined phrase
> Object-Oriented-Programming? Or even worse, as you say, it was
> trademarked? If it had, OOP wouldn't have been called OOP, it would
> have been called entirely something else.
>
> Roy's role is a good one. IMO, it is Roy's (and other's) job to keep
> everybody focused. One could question his tactics, but personally I
> prefer abrasiveness and bluntness. Even though I was pretty demoralized
> by his initial comments, somebody, especially the creator, has to hold
> the banner for RESTful purity and bash people into submission as much
> as possible. But to say you cannot use REST the name (or REST the
> brand) if we are not pure, or even worse, because one or two of you
> don't agree with what somebody is doing is just completely
> unproductive. You have to let individuals, companies, and organizations
> come to terms with REST in their own way and at their own pace. As REST
> goes from the early adopters to mainstream, there is going to be
> confusion. People will get it completely and utterly wrong, but IMO,
> this is part of the process. For myself, I am completely open to
> learning what the "right" way is, but you're just not going to convince
> me there is no place for middleware or middleware services within REST.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Aristotle Pagaltzis wrote:
> * Jan Vincent <jvliwanag@...> [2009-09-19 00:35]:
> > I was wondering how to do a PUT/POST/DELETE call on a resource
> > specifying the precondition that it doesn't exist in the first
> > place.

If the server guarantees that every variant of the resource will have
an ETag, then "If-None-Match: *" will do what you want.

Regards,
Brian
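[A wire-level sketch of Brian's suggestion, added for illustration with a hypothetical URI: a create-only PUT, where "If-None-Match: *" asks the server to apply the request only if no current representation of the resource exists.]

```http
PUT /carts/42 HTTP/1.1
Host: example.org
If-None-Match: *
Content-Type: application/xml

<cart/>
```

If a representation already exists, the server answers 412 Precondition
Failed instead of replacing it, which is exactly the "only if it doesn't
exist yet" precondition Jan was asking for.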
This presentation has a segment where they discuss their successful use of media types for versioning of information being passed between systems within a financial trading company. http://www.infoq.com/presentations/restful-financial-systems-integration
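[The versioning technique discussed in that presentation generally rides on content negotiation over vendor media types; a sketch, with a media type name made up for illustration:]

```http
GET /trades/8812 HTTP/1.1
Host: example.org
Accept: application/vnd.example.trade-v2+xml

HTTP/1.1 200 OK
Content-Type: application/vnd.example.trade-v2+xml

<trade>...</trade>
```

Clients that only understand the older format keep requesting the v1
media type in Accept, so both generations can be served from the same
URI without breaking anyone.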
On Sat, Sep 19, 2009 at 4:23 PM, Aristotle Pagaltzis <pagaltzis@...> wrote:
>> As a side question, is it ok to use DELETE when it 'clears
>> a list'. Say, I want to clear my shopping cart, so I delete it.
>> However, when I GET it later on, it simply says that it's empty
>> rather than it doesn't exist.
>
> No. Don't overload the meaning of verbs to use them for
> "something similar". DELETE has narrow semantics; if your use
> does not preserve them, then you should not be using DELETE.

He's not overloading the meaning; DELETE requests the resource's
representations be removed, and the server does that so is able to
respond 2xx to it. Nothing says the server can't then immediately -
or at any time of its choosing - make new representations of that
resource available.

Mark.
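[Mark's reading, sketched on the wire with a hypothetical cart URI: the DELETE is honored and answered 2xx, and nothing stops the server from later serving a fresh, empty representation at the same URI.]

```http
DELETE /carts/42 HTTP/1.1
Host: example.org

HTTP/1.1 204 No Content
```

followed some time later by:

```http
GET /carts/42 HTTP/1.1
Host: example.org

HTTP/1.1 200 OK
Content-Type: application/xml

<cart/>
```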
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Sep 19, 2009, at 3:17 PM, Bill Burke wrote: > > > > > Futhermore we launched an effort, not a product. We admitted from the > > beginning that what we had was old, raw, and unfinished. Our goal > > is to > > define RESTful middleware, > > I am still wondering what it is you think needs to be defined as part > of your effort? > > "RESTful middleware" is a contradiction in itself; if you'd build a > system using Web architecture you simple need no middleware because it > is already globally deployed in the form of TCP/IP, DNS, > Intermediaries... > > Or would you say that middleware is needed when you perform a shopping > session at Amazon? > > To repeat the question (so it does not get lost): What is it you think > is missing in order to apply REST to enterprise IT (and therefore > should be defined as part of your effort)? > > > Jan > > > > > > not to define REST itself. Maybe not clearly > > stated at first, but I've at least refined the message on the website. > > > > Whether or not REST-* continues to be the name is still debatable, but > > whatever name it ends up being will have "REST" within it. You guys > > are > > just going to have to deal with it. If REST is positioned as a > > paradigm, as an idea, you can't say any one person, company, or > > organization cannot use it to promote whatever they are doing however > > good or bad. Let me know what your charitable contribution to the Derek Zoolander School for Kids Who Can't Read Good is... I'm not stopping you, but just telling you "this reeks of misleading jerk". The other day some guy tried to convince our sound guy at church we should be $5,000 more "because we provide service", and apparently intimated we were stupid. Yeah, thanks dude, like I really need your "service" to show me how to plug in a cable. Same principal applies here. 
You don't need to form an organization to do what you are doing, and you don't need a REST sticker tagged all over it. It's like you're asking, "Can I get away with this?", and everyone is telling you "No!", and then you're like, "Okay, look, I really want to still do this, how about I change a whole bunch of words around and invite you all over for a party to discuss how I can do something you don't approve of (but no guarantees I'll listen to you)."

Your biggest problem is that you are stuck in a "Fortune 500" mindset, where this behavior is par for the course. "We don't like the numbers you engineers came up with. We think building this factory in this economy is a great idea. Change the numbers to prove us right." I had no idea Red Hat culture had succumbed to this.
--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote:
> Imagine if the same tack was taken with the coined phrase
> Object-Oriented-Programming? Or even worse, as you say, it was
> trademarked? If it had been, OOP wouldn't have been called OOP, it would
> have been called something else entirely.

A tangential point, apart from the rest of this discussion, is that Alan Kay never defined the term he coined (and later regretted it). Roy Fielding, however, did define REST. It is an architectural style, not a set of protocols for middleware.

Speaking of Kay, he has a great quote, something like, "Once an idea grows to the speed of spreading through pop culture, education will never keep up." My educated guess is that Roy is doomed; things like REST-* will infect such pop culture, because all business people care about is marketing, not whether people actually spend the time to understand REST. If they did, they wouldn't be using your transaction manager interface to "do REST".

REST-* very much suggests to a CEO "here is what you need to do REST", which is not the case, as others have pointed out. Again, Roy's comparison to IBM web services is spot-on.

Anyway, I doubt very much this crap matters at the bits-and-bytes level, and what most businesses would benefit from is an open source, free/libre service bus that competes with billion-dollar stalwarts like Progress. I just don't see your middleware standard being a huge deal, certainly not important enough to earn the name REST-*. At least the Amazon folks didn't call theirs "REST-DB". No, it's SimpleDB. Why not SimpleMiddleware or something...
* Mark Baker <distobj@...> [2009-09-20 06:50]:
> Nothing says the server can't then immediately - or at any time
> of its choosing - make new representations of that resource
> available.

I sense a tacit expectation that the client will know of this recreation semantic, i.e. that despite deleting the resource, it will know that it can GET it again at the same location immediately afterwards, without having discovered this “new” location via hypermedia. That would be overloading in my book.

In contrast, with the empty PUT approach, that assumption would be completely natural – the client never deleted the resource, so it has no reason to assume it has gone away.

If the client had to rediscover the location of the resource via hypermedia after a DELETE, I would have no problem with that approach. That just reduces to “HTTP is not a filesystem”.

Regards,
-- Aristotle Pagaltzis // <http://plasmasturm.org/>
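The empty-PUT alternative Aristotle prefers might look like this sketch (the class and its methods are invented for illustration, not a real API): the client replaces the cart's state wholesale, so there is no hidden recreation semantic to know about.

```python
class CartResource:
    """Toy cart resource; PUT replaces its state wholesale."""

    def __init__(self, items):
        self.items = list(items)

    def put(self, representation):
        # PUT replaces the resource's state with the supplied representation.
        # The resource never goes away, so the client can GET it again
        # immediately with no out-of-band knowledge required.
        self.items = list(representation.get("items", []))
        return 200

    def get(self):
        return 200, {"items": self.items}


cart = CartResource(["book", "pen"])
cart.put({"items": []})   # "clear" = PUT an empty representation
status, body = cart.get()
```

Here the client's expectation that the cart still exists is entirely natural: it never asked for it to be removed.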
Nick Gall mentioned the paper he submitted to the W3C Workshop on Services for Enterprise Computing. Here's the link to the workshop report:
http://www.w3.org/2007/04/wsec_report
As the organizer of the Workshop, my goal was to find out what more, if anything, could be done to standardize enterprise software. My interest in this topic dates to 1990 and the Multivendor Integration Architecture project sponsored by NTT. The idea is that if enterprise software could achieve a level of standardization comparable to the Web's, the benefits to business, industry, and society would be even more significant than what's been achieved with the Web. In retrospect, this may have been a slightly exaggerated view, since the Web has been shown to benefit society well beyond the simple economic equations involved in reducing the cost of email, publishing, and information sharing.
It started during a W3C Advisory Committee meeting when I participated in a panel discussion on where the W3C should be in 2010 (this was Dec. '05). I was trying to make the case for increased investment in Web services, since my company (IONA at the time) had joined the W3C in early 2000 specifically to support the standardization of SOAP. I had also served as an editor of the W3C Web Services Architecture specification, and in that context participated in the intense "HTTP vs SOAP" discussions.
I made sure to invite as many REST proponents to the Workshop as Web services proponents, since the REST proponents were (and still are of course) making strong arguments that WS-* is unnecessary.
The result was a few recommendations to improve the Web services standardization process (adopting a JMS mapping, which has happened, and initiating a core WS working group, which I think still hasn't), and a general feeling that the worlds of REST and Web services were very distinct and separate.
Some of the presenters, most notably Noah Mendelsohn, described co-existence strategies. It was also noted that WS-Addressing had features that broke REST, and that WSDL 2 and SOAP 1.2 had included features that were more REST-friendly. But no vendors seemed interested in implementing them.
This is where I would suggest Red Hat/JBoss start if they are truly interested in REST. Implement the RESTful capabilities of WSDL 2 and SOAP 1.2. That will bring Web services closer to REST.
The other suggestion, i.e., to add WS-* style specifications to REST was debated also, but there was not strong interest in this. The feeling was more that these capabilities have a place in the overall scheme of things, but on the other side of the RESTful interfaces. They are not things to be added to HTTP for example.
Personally I tend to look at the world of IT as divided between "scale up" or mainframe-style application architectures, and "scale out" or HTTP-based (RESTful, more or less) architectures. I believe the IT world will eventually move to the HTTP scale-out architectures, but existing systems will need to be redesigned, and this will take a long time. Meanwhile, WS-* seems to provide value in this environment, as several Web services users confirmed during the workshop.
Finally, as someone who helped get Web services started, I have to say their adoption has been somewhat lacking; in particular, the document-oriented style has not been widely implemented, and the result of vendor adoption has been to design annotations and attributes embedded in .NET languages and Java, so developers don't have to use XML directly. This seems a result of vendors continuing to focus on competing with each other over developer hearts and minds rather than on delivering the benefits of a new, and potentially breaking, approach to distributed computing.
At least two standards bodies already exist for dealing with extensions to HTTP and proposals for new specifications, and they have not done so. I don't see the need for a new organization in this area, and I don't see industry consensus around the need for adding Web-services-style specs to REST.
I also sympathize with Roy and others who have worked so diligently to promote and educate the industry to think differently about distributed computing - this should be supported rather than diluted.
Eric
________________________________
From: johnzabroski <johnzabroski@...>
To: rest-discuss@yahoogroups.com
Sent: Sunday, September 20, 2009 2:44:19 AM
Subject: [rest-discuss] Re: We're listening: REST-* changes
Bill,

A few quickies.

I think that many of us are at the point where it will be useful to start moving forwards with REST-aligned specifications and supporting standards bodies targeted "off the web" at enterprises. This could roughly take one of two forms:

1. A set of specifications based on a range of technologies that focus on REST constraint compliance, or
2. A more HTTP-centric set of specifications working on providing features not available on the Web and probably not consistent with REST constraints, such as pub/sub, transactions, reliable POST, etc.

Methinks a little from column A, and a little from column B. The two seconds I have spent reading descriptions on your site suggest more (2) than (1).

While this is a good thing to be paying standards attention to, the first thing you will want to do is break the stateless constraint of REST. This will take what is potentially a major frustration for Roy - ripping off the name of his architectural style to use as the codeword for a specific non-web architecture - and turn it into something much worse, where the so-called REST architecture actually doesn't comply with the REST style. I really would suggest that a name like HTTP-* or Web-* rather than REST-* might go a long way towards heading off a significant amount of angst.

That's a naming problem, not a technical problem. Personally I am looking for a forum where I can do exactly the kinds of things you seem to be looking at: focus on enterprise use of HTTP, and fill in gaps that can be filled in at small scales in ways that are sympathetic to HTTP but don't comply with REST architecture or are for some other reason not immediately applicable to the Web. I'm ready to talk pub/sub if you are :)

Moving on to RESTful Services vs RESTful Interfaces, well, I wish I could show you some early work from the upcoming SOA with REST book ;) I think there is some common ground we could come up with.
Perhaps I'll see members of REST-* at the SOA Symposium next month?

Benjamin.

2009/9/19 Bill Burke <bburke@...>
>
> __Message Change__
> * It is now an open source project.
> * We will be publishing the final content on IETF as a set of RFCs.
> * We're still focusing on middleware and middleware services.
>
> "REST-* is an open source project dedicated to bringing the architecture
> of the web to traditional middleware services."
>
> "REST has the potential to re-define how application developers
> interact with traditional middleware services. The REST-* community
> aims to re-examine which of these traditional services fit within the
> REST model by defining new standards, guidelines, and specifications.
> Where appropriate, any end product will be published at the IETF."
>
> __Governance changes__
> * No more trying to be a better JCP. We'll let the IETF RFC process
> govern us when we're ready to submit something.
> * An open source contributor agreement similar to what Apache, Eclipse,
> or JBoss has, to protect users and contributors.
>
> (FYI we already require ASL, open source processes, no field-of-use
> restrictions, etc...)
>
> If you have any other suggestions, let me know:
>
> http://www.jboss.org/reststar/community/gov2.html
>
> __RESTful Interfaces for Un-RESTful Services__
>
> Many traditional middleware services do not fit into the RESTful style
> of development. An example is 2PC transaction management. Still, these
> services can benefit from having their distributed interface defined
> RESTfully. The nomenclature will be RESTful Service vs. RESTful Interface.
>
> * 2PC transactions would be considered a RESTful interface under
> REST-*.org. Meaning using it makes your stuff less RESTful, but at
> least the service has a RESTful interface.
>
> * Messaging, compensations, and workflow services would be considered
> "RESTful Services" that fit in the model.
>
> __GUIDELINES SECTION__
>
> This is where I want to talk about how existing patterns, RFCs, and such
> fit in with the rest of what we're doing. An example here could be
> security. What authentication models are good when? When should you
> use OAuth and OpenID? How could something like OAuth interact with
> middleware services?
>
> Some of this stuff is already up on the website. (You may have to reload
> it to see it due to cache-control policies.)
>
> Finally, apologies for the jboss.org redirection. It is a problem with
> our infrastructure.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
2009/8/31 Jan Algermissen <algermissen1971@...>:
> What I am currently trying to get my head around is this:
>
> When viewing a REST API as essentially a set of link semantics, how
> can we version such APIs? And do we need to version them at all?
>
> I looked at the Atom Publishing Protocol and it does not say that it
> is a particular version. Suppose we'd add another top level document
> type that brings in new capabilities - would that lead to APP 2.0? And
> how would one communicate this to clients?

Governance of a REST architecture is applied at a uniform contract level and at a service interface description level. Version control of a uniform contract is broken up into several facets:

1. A syntax for resource identifiers that can
1.1 Be resolved to the point where requests can be issued based on the identifier
1.2 Include enough characters to allow a service to defer state back to its consumers within these identifiers. Resource identifiers act as messages from the service to itself when state has been returned to the consumer between requests.
2. A set of methods that are abstractions capable of expressing a range of different service capabilities. This may be one specification (e.g. RFC 2616 defining the methods - including response codes - of HTTP/1.1) or split into multiple specifications to cover all of the methods and the fundamental communication patterns they permit.
3. A set of media types, which will almost certainly have corresponding individually versioned specifications.

Each service itself has a description of its interface in terms of a set of resources and methods on those resources that correspond to the capabilities of the service. This is versioned independently of the uniform contract but contains references to the uniform contract for method and media type definitions.
At any particular time there will (should) be a small number of ways of moving information around the architecture (the methods) that, while they may appear low-level (e.g. GET, PUT, DELETE), are each high-level abstractions of a significant number of service capabilities. For each kind of information that can be exchanged in the architecture there are a small number of ways of encoding that information. In general, each resource is expected to understand all of the elements of the uniform contract that are relevant to it and which correspond to service capabilities the service wishes to express.

The outcome is a high level of integration maturity. One URL can be substituted for another in the architecture at runtime. Regardless of the specific URL or service, the consumer knows what kind of message to construct. The service knows how to interpret the request and how to return an appropriate response in a form the consumer understands. The uniform interface of each resource enables communication and then gets out of the way, permitting dynamic reconfiguration to occur as required.
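Benjamin's point 1.2 - identifiers that "defer state back to consumers" - can be illustrated with a pagination token carried in the URI. The `/orders` path and `page` parameter are invented for this sketch; the point is only that the service holds no per-client session between requests.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def next_page_uri(base, page):
    # The service defers state back to the consumer by baking it into the
    # identifier; the URI later acts as a message from the service to itself.
    return base + "?" + urlencode({"page": page})

def page_from_uri(uri):
    # On the next request the service recovers the state from the identifier
    # alone - no session store is consulted.
    return int(parse_qs(urlparse(uri).query)["page"][0])

uri = next_page_uri("http://example.org/orders", 3)
assert page_from_uri(uri) == 3
```

Any server instance can answer the follow-up request, which is part of what makes the substitutability described below possible.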
Jan Algermissen wrote:
>
> On Sep 19, 2009, at 3:17 PM, Bill Burke wrote:
>
>> Furthermore we launched an effort, not a product. We admitted from the
>> beginning that what we had was old, raw, and unfinished. Our goal is to
>> define RESTful middleware,
>
> I am still wondering what it is you think needs to be defined as part of
> your effort?
>
> "RESTful middleware" is a contradiction in itself; if you'd build a
> system using Web architecture you simply need no middleware because it
> is already globally deployed in the form of TCP/IP, DNS, intermediaries...

The only way I can answer this is that middleware isn't just about protocols. Middleware is also about services (think Amazon S3 or SQS; even a search engine is a middleware service). (Middleware is also about frameworks - think Wicket, Struts, Object-Relational Mapping (Hibernate, JPA) - but that isn't important in this discussion.)

> Or would you say that middleware is needed when you perform a shopping
> session at Amazon?

No, but it may be needed by back-end systems to coordinate order fulfillment.

> To repeat the question (so it does not get lost): What is it you think
> is missing in order to apply REST to enterprise IT (and therefore should
> be defined as part of your effort)?

The answer is, I'm not sure yet, but I have a few ideas. This is what I want REST-* to discover. Specifically, I'm on the fence on whether a compensation service or a messaging service is truly RESTful or not. For workflow/BPM though, I think REST (specifically HATEOAS) can have *HUGE* benefits (I'll be posting some of my thoughts and initial specs next week). BUT... for those that don't fit in a RESTful architecture (and I think we all agree a Transaction Manager is one of those that doesn't), can these services benefit from a RESTful interface? This is a question I also want to answer. REST-*.org will not be an academic exercise though.
It will be defining middleware service specifications that are meant to be used to solve problems. The designs will not be perfect or fully RESTful at first; we will need to iterate.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
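One way to picture the HATEOAS benefit Bill anticipates for workflow: a task representation advertises only the transitions currently allowed, so clients follow links rather than hard-coding the process. The link relations and URI patterns below are invented for the sketch.

```python
def task_representation(task_id, state):
    """Build a toy hypermedia representation of a workflow task."""
    links = {"self": "/tasks/%d" % task_id}
    if state == "open":
        # Only an open task offers the approve/reject transitions.
        links["approve"] = "/tasks/%d/approval" % task_id
        links["reject"] = "/tasks/%d/rejection" % task_id
    elif state == "approved":
        links["archive"] = "/tasks/%d/archive" % task_id
    return {"id": task_id, "state": state, "links": links}


rep = task_representation(7, "open")
# A client drives the workflow purely from the advertised links; changing
# the process server-side changes the links, not the client code.
```

This is the sense in which hypermedia, rather than a BPM engine baked into clients, becomes the engine of the workflow's state.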
Benjamin Carlyle wrote:
> Bill,
>
> A few quickies.
>
> I think that many of us are at the point where it will be useful to
> start moving forwards with REST-aligned specifications and supporting
> standards bodies targeted "off the web" at enterprises. This
> could roughly take one of two forms:
> 1. A set of specifications based on a range of technologies that focus
> on REST constraint compliance, or
> 2. A more HTTP-centric set of specifications working on providing
> features not available on the Web and probably not consistent with REST
> constraints such as pub/sub, transactions, reliable POST, etc.
>
> Methinks a little from column A, and a little from column B. The two
> seconds I have spent reading descriptions on your site suggest more (2)
> than (1).
>
> While this is a good thing to be paying standards attention to, the
> first thing you will want to do is break the stateless
> constraint of REST. This will take what is potentially a major
> frustration for Roy in ripping off the name of his architectural style
> to use as the codeword for a specific non-web architecture and turn it
> into something much worse where the so-called REST architecture actually
> doesn't comply with REST style. I really would suggest a name like
> HTTP-* or Web-* rather than REST-* might go a long way towards heading
> off a significant amount of angst.
>
> That's a naming problem, and not a technical problem. Personally I am
> looking for a forum where I can do exactly the kinds of things you seem
> to be looking at: focus on enterprise use of HTTP, and fill in gaps that
> can be filled in at small scales in ways that are sympathetic to HTTP
> but don't comply with REST architecture or are for some other reason not
> immediately applicable to the Web. I'm ready to talk pub/sub if you are :)

These are all great points. The output of REST-* is not meant to be an academic exercise.
We want to create specifications that can be implemented and solve specific problems. While our goal is to be architecturally pure, software, in general, is very rarely architecturally pure in its final form. As a result, initial iterations (and even final ones) may be a mix of both HTTP- and REST-centric designs. Remember, WE JUST STARTED!

This, IMO, does not mean we should change the name of the site or coin a new buzzword. The goal we are striving for is to be RESTful. REST is our ideal.

I think what we can do is make it clear what parts of each specification aren't RESTful and are merely HTTP-centric. Maybe even a specific, very visible link, "What's RESTful, What's just HTTP?", for each specification would be warranted. This link is where the spec designers could state what they think is RESTful or not, and where people could post links to blogs and articles that discuss what is and isn't RESTful about our specifications.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Sep 20, 2009, at 5:34 PM, Bill Burke wrote:
> I think what we can do is to make it clear what parts of each
> specification aren't restful and more HTTP-centric. Maybe even a
> specific very visible link "What's RESTful, What's just HTTP?" for each
> specification would be warranted. This link is where the spec designers
> could state what they think is restful or not,

I think this is dangerous, as it leads people to think that they can break certain constraints of REST as they wish, as long as they put a red warning sign on the specs.

The thing that makes REST superior to any other practiced approach to enterprise distributed systems is not the specific style it specifies but that it specifies an architectural style *at all*. The constraints of REST all contribute to a set of *predictable* properties of a RESTful system and enable system designers to reason about the system before they create it. If you randomly ignore constraints of REST without first going through the same exercise Roy went through in his thesis, you create the same style-less mess you have in existing enterprise IT systems.

If the proposed effort intends to use a modified REST, it must first create the theoretical framework for evaluating the modifications. This is far more important than actually specifying anything new.

OTOH, I am having problems determining whether you want to stick with REST or not. The former means sticking with REST and not a modification of it; the latter will only bring value if you investigate the resulting architectural style first.

Jan

> and where people could
> post links to blogs and articles that discuss what is and isn't restful
> about our specifications.

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Sun, Sep 20, 2009 at 11:34 AM, Bill Burke <bburke@...> wrote: > > These are all great points. The output of REST-* is not meant to be an > academic exercise. We want to create specifications that can be > implemented and solve specific problems. While our goal is to be > architecturally pure, software, in general, is very rarely > architecturely pure when in its final form. As a result, initial > iterations (and even final ones) maybe a mix of both HTTP and > REST-centric designs. Remember, WE JUST STARTED! > What middleware problems are you trying to solve? I only browsed the REST-* website, but I didn't see a list of problems the specs are designed to solve. The overview is the closest thing, but that seems to just be broad statements like avoiding envelope data formats. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek
2009/9/20 Bill Burke <bburke@...>
>
> These are all great points. The output of REST-* is not meant to be an
> academic exercise. We want to create specifications that can be
> implemented and solve specific problems. While our goal is to be
> architecturally pure, software, in general, is very rarely
> architecturally pure when in its final form. As a result, initial
> iterations (and even final ones) may be a mix of both HTTP and
> REST-centric designs. Remember, WE JUST STARTED!
>
> This, IMO, does not mean we should change the name of the site or
> coin a new buzzword. The goal we are striving for is to be RESTful.
> REST is our ideal.
>
> I think what we can do is to make it clear what parts of each
> specification aren't RESTful and more HTTP-centric. Maybe even a
> specific very visible link "What's RESTful, What's just HTTP?" for each
> specification would be warranted. This link is where the spec designers
> could state what they think is RESTful or not, and where people could
> post links to blogs and articles that discuss what is and isn't RESTful
> about our specifications.

In your own words, you are expecting that "even final ones may be a mix of both HTTP and REST-centric designs", and you'll "make it clear what parts of each specification aren't RESTful", even by putting in a "very visible link 'What's RESTful, What's just HTTP?'". So it seems clear that the intention of your effort is to create something that will be a mix(!) of REST and non-REST HTTP specifications for some kind of middleware architecture design(!). So why do you insist on a name like REST-*???

You know of course that in IT jargon the "*" is interpreted as "everything" [http://www.googleguide.com/wildcard_operator.html], so your name will appear to mean something like "Everything REST", which is, in the light of your own words, misleading if not fraudulent, as you know that at the very best you aim to specify something "RESTish+HTTPish".
REST-* is without a doubt a marketing buzzword, created as such, where you (not you personally, I mean, but the authors of the project) will try to squeeze in everything that fits your purpose of creating some "standards" that could position Red Hat as a market leader in an area where your products fit, and you don't care if it's REST or just HTTPish as long as it is perceived as "Everything REST". Why don't you reflect in the name the dual personality of REST + HTTP-centric designs that you clearly assume it has? Coin a new buzzword if you have to; at least you won't be misleading. Or call it REST--, meaning REST-less-less.
On Sep 20, 2009, at 4:16 PM, Benjamin Carlyle wrote: > 2009/8/31 Jan Algermissen <algermissen1971@...>: >> What I am currently trying to get my head around is this: >> >> When viewing a REST API as essentially a set of link semantics how >> can we version such APIs? And do we need to version them at all? >> >> I looked at the Atom Publishing Protocol and it does not say that it >> is a particular version. Suppose we'd add another top level document >> type that brings in new capabilities - would that lead to APP 2.0? >> And >> how would one communicate this to clients? > > Governance of a REST architecture is applied at a uniform contract > level and at a service interface description level. Can you explain what you mean by "uniform contract level" and "service interface description level" and how governance is applied to them? > Version control of > a uniform contract is broken up into several facets: > 1. A syntax for resource identifiers that can > 1.1 Be resolved to the point where requests can be issued based on the > identifier > 1.2 Includes enough characters to allow a service to defer state back > to its consumers within these identifiers. Resource identifiers act as > messages from the service to itself when state has been returned to > the consumer between requests. > 2. A set of methods that are abstractions capable of expressing a > range of different service capabilities. This may be one specification > (eg rfc2616 defining the methods - including response codes - of > HTTP/1.1) or split into multiple specifications to cover all of the > methods and the fundamental communication patterns they permit Sorry, I seem unable to see what you mean, can you put these thoughts in different words? > 3. 
A set of media types, which will almost certainly have
> corresponding individually versioned specifications
>
> Each service itself has a description of its interface in terms of a
> set of resources and methods on those resources that correspond to the
> capabilities of the service.

But in REST you do not describe that but let the client discover it. Or am I misunderstanding you?

> This is versioned

'This' being what exactly?

> independently of the
> uniform contract but contains references to the uniform contract for
> method and media type definitions.
>
> At any particular time there will (should) be a small number of ways
> of moving information around the architecture (the methods) that while
> they may appear low-level (eg get, put, delete) are each high-level
> abstractions of a significant number of service capabilities. For each
> kind of information that can be exchanged in the architecture there
> are a small number of ways of encoding that information. In general,
> each resource is expected to understand all of the elements of the
> uniform contract that are relevant to it and which correspond to
> service capabilities the service wishes to express.
>
> The outcome is a high level of integration maturity. One URL can be
> substituted for another in the architecture at runtime. Regardless of
> the specific URL or service the consumer knows what kind of message to
> construct. The service knows how to interpret the request and how to
> return an appropriate response in a form the consumer understands. The
> uniform interface of each resource enables communication and then gets
> out of the way, permitting dynamic reconfiguration to occur as
> required.

Hmmm - and how does all that address the problem of versioning the set of semantics that make up a certain RESTful API?
In the enterprise, it is simply not enough to implement a service that uses a bunch of media types and some extension elements and some link relations without providing a means to *manage* the bundle of these semantics. The communication between clients and servers may well work, but inside an enterprise there is indisputably a need to manage and plan software evolution, e.g. because you need to assess resources and budget. Hence my question regarding versioning of the set of semantics that constitute a REST API.

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Hi,

in order to divert the discussion away from REST-* and because no one else has mentioned it yet, I would like to point you to another intellectual challenge: Computational REST, by Justin R. Erenkrantz. He just filed his dissertation on "A New Model for Decentralized, Internet-Scale Applications".

If you are interested, use your hypermedia device to go to: http://www.erenkrantz.com/CREST/

Regards,
Nicolai
In theory, versioning using media types sounds like a good idea, but in practice it brings in some operational challenges.

a. Media type versioning assumes that the same server instance supports all versions. But larger systems may not be able to support multiple versions on the same runtime.

b. It further assumes versioning changes can be represented by representations. In reality, versioning changes do bring in new resources and new processing flows.

c. The purported benefit of versioning by media types is that client-side databases don't need to be changed since the URIs are the same. This is fine in small systems, but migrating a client from one version to another may require not just code changes but database upgrades. This may be due to changes in the information content of representations that the clients need to store.

d. Not all HTTP-level software can distinguish between representations of a resource.

Given all this, even though URI-based versioning looks inelegant, it is more pragmatic and is proven to work.

Subbu

On Sep 19, 2009, at 6:08 PM, Darrel Miller wrote:
>
> This presentation has a segment where they discuss their successful
> use of media types for versioning of information being passed
> between systems within a financial trading company.
>
> http://www.infoq.com/presentations/restful-financial-systems-integration
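The contrast Subbu draws can be sketched as two dispatch styles (the media type names and URI paths are invented for illustration; neither is a real registered type or API):

```python
def dispatch_by_media_type(accept_header):
    # Media-type versioning: one URI serves several representation versions.
    # This assumes the same server instance supports them all (point a above).
    handlers = {
        "application/vnd.example.order.v1+xml": "render_v1",
        "application/vnd.example.order.v2+xml": "render_v2",
    }
    return handlers.get(accept_header, "406 Not Acceptable")


def dispatch_by_uri(path):
    # URI versioning: /v1/ and /v2/ can be routed to entirely separate
    # deployments - the operationally pragmatic property Subbu argues for.
    if path.startswith("/v1/"):
        return "v1 backend"
    if path.startswith("/v2/"):
        return "v2 backend"
    return "404 Not Found"
```

The media-type style keeps URIs stable for clients, while the URI style lets an operator retire or scale each version's backend independently.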
> While our goal is to be > architecturally pure, software, in general, is very rarely > architecturally pure in its final form. As a result, initial > iterations (and even final ones) may be a mix of both HTTP and > REST-centric designs. Remember, WE JUST STARTED! Standards are a lot of things, but changing quickly in an iterative manner certainly isn't one of them. That's why there are entities like the TAG: to prevent people from throwing out specs that break the rest of the web architecture through a quick release, getting vendors to push products, and leaving us stuck with crap for years and years to come. Seb
My 2 cents. 1. What are you exposing and how? As I see it, you don't actually need to expose every single element of your domain model, since that may not only require the client to manage much of the business logic, but will also create far too fine a granularity. So, in this case you have chosen to expose individual orders as resources. That means you need to manage their state individually, directly. That is, I don't know what other impact cancelling an order would have (rolling back inventory changes? Notifications?); all those other things you need to do manually, and then you need to change the state of the order, manually. That is too much business logic in the client, don't you think? An alternative is not to expose individual orders as resources, but an ordering system that is modeled as a collection. You post order data, and the collection will create its orders and manage all the other little things. Deleting an order id from the collection will cancel it. Or, you can even post a cancellation order. Be careful with these: it is not an RPC call. You are posting an order cancellation TO THE COLLECTION; the cancellation is added to the collection, the collection processes it, and that changes the state of the active order. You still have both orders there, so you then have history. (A note here: a collection may contain multiple types of resources, and it controls them, even with the power of changing them. What do you think?) Sincerely, exposing the state of an order as a resource is going too far. 2. Any indication of an action in the URL makes it RPC. URLs are just there to indicate which resource receives your operation. If it is the collection, use POST to add something to it. If you expose the order as a resource (finer grain), and it has a state, the best way would be to post a state change to that resource, which will affect the resource.
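A minimal sketch, under my own assumptions, of the collection idea described above: the `OrderCollection` class, its state values, and the shape of a cancellation are all hypothetical. The point it illustrates is that the cancellation is posted to the collection, which keeps it in history and flips the order's state server-side, so rollbacks and notifications never leak into the client.

```python
class OrderCollection:
    """The collection owns its orders; clients never flip state directly."""

    def __init__(self):
        self.orders = {}    # order id -> {"data": ..., "state": ...}
        self.history = []   # every posted member, cancellations included
        self._next_id = 1

    def post_order(self, data):
        # POST order data to the collection; it creates the order itself
        order_id = self._next_id
        self._next_id += 1
        self.orders[order_id] = {"data": data, "state": "active"}
        self.history.append(("order", order_id))
        return order_id

    def post_cancellation(self, order_id):
        # POST a cancellation TO THE COLLECTION: it becomes a member too,
        # and processing it changes the state of the active order, keeping
        # rollbacks/notifications on the server side
        self.history.append(("cancellation", order_id))
        self.orders[order_id]["state"] = "cancelled"


# Usage: both the order and its cancellation remain in the history
orders = OrderCollection()
order_id = orders.post_order({"item": "book"})
orders.post_cancellation(order_id)
```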
In this case, I don't think it is correct that a change to one resource should change other resources, so the rollback and any notifications should be handled carefully. Another point against exposing such details is the idea that resources do not live on a single server. At any point, the server that receives your operations may not be the same, and thus you must ensure the resource being affected is the same for all servers. That is best managed using coarse-grained resources. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > Let's say I have an Order resource in an ecommerce Order Entry system. > How would I implement my service so that I can cancel an order rather > than delete it? One is to have the cancel state as part of the order. > Then I can just put a new representation with the cancelled state set to > true: > > PUT /orders/333 > content-type: application/xml > > <order id="333"> > <cancelled>true</cancelled> > ... > </order> > > Seems kinda heavy to me. > > Would it still be restful to define a "cancelled" URI that you could put > or post to to change the state? > > /orders/333/cancelled > > or > > /orders/333?cancel=true > > You don't even need to send data to change the state in this scenario. > But the problem with this from a pure RESTful standpoint is, isn't this > a mini-RPC? My thought at first is YES IT IS.... > > .... But, consider if you have cancelling as part of a HATEOAS > > <order id="333"> > <atom:link rel="CANCEL" href="http://example.com/orders/333/cancelled"/> > ... > </order> > > > Now, I have a CANCEL link that if I follow changes the state of my > resource. Doesn't seem so RPCish now that I've embedded it as a link. > Maybe the answer is /orders/333/cancelled isn't very RESTful by itself, > but when combined with HATEOAS it is? > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
> Here I have to disagree - I don't have any idea what an "official REST > project" might be doing. Hi, I have to second Stefan. REST is the name of this architectural style, and it is well defined by Roy's thesis. Not much left to do. We/you could further investigate architectures that are, so to speak, "RESTful" (follow the style) - but we cannot call them REST. Hence a "REST project" does not make sense. Regards, Nicolai
--- In rest-discuss@yahoogroups.com, Luke Crouch <luke.crouch@...> wrote:
>
> I think modified option 1 for the new forms - i.e.,
>
> GET /profiles/new
>
> And for edits:
>
> GET /profiles/{profile-id}/edit
>
> -L
>
Hello.
This post to an old thread is mostly for testing (since my first post was not published). Still, this topic is interesting.
The new and edit words in the URLs, even if not intended that way, sound too much like RPC.
A principle is that URLs should be used to locate things, to identify resources.
Maybe
GET /profiles/newForm
and
GET /profiles/editForm
or even
GET /profiles/forms
that returns a list of forms where you choose the actual URL for the form you need.
Then, you fill out the form and post it to profiles
POST /profiles
profiles is a resource that knows what to do with the form, either creating a new profile or editing an existing one.
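As a Python sketch of the flow just described (the form rels, the `creation`/`edition` hrefs, and the in-memory store are my inventions, not from the thread): the client first fetches the list of forms, picks one by relation rather than by a verb baked into a URI, and then posts the filled form to /profiles, which decides whether to create or edit.

```python
PROFILES = {}  # in-memory stand-in for the profiles resource state


def get_forms():
    # GET /profiles/forms: a list of links; the client chooses by rel,
    # so no "new"/"edit" verbs need to appear in bookmarked URIs
    return [
        {"rel": "create", "href": "/profiles/forms/creation"},
        {"rel": "edit", "href": "/profiles/forms/edition"},
    ]


def post_profiles(form_data):
    # POST /profiles: the resource decides whether the submitted form
    # creates a new profile or edits an existing one
    profile_id = form_data.get("id")
    if profile_id is None:
        profile_id = max(PROFILES, default=0) + 1
    PROFILES[profile_id] = form_data
    return profile_id
```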
Cheers!
William Martinez Pomares
Hello. Reading through all this good material about REST, I find some old-time discussions around. Someone suggested naming things is not so good, but I love naming things so I know what I am referring to. So, having all of you as REST fans, I wanted to present a classification I did two days ago while riding the bus to work. Silly? It may be, but I guess it helps us understand where we stand in terms of REST usability and knowledge. API Makers: I find them everywhere. They have a system, usually not built with REST in mind, and they want an API created. They usually think REST is an API-making technique or recipe for the web. Subcategories: - URI Jugglers. These are the ones who think REST is all about creating URIs, and nothing more. So their discussions are solely focused on URIs, and their presentations are about URI definitions. - RPCers. A bad group who think REST is a way to do RPC in disguise using URIs in a web API. Most of them don't know they speak RPC at all. - Exposers: This type is repeated below. These are the guys who think you need to expose things in REST using resources. So REST is an API for exposing things on the web. - CRUDers: Another repeated group. They think REST is a web API for CRUD. Simple. Mappers: This other category may use the API idea, but they actually think REST is a representation type, and the work to be done is to map everything now in use onto that new type. Interesting? - CRUDers. Again, the idea is that CRUD can be mapped naturally to HTTP operations, and that makes it RESTful. - HTTPers. They believe REST is HTTP. Deep enough. - Exposers. Again too. They usually try to map all classes, data entities, and elements into resources, and then call their systems RESTful. FAD followers?: This group holds the remainder of the types. Usually, they tend to follow a lead.
- Standard Haters: Here you have all those who think standards are evil and that REST is an anarchy where you have the freedom to do whatever you like, so they follow REST doing whatever they want. - KISS lovers. These are the ones who like things to be simple. And someone told them REST is easy, so they follow along doing easy things with URIs. There are lots of URI Jugglers in this group. - Servicers. They think services are good, and someone told them REST is a way to do services without SOAP. So they follow. - BuzzWorders. This is a vast majority. They like buzzwords, so they follow REST just because it is cool and everyone talks about it. There are some buzz creators too, with things like ROA and REST in WOA. No pun intended on REST-*. Is there someone I'm missing? Well, yes, probably the group that knows REST as it actually is and understands it. That may be a one-person group (yes, Roy). I may not be saying all those beliefs up there are wrong. But I'm NOT saying they are good, at all. What do you think? Do you find yourself in any of those groups? William Martinez Pomares.
--- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote:
>
> What do you call the concept of "classes" or "types" of resources in
> your RESTful designs? E.g. when you decide to turn each "customer"
> into its own identifiable resource - http://example.com/customers/1234
> - what does http://example.com/customers/{id} describe? Both "resource
> class" and "resource type" would work, but don't seem really convincing.
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
Hi. I'm reposting this to the whole group.
----
Hi Stefan.
As has been said in the long list of messages here, a resource is a typeless thing. The problem I see implicit in the question is the confusion of REST design with domain design.
There is a mapping of domain objects or elements into resources, which leads the designer to map properties of the elements onto the resources, such as the class or type. That is not a good practice.
First, the URI definition does not have to bear any semantics for the user! So, http://example.com/customers need not mean customers at all, and the client should not base its actions on the word customers. It could just as well be http://example.com/pas9132ad, where pas9132ad means customers to the server.
Along the same line, knowing you are accessing a resource is surely enough. There is no need for the client to know that the resource is a customer, nor its type or class in general. That type should be discovered, the same as the actions that can be performed upon it.
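A tiny Python sketch of that discovery idea, reusing the opaque pas9132ad path from the message above; the rel URIs, the second opaque path, and the representation shape are hypothetical. The client interprets only link relations and treats the URIs as opaque tokens:

```python
# A representation the client received; what things *are* is conveyed
# by the rel values, never by words embedded in the hrefs
representation = {
    "links": [
        {"rel": "http://example.com/rels/customer", "href": "/pas9132ad/1234"},
        {"rel": "http://example.com/rels/orders", "href": "/x77q/9"},
    ]
}


def follow(rep, rel):
    # Only the rel carries meaning; the href is an opaque token the
    # server is free to change or obfuscate at any time
    for link in rep["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None
```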
So, how do you describe that in a design? Well, the discussion may start by asking what a RESTful design is! You can model your inner systems as you like; you can have types, classes, and use other, non-REST styles for your applications. Then you may want to go web, and create a full architecture on top of all those little systems. You then simply define resources (typeless) and then the operations upon them, and the way all that is discovered. See? I see no type thinking there.
The same applies if your application is architected using REST. Then your components are the REST components, and the definition of resources and operations is a lower-level (tactical design) process, not at the architecture level, and thus not at the REST level.
William Martinez Pomares.
LOL I love: 'URI Jugglers' ... have met some very insistent ones recently ... they are among the hardest to talk to :-) Jan On Sep 20, 2009, at 5:12 PM, willmarpo wrote: > [...] -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hello Bill, > The answer is, I'm not sure yet, but I have a few ideas. This is > what I > want REST-* to discover. Specifically, I'm on the fence on whether a > compensation service or a messaging service is truly RESTful or not. > For workflow/bpm though, I think REST (specifically HATEOAS) can have > *HUGE* benefits (I'll be posting some of my thoughts and initial specs > next week). I'd agree with that sentiment - hypermedia formats and protocols are awesome for projecting workflows over the Web. But what "specs" do you think we need? We already have hypermedia-friendly formats and protocols with lovely link relations to boot. Folks are already building systems using them too. Jim
Huh? So now it's "HEADERS as the engine of application state"? ;-) I can see a link header replacing the <link> elements of an Atom entry document or I suppose any other hypermedia link that applies to the entire document/body. But hypermedia allows you to structure your information and apply links to specific parts of that structure. For example, an Atom feed document has individual entries, each of which may have <link> elements that apply only to the containing entry. You couldn't pull all those links out into the HTTP headers, could you? Nor could you pull all of the hyperlinks out of an HTML document. Hypermedia also allows you to attach other instructions on using a link (e.g. Forms). I don't see that being carried into HTTP headers very easily. So I'm with you that an entry document on its own could be replaced by straight content with link headers. I have a harder time seeing Atom service documents or feed documents working that way. And then entry documents just become a matter of consistency with the feed format -- having to work with multiple encodings of the same metadata would be a bit painful. I just don't get the Atom hating here as I find it very useful. AtomPub provides a very constrained hypermedia-driven state machine model that is easy for machine-driven clients to interpret and follow. Saying a URL is the location of a collection drives the state machine. Saying that a URL is an entry's edit link drives the state machine. You need Atom (or something like it) to complement HTTP in a RESTful system even if you are just implementing CRUD (Yes I know REST is not restricted to CRUD, but it works for a lot of use cases). Hypermedia needs to define the "collections of things you can POST to" and the media types they accept. You need some way to represent the collection -- links to and descriptions of the things that were created and can now be retrieved, updated and deleted.
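The scoping problem raised here can be sketched in Python (the feed data, paging link, and helper function are invented for illustration): a single document-level link fits in a Link header, but links scoped to individual entries only make sense inside the body's structure.

```python
# One link that applies to the whole response fits in a header...
headers = {"Link": '</feed?page=2>; rel="next"'}

# ...but links scoped to individual entries need the body's structure,
# the way Atom <entry> elements carry their own <link> children
feed = {
    "entries": [
        {"id": "urn:example:1", "links": [{"rel": "edit", "href": "/entries/1"}]},
        {"id": "urn:example:2", "links": [{"rel": "edit", "href": "/entries/2"}]},
    ]
}


def edit_links(feed_doc):
    # Each edit link belongs to its containing entry; flattening them
    # all into response headers would lose that per-entry scoping
    return {
        entry["id"]: link["href"]
        for entry in feed_doc["entries"]
        for link in entry["links"]
        if link["rel"] == "edit"
    }
```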
One analogy might be that HTTP brings the file operations and Atom brings the directory structure. Atom is REST perhaps slightly refactored -- the hypermedia format is a combination of two distinct formats: one, Atom, that is driving the state machine, and the content format that represents resource state. Whereas in the Web, HTML plays both these roles (though an HTML repository of say Word documents yields a similar division of responsibility as with Atom). An Atom client is a two-stage processor -- one stage for the Atom/AtomPub processing and one stage for the content processing. Because the Atom semantics of collections and entries are more general than a single service, the Atom stage is decoupled from individual services and reusable across all Atom services. Just as the browser is decoupled from web sites (even sites that serve it content in proprietary formats that it has to hand off to other programs). Proprietary content formats do provide a form of coupling. Here, open, general (standard or non-service-specific) formats are best. So say an Atom feed of vCards for representing a set of person records supporting CRUD operations. (I still don't get why Google didn't do exactly that for their contacts API.) For me, the decoupling and reuse are the key reasons REST is interesting. Atom doesn't get you all the way there if it is being used with a proprietary content format, but it gets better decoupling than most of the alternative approaches I've seen. It's not the only way to do REST though -- even for machine-driven clients. But I have a hard time with it being recommended against outright as it's a very useful hypermedia format. Regards, Andrew Wahbe --- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote: > > Indeed. We have message and entity headers. It's like a big elephant in the > room that some crowds pretend is not there because they're headers. > > If it doesn't fit in an http header, you're probably doing it wrong.
> > > -----Original Message----- > > From: rest-discuss@yahoogroups.com [mailto:rest- > > discuss@yahoogroups.com] On Behalf Of Subbu Allamaraju > > Sent: 18 September 2009 20:37 > > To: Chuck Hinson > > Cc: Rest List > > Subject: Re: [rest-discuss] Avoid envelope formats > > > > Envelope formats, if not designed and used carefully, can reduce the > > visibility of the uniform interface. An example is an application > > encoding some "application/foobar" within atom:content. When used like > > this, the protocol aspects become less useful, which is the same as > > tunneling. > > > > HTTP does include an envelope format, although it is rarely described > > as such. HTTP messages use a MIME-like format "containing > > metainformation about the data transferred and modifiers on the > > request/response semantics" (sec 1.1, RFC-2616). This format is > > visible and extensible. When you start to design representations based > > on this characteristic, you may find that there is no need for any > > other payload format. > > > > Subbu > > > > On Sep 18, 2009, at 12:15 PM, Chuck Hinson wrote: > > > > > The following statement is on the REST-* architectural goals page: > > > > > > "Whenever possible, avoid envelope formats. Examples of envelope > > > formats are SOAP and Atom. Envelope formats encourage tunneling over > > > HTTP instead of leveraging HTTP. They also require additional > > > complexities on both the client and the server. > > > > > > Is this elaborated on somewhere? I don't think I've ever heard the > > > argument made before and I'm not sure I get why an envelope format is > > > intrinsically good or bad in a protocol. It seems orthogonal to > > > whether something is RESTful or not. > > > > > > --Chuck > > > > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > >
On Sep 20, 2009, at 8:42 PM, wahbedahbe wrote: > Huh? > So now it's "HEADERS as the engine of application state"? ;-) > I was wondering the same thing, but couldn't quite articulate it. > I can see a link header replacing the <link> elements of an Atom > entry document or I suppose any other hypermedia link that applies > to the entire document/body. > > But hypermedia allows you to structure your information and apply > links to specific parts of that structure. For example, an Atom feed > document has individual entries, each of which may have <link> > elements that apply only to the containing entry. You couldn't pull > all those links out into the HTTP headers could you? Nor could you > pull all of the hyperlinks out of an HTML document. Hypermedia also > allows you to attach other instructions on using a link (e.g. > Forms). I don't see that being carried into HTTP headers very easily. > > So I'm with you that an entry document on it's own could be replaced > by straight content with link headers. I have a harder time seeing > Atom service documents or feed documents working that way. And then > entry documents just become a matter of consistency with the feed > format -- having to work with multiple encodings of the same > metadata would be a bit painful. > > I just don't get the Atom hating here as I find it very useful. > AtomPub provides a very constrained hypermedia-driven state machine > model that is easy for machine-driven clients to interpret and follow. > Saying a URL is the location of a collection drives the state > machine. Saying that a URL is an entry's edit link drives the state > machine. > The edit links, to me, represented the first lesson in HATEOAS. Once I realized I could set the URL to anything and the client would follow the link (and the edit semantics) it was as if a light bulb turned on. Clients didn't have to know my URL structure to work.
> [...] > But I have a hard time with it being recommended against > outright as it's a very useful hypermedia format. > I second that motion. AtomPub/Atom is extremely useful.
On Sep 20, 2009, at 8:42 PM, wahbedahbe wrote: > Huh? > So now it's "HEADERS as the engine of application state"? ;-) > Headers are part of a representation. (That is not to say that Link headers are equivalent to links in the body of a representation with a well-defined media type.) Subbu
On Sun, Sep 20, 2009 at 12:45 PM, Subbu Allamaraju <subbu@...> wrote: > In theory, versioning using media types sounds like a good idea, but > in practice, it brings in some operational challenges. > > a. Media type versioning assumes that the same server instance > supports all versions. But larger systems may not be able to support > multiple versions on the same runtime. Many times a single cluster can serve all the versions that are supported. On the other hand, there is no reason requests could not be dispatched to different environments based on the mime type. > b. It further assumes versioning changes can be represented by > representations. In reality, versioning changes do bring in new > resources and new processing flows. Of course, many changes require the introduction of new flavors of resources. That would usually indicate the need for a new media type because new flavors of resources obviously mean new (or significantly changed) process flows. I don't really see how either media type or URI based versioning would be better or worse in this situation. > c. The purported benefit of versioning by media types is that client- > side databases don't need to be changed since the URIs are the same. > This is fine in small systems, but migrating a client from one version > to another version may require not just code changes, but database > upgrades. This may be due to changes in the information content of > representations that the clients need to store. Migrating a client from one version of an API to another does often require many changes. However, many changes will not necessarily invalidate all of the bookmarks (i.e., persisted references to resources) that clients have collected. URI based versioning effectively locks clients that require bookmarking into the version they started with. > d. Not every HTTP level software can distinguish between > representation of a resource.
Maybe. Fortunately, all the HTTP software I am familiar with has support for specifying and retrieving the values of HTTP header fields. Any software that does not support this very basic feature does not really support HTTP, regardless of its claims. With that capability you can implement content negotiation pretty trivially. > Given all these, even though URI based versioning looks inelegant, URI > based versioning is more pragmatic, and is proven to work. Things often look inelegant because they are. URI based versioning can be made to work for some situations and many applications, but not without a disproportionate level of effort. URI based versioning is not more pragmatic, just more common. It has many downsides, and the only thing it has going for it is that it is more common. Despite the implication otherwise, media type based versioning has been used successfully in the real world. -- Peter Williams http://barelyenough.org
On Sun, Sep 20, 2009 at 11:21 PM, Peter Williams <pezra@...> wrote: > > > > On Sun, Sep 20, 2009 at 12:45 PM, Subbu Allamaraju <subbu@subbu.org> wrote: > > In theory, versioning using media types sounds like a good idea, but > > in practice, it brings in some operational challenges. > > > > a. Media type versioning assumes that the same server instance > > supports all versions. But larger systems may not be able to support > > multiple versions on the same runtime. > > Many times a single cluster can serve all the versions that are > supported. On the other hand, there is no reason requests could not > be dispatched to different environments based on the mime type. > > > b. It further assumes versioning changes can be represented by > > representations. In reality, versioning changes do bring in new > > resources and new processing flows. > > Of course, many changes require the introduction of new flavors of > resources. That would usually indicate the need for a new media type > because new flavors of resources obviously mean new (or significantly > changed) process flows. I don't really see how either media type or > URI based versioning would be better or worse in this situation. > > > c. The purported benefit of versioning by media types is that client- > > side databases don't need to be changed since the URIs are the same. > > This is fine in small systems, but migrating a client from one version > > to another version may require not just code changes, but database > > upgrades. This may be due to changes in the information content of > > representations that the clients need to store,. > > Migrating a client from one version of an API to another does often > require many changes. However, many changes will not necessarily > invalidate of all the bookmarks (ie, persisted references to > resources) that clients have collected. URI based versioning > effectively locks clients that require bookmarking into the version > they started with. > > > d. 
Not every HTTP level software can distinguish between representations of a resource.
>
> Maybe, fortunately all the HTTP software I am familiar with has support for specifying and retrieving the values of HTTP header fields. Any software that does not support this very basic feature does not really support HTTP regardless of its claims. With that capability you can implement content negotiation pretty trivially.

While it may be somewhat tangential to this thread, the fact that some common browsers (IE and Firefox, if memory serves) send bad Accept headers ended up being a deal breaker for conneg in our case.

--peter

> > Given all these, even though URI based versioning looks inelegant, URI based versioning is more pragmatic, and is proven to work.
>
> Things often look inelegant because they are. URI based versioning can be made to work for some situations and many applications, but not without a disproportionate level of effort. URI based versioning is not more pragmatic, just more common. It has many downsides and the only thing it has going for it is that it is more common. Despite the implication otherwise, media type based versioning has been used successfully in the real world.
>
> --
> Peter Williams
> http://barelyenough.org
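Peter's claim that header access is all you need to implement content negotiation can be sketched in a few lines. This is purely an illustrative sketch, not anyone's production code; the vendor media type and its `version` parameter are invented for the example.

```python
# Hypothetical sketch of media-type-based version dispatch: the client asks
# for e.g. "application/vnd.example+xml; version=2" in its Accept header and
# the server picks a supported version. All names here are invented.
def pick_version(accept_header, supported=("1", "2")):
    """Return the first requested version the server supports, else None."""
    for entry in accept_header.split(","):
        _, _, params = entry.partition(";")
        for param in params.split(";"):
            name, _, value = param.partition("=")
            if name.strip() == "version" and value.strip() in supported:
                return value.strip()
    return None

print(pick_version("application/vnd.example+xml; version=2"))  # -> 2
print(pick_version("text/html"))                               # -> None
```

A real server would also honor q-values and fall back to a default representation, but the point stands: anything that can read a header can do this.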
> > a. Media type versioning assumes that the same server instance > > supports all versions. But larger systems may not be able to support > > multiple versions on the same runtime. > > Many times a single cluster can serve all the versions that are > supported. On the other hand, there is no reason requests could not > be dispatched to different environments based on the mime type. Yes, except that things like load balancers do IP based routing. They don't go so far as looking at HTTP headers. > > b. It further assumes versioning changes can be represented by > > representations. In reality, versioning changes do bring in new > > resources and new processing flows. > > Of course, many changes require the introduction of new flavors of > resources. That would usually indicate the need for a new media type > because new flavors of resources obviously mean new (or significantly > changed) process flows. I don't really see how either media type or > URI based versioning would be better or worse in this situation. I mean new resources, not flavors of existing resources. > > c. The purported benefit of versioning by media types is that > client- > > side databases don't need to be changed since the URIs are the same. > > This is fine in small systems, but migrating a client from one > version > > to another version may require not just code changes, but database > > upgrades. This may be due to changes in the information content of > > representations that the clients need to store,. > > Migrating a client from one version of an API to another does often > require many changes. However, many changes will not necessarily > invalidate of all the bookmarks (ie, persisted references to > resources) that clients have collected. URI based versioning > effectively locks clients that require bookmarking into the version > they started with. I wouldn't say locking - URIs will need to be replaced. This is not as bad as it sounds. There are ways to tackle this. > > d. 
Not every HTTP level software can distinguish between > > representation of a resource. > > Maybe, fortunately all the HTTP software i am familiar with has > support for specifying and retrieving the values of HTTP header > fields. Any software that does not support this very basic feature > does not really support HTTP regardless of its claims. With that > capability you can implement content negotiation pretty trivially. It does not matter what such software claims and how we judge them. It is reality, and can't be ignored. When it comes to operational aspects like log analysis, monitoring, routing, and security, tools currently don't deal well with media types. > > Given all these, even though URI based versioning looks inelegant, > URI > > based versioning is more pragmatic, and is proven to work. > > Things often look inelegant because they are. URI based versioning > can be made to work for some situations and many applications, but not > without a disproportionate level of effort. URI based version is not > more pragmatic, just more common. It has many downsides and the only > thing it has going for it is that it is more common. Despite the > implication otherwise, media type based versioning has been used > successfully in the real world. I won't dispute that, but also not take such a strong position. Media type based versioning is not a one-size-fits-all solution. There are a number of cases where treating representations as resources has operational advantages. Versioning is one of those. Finally, media type based versioning does require complete control of all media types that the server has to deal with. Not all media types are in your control. So, the moment the server is faced with versioning a well-known media type, it will have to mint new URIs for new versions. Subbu
If you read carefully he says "because encrypted pages are not stored by shared caches". In general, encrypted pages will be marked Cache-Control: private, so they will not be stored by intermediate proxies. That's not actually mandated anywhere, i.e. you have to set those headers; mostly SSL connections will bypass intermediate caches anyway, but not always.

Justin

On 17 Sep 2009, at 14:18, Tim Williams wrote:

> Is there a reason a client shouldn't respect the origin server's cache-control if it's over SSL? I don't immediately see anything in HTTP or TLS that indicates I can't, but I came across Mark's cache tutorial[1] where he says, "If the request is authenticated or secure (i.e., HTTPS), it won't be cached." and now I'm wondering if I've missed something. I'm hoping he's simply describing the way things happen to be inside browsers rather than implying the way things should be in service clients.
>
> Thanks,
> --tim
>
> [1] - http://www.mnot.net/cache_docs/
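Justin's point — that caching over TLS comes down to the headers the origin chooses to send, not a prohibition — can be illustrated with an explicit header set. The values below are illustrative only:

```python
# Illustrative only: nothing in HTTP forbids caching responses fetched over
# TLS -- the origin decides by what it sends. These headers let a client's
# *private* cache reuse a secure response for an hour while keeping it out
# of shared caches; the max-age value is just an example.
response_headers = {
    "Cache-Control": "private, max-age=3600",
    "Vary": "Accept",  # cache separately per negotiated representation
}

is_shared_cacheable = "public" in response_headers["Cache-Control"]
print(is_shared_cacheable)  # -> False
```

A service client that respects these directives can safely cache HTTPS responses; what browsers happen to do is a separate question.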
Hi guys, this is an HTTP question, if you feel it's OT please discard :)

The original URLs used + as a shorthand for spaces in query strings. Browsers still implement this feature. Sadly, neither HTTP nor HTML (except for application/x-www-form-urlencoded used as the query string format in the HTML5 spec) imply this should still apply.

Hence my question. Should an HTTP framework decode those + by default for any http URI? I'm a bit split on the issue, as I don't want to implement non-standard features, but I also don't want to p*ss off my users.

Any suggestions?

Seb
Grumble grumble ... Yahoo's message formatting makes it tedious to do nested responses in GMail ... grumble grumble.

On Sun, Sep 20, 2009 at 10:31 PM, Subbu Allamaraju <subbu@...> wrote:

> > > a. Media type versioning assumes that the same server instance supports all versions. But larger systems may not be able to support multiple versions on the same runtime.
> >
> > Many times a single cluster can serve all the versions that are supported. On the other hand, there is no reason requests could not be dispatched to different environments based on the mime type.
>
> Yes, except that things like load balancers do IP based routing. They don't go so far as looking at HTTP headers.

However, the server instance that sees a request for a non-supported version is still free to redirect the client to a server instance that does know how to respond to that version.

> > > b. It further assumes versioning changes can be represented by representations. In reality, versioning changes do bring in new resources and new processing flows.
> >
> > Of course, many changes require the introduction of new flavors of resources. That would usually indicate the need for a new media type because new flavors of resources obviously mean new (or significantly changed) process flows. I don't really see how either media type or URI based versioning would be better or worse in this situation.
>
> I mean new resources, not flavors of existing resources.

This whole area is why I think that versioning representations is too fine grained to be sufficient. What you really want is for the client to be able to say "I am programmed to assume version X.Y of this entire interface", which can trigger a fairly complex set of semantic adaptations (deleting deprecated representations, and adding fields to existing ones, as well as adding new ones).
In my experience, having the client specify a "spec version" dependency in an HTTP header (without including version information in the media types) has made possible fairly robust support for *all* the kinds of changes you might encounter in API-level version changes. > > > c. The purported benefit of versioning by media types is that client- > > > side databases don't need to be changed since the URIs are the same. > > > This is fine in small systems, but migrating a client from one version > > > to another version may require not just code changes, but database > > > upgrades. This may be due to changes in the information content of > > > representations that the clients need to store,. > > > > Migrating a client from one version of an API to another does often > > require many changes. However, many changes will not necessarily > > invalidate of all the bookmarks (ie, persisted references to > > resources) that clients have collected. URI based versioning > > effectively locks clients that require bookmarking into the version > > they started with. > I wouldn't say locking - URIs will need to be replaced. This is not as > bad as it sounds. There are ways to tackle this. In a fully discoverable HATEOAS API, the details of URI construction should be opaque to the clients, so this should not be an issue. As long as the server understands the version preferences of the client, it can construct appropriate URIs (or return appropriate errors if the client preferences cannot be satisfied). > > > d. Not every HTTP level software can distinguish between > > > representation of a resource. > > > > Maybe, fortunately all the HTTP software i am familiar with has > > support for specifying and retrieving the values of HTTP header > > fields. Any software that does not support this very basic feature > > does not really support HTTP regardless of its claims. With that > > capability you can implement content negotiation pretty trivially. 
> It does not matter what such software claims and how we judge them. It > is reality, and can't be ignored. When it comes to operational aspects > like log analysis, monitoring, routing, and security, tools currently > don't deal well with media types. Why should they have to? > > > Given all these, even though URI based versioning looks inelegant, URI > > > based versioning is more pragmatic, and is proven to work. > > > > Things often look inelegant because they are. URI based versioning > > can be made to work for some situations and many applications, but not > > without a disproportionate level of effort. URI based version is not > > more pragmatic, just more common. It has many downsides and the only > > thing it has going for it is that it is more common. Despite the > > implication otherwise, media type based versioning has been used > > successfully in the real world. > I won't dispute that, but also not take such a strong position. Media > type based versioning is not a one-size-fits-all solution. There are a > number of cases where treating representations as resources has > operational advantages. Versioning is one of those. > Finally, media type based versioning does require complete control of > all media types that the server has to deal with. Not all media types > are in your control. So, the moment the server is faced with > versioning a well-known media type, it will have to mint new URIs for > new versions. As stated above, I have found versioning media types to be insufficient to deal with the kinds of semantic changes to a service that often go along with representation changes -- to say nothing of the fact that services can change their functionality *without* necessarily changing the representations being exchanged. It would be useful if clients could deal with that kind of change too. > Subbu Craig McClanahan
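Craig's suggestion above — a client declaring "I am programmed to assume version X.Y of this entire interface" via a request header — could look roughly like this. The header name `X-Spec-Version` and the version numbers are invented for the sketch; nothing here is a standard:

```python
# Sketch of interface-level versioning via a request header. The client pins
# the version of the whole interface it was written against, and the server
# either adapts its behavior to that version or refuses the request.
SUPPORTED_VERSIONS = ("1.0", "1.1", "2.0")

def negotiate_interface_version(request_headers, default="1.0"):
    """Return the interface version to serve, or None if unsatisfiable."""
    requested = request_headers.get("X-Spec-Version", default)
    return requested if requested in SUPPORTED_VERSIONS else None

print(negotiate_interface_version({"X-Spec-Version": "2.0"}))  # -> 2.0
print(negotiate_interface_version({}))                         # -> 1.0
```

The returned version would then drive the "semantic adaptations" Craig mentions: which representations exist, which fields appear, which links are emitted.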
Sebastien Lambla wrote:
> The original URLs used + as a shorthand for spaces in query strings. Browsers still implement this feature. Sadly, neither HTTP nor HTML (except for application/x-www-form-urlencoded used as the query string format in the HTML5 spec) imply this should still apply.

Where else would application/x-www-form-urlencoded be defined?
I meant that the plus as an alias for the space character is a specificity of app/www (i.e. application/x-www-form-urlencoded), not of URIs or of the HTTP spec (which defines http URIs).

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jon Hanna
Sent: 21 September 2009 10:00
To: Rest List
Subject: Re: [rest-discuss] Slightly OT: Plus sign in querystrings

Sebastien Lambla wrote:
> The original URLs used + as a shorthand for spaces in query strings. Browsers still implement this feature. Sadly, neither HTTP nor HTML (except for application/x-www-form-urlencoded used as the query string format in the HTML5 spec) imply this should still apply.

Where else would application/x-www-form-urlencoded be defined?
wahbedahbe wrote:
>
> Huh? So now it's "HEADERS as the engine of application state"? ;-)
>
> I can see a link header replacing the <link> elements of an Atom entry document or I suppose any other hypermedia link that applies to the entire document/body.

IMO, it's a complement, not a replacement. I definitely see your point that media type + links allows you to compose things. Also, for some of the stuff I'm doing, I'm modeling basic relationships as link headers and using different media types to expand/extend what relationships exist. So the media type is not only a mechanism to transfer state, but a mechanism to extend the relationship model. This was my reasoning for saying that REST-* should "isolate data formats to extensions".

Bill

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
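Bill's "relationships as link headers" idea looks roughly like this on the wire. The URIs and rel names below are invented for illustration; the syntax follows the Link header style (`<target>; rel="relation"`):

```python
# Invented example of modeling basic relationships as an HTTP Link header:
# each link carries a target URI and a rel naming the relationship a client
# can follow, independent of the body's media type.
def format_link(uri, rel):
    return '<%s>; rel="%s"' % (uri, rel)

link_header = ", ".join([
    format_link("http://example.org/orders/42/payment", "payment"),
    format_link("http://example.org/orders/42", "self"),
])
print("Link: " + link_header)
```

Because the links live in a header, they compose with any media type in the body — which is exactly the complement-not-replacement point.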
On Sun, Sep 20, 2009 at 2:45 PM, Subbu Allamaraju <subbu@...> wrote:

> In theory, versioning using media types sounds like a good idea, but in practice, it brings in some operational challenges.
>
> <snip/>
>
> Given all these, even though URI based versioning looks inelegant, URI based versioning is more pragmatic, and is proven to work.

I don't really feel like getting into a "my way is better than your way" debate. Did you get a chance to watch the video? What I wanted to draw attention to was that there are some scenarios where media-type versioning does seem to be the most appropriate choice. Kirk does a way better job of explaining why than I ever could, so I'll leave that to him. The fact that this video is about a real implementation that actually worked is the significant thing in this case.

I'm not trying to say there is only one way of doing versioning and we should always use that way, I'm just saying let's not throw out conneg and media-type versioning as a viable solution. The excuse that RESTful systems can't use media-type versioning because some web browsers don't deliver the right Accept header is bogus, because if we are going to dictate RESTful architectures based on existing client behaviour then we had better toss out PUT and DELETE too. Sure, there will be some systems where this is a deal breaker, but that needs to be decided on a case by case basis.

Out of curiosity, do you have pointers to the scenarios where URI based versioning has worked well? I would be interested to compare the contexts.

Darrel
Should I expose the operations a resource supports, and if so, how should I expose them?

Cheers

Ollie
Sebastien Lambla wrote:
> I meant that the plus as an alias to the space character is a specificity of app/www, not of URIs or of the HTTP spec (which defines http URIs).

I think app/www is the only case where this escaping takes place (well, and perhaps in other such formats that use the same technique). If your code runs before such processing, then it's only necessary to make sure that users can obtain the unescaped query string (because then +, %20 and %2B can be distinguished, as indeed can = and %3D, ; and %3B, and & and %26, all also necessary for processing app/www query strings). Convenience functions for escaping to such a commonly used format would be, well, convenient, and relatively easy to write in many cases.
I wouldn't say you _should_ expose the operations a resource supports, but if you _want_ to, that's what the HTTP OPTIONS method is for. -Eric On Mon, 21 Sep 2009 11:22:14 -0000 "Ollie" wrote: > Should I expose the operations a resources supports & If so how > should I expose the operations? > > > > Cheers > > Ollie > >
Ollie, On Sep 21, 2009, at 1:22 PM, Ollie wrote: > Should I expose the operations a resources supports & If so how > should I expose the operations? > What do you mean by 'operations'? HTTP methods or descriptions of expected resource semantics (e.g. that you create an entry in APP by POSTing to a collection)? Jan > > > Cheers > > Ollie > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Mon, Sep 21, 2009 at 3:40 AM, Sebastien Lambla <seb@...> wrote:

> The original URLs used + as a shorthand for spaces in querystrings. Browsers still implement this feature. Sadly, neither http nor html (except for app/www-url-formencoded used as querystring in the HTML5 spec) imply this should still apply.
>
> Hence my question. Should an HTTP framework decode those + by default for any http URI? I'm a bit split on the issue, as I don't want to implement non-standard features, but I also don't want to p*ss off my users.

I would think you'd want to decode + to space ONLY when it is part of a query string.

http://www.foo.com/a+b?c=d+e

The plus sign in "a+b" is literal, the client really did intend to send a plus sign. The second one in "d+e" is an escaped space.
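The `/a+b?c=d+e` example above can be checked directly with a URL parser. This sketch uses Python's standard library purely for illustration: percent-decoding applies everywhere, but plus-as-space is a form-encoding rule that only makes sense in the query string.

```python
# "+" is a literal plus in the path, but an escaped space under the
# form-encoding rules conventionally applied to query strings.
from urllib.parse import urlsplit, unquote, parse_qsl

parts = urlsplit("http://www.foo.com/a+b?c=d+e")

path = unquote(parts.path)            # percent-decoding only: "+" survives
query = dict(parse_qsl(parts.query))  # form decoding: "+" becomes a space

print(path)   # -> /a+b
print(query)  # -> {'c': 'd e'}
```

A framework that decoded `+` in the path as well would corrupt URIs that legitimately contain plus signs.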
So is the end client meant to determine the usage of the service by the status codes returned when trying an operation on a resource? I.e. if a resource is read-only - no POST, PUT or DELETE (when using HTTP) - then the corresponding HTTP error code is returned and the end user interprets this as required...

Cheers

Ollie

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> I wouldn't say you _should_ expose the operations a resource supports, but if you _want_ to, that's what the HTTP OPTIONS method is for.
>
> -Eric
>
> On Mon, 21 Sep 2009 11:22:14 -0000 "Ollie" wrote:
>
> > Should I expose the operations a resources supports & If so how should I expose the operations?
> >
> > Cheers
> >
> > Ollie
Some of the resources I'm working with don't have a method/operation that maps onto PUT, POST, DELETE if I'm using HTTP as the transport.

Should the end user of the service have some way to find out the operations supported by a resource, or should I just return the appropriate status code?

Cheers

Ollie

--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> Ollie,
>
> On Sep 21, 2009, at 1:22 PM, Ollie wrote:
>
> > Should I expose the operations a resources supports & If so how should I expose the operations?
>
> What do you mean by 'operations'? HTTP methods or descriptions of expected resource semantics (e.g. that you create an entry in APP by POSTing to a collection)?
>
> Jan
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
Ollie,

On Sep 21, 2009, at 3:33 PM, Ollie wrote:

> For some of the resources I'm working with don't have a method\operation that maps on to PUT, POST, DELETE

Think differently: design your resources in a way that you can achieve your goals with the uniform interface (GET, PUT, POST, DELETE).

> if I'm using HTTP as the transport.

Bbzzzz - sorry, this rings the buzzer :-) You have to make sure you understand that HTTP is not a transport protocol but an application protocol. You do not layer application semantics on top of HTTP. HTTP is used to trans*fer* resource state between client and server.

> Should the end user of the service have some way to find out the operations supported by a resource, or should I just return the appropriate status code?

Think differently: a client understands the semantics of the links it encounters in representations it receives from the server and follows the appropriate links to proceed through the (Web-) application. It is really no different than you making a purchase at Amazon, except that in the machine-to-machine case the client code needs to be aware of the link semantics, e.g. where to POST an order.

HTH,
Jan

> Cheers
>
> Ollie
>
> --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>>
>> Ollie,
>>
>> On Sep 21, 2009, at 1:22 PM, Ollie wrote:
>>
>>> Should I expose the operations a resources supports & If so how should I expose the operations?
>>
>> What do you mean by 'operations'? HTTP methods or descriptions of expected resource semantics (e.g. that you create an entry in APP by POSTing to a collection)?
>>
>> Jan
>>
>> --------------------------------------
>> Jan Algermissen
>>
>> Mail: algermissen@...
>> Blog: http://algermissen.blogspot.com/ >> Home: http://www.jalgermissen.com >> -------------------------------------- >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Mon, Sep 21, 2009 at 2:17 AM, Craig McClanahan <craigmcc@...> wrote:
>
> This whole area is why I think that versioning representations is too fine grained to be sufficient. What you really want is for the client to be able to say "I am programmed to assume version X.Y of this entire interface", which can trigger a fairly complex set of semantic adaptations (deleting deprecated representations, and adding fields to existing ones, as well as adding new ones).

I completely agree. Regardless of the versioning approach used, it should specify the version of the entire interface, not just the representations. However, the hypermedia (HATEOAS) constraint usually (maybe always?) reduces that distinction to nothing. Media types specify not just the format of the representation, but also the semantics of that representation. Those include the semantics of traversing the links in that representation. When you take all that together you are actually talking about the version of the application as a whole.

--
Peter Williams
http://barelyenough.org
It means that if you want to let clients discover whether an operation *can* be done, the OPTIONS method is the way to go. If the returned message tells you the resource only supports GET, you now know that it is read-only. If a client still tries an operation that is not allowed, issue a 405.

S

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of ollie.riches@...
Sent: 21 September 2009 14:32
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: REST exposing supported operations...

So is the end clients meant to determine the usage of the service by the status code returns when trying an operation on a resource? i.e. if a resource is read only - no POST, PUT or DELETE (when using HTTP) then corresponding HTTP error code is returned and the end user interprets this as required...

Cheers

Ollie

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> I wouldn't say you _should_ expose the operations a resource supports, but if you _want_ to, that's what the HTTP OPTIONS method is for.
>
> -Eric
>
> On Mon, 21 Sep 2009 11:22:14 -0000 "Ollie" wrote:
>
> > Should I expose the operations a resources supports & If so how should I expose the operations?
> >
> > Cheers
> >
> > Ollie
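Seb's advice can be sketched as a tiny dispatcher. The resource table and paths below are invented for the example: OPTIONS advertises the allowed methods, and any disallowed method gets a 405 that carries an Allow header, as HTTP requires for 405 responses.

```python
# Sketch: per-resource method table (invented), OPTIONS discovery, and 405
# with an Allow header for anything not supported.
ALLOWED_METHODS = {"/orders/42": {"GET", "HEAD", "OPTIONS"}}

def handle(method, path):
    allowed = ", ".join(sorted(ALLOWED_METHODS.get(path, ())))
    if method == "OPTIONS":
        return 200, {"Allow": allowed}          # advertise what's possible
    if method not in ALLOWED_METHODS.get(path, ()):
        return 405, {"Allow": allowed}          # Method Not Allowed
    return 200, {}

print(handle("OPTIONS", "/orders/42"))  # -> (200, {'Allow': 'GET, HEAD, OPTIONS'})
print(handle("DELETE", "/orders/42"))   # -> (405, {'Allow': 'GET, HEAD, OPTIONS'})
```

A client that only ever GETs this resource never needs the OPTIONS round trip; the 405 path covers clients that guess wrong.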
> Think differently: a client understands the semantics of the links it encounters in representations it receives from the server and follows the appropriate links to proceed through the (Web-) application. It is really not different than you making a purchase at Amazon, except that in the machine to machine case the client code needs to be aware of the link semantics, e.g. where to POST an order.

I just don't get this... I'm not interested in how the end user uses the links I've provided in my XML/JSON representation of a resource - I don't have anything to do with presentation. I just provide links to access resources, not the operations that are acceptable. So I don't understand how they know what they can do... Plus, when I delete an order from Amazon, is it doing an HTTP DELETE operation with the link directly, or is it doing an HTTP POST on a button action which is then interpreted as an HTTP DELETE on the server and forwarded to the service...

Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071

-----Original Message-----
From: Jan Algermissen [mailto:algermissen1971@...]
Sent: 21 September 2009 14:48
To: RICHES, Oliver, GBM
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Re: REST exposing supported operations...

Ollie,

On Sep 21, 2009, at 3:33 PM, Ollie wrote:

> For some of the resources I'm working with don't have a method\operation that maps on to PUT, POST, DELETE

Think differently: design your resources in a way that you can achieve the goals with the uniform interface (GET, PUT, POST, DELETE).

> if I'm using HTTP as the transport.

Bbzzzz - sorry, this rings the buzzer :-) You have to make sure you understand that HTTP is not a transport protocol but an application protocol. You do not layer application semantics on top of HTTP. HTTP is used to trans*fer* resource state between client and server.
> > Should the end user of the service have some way to find out the > operations supported by a reosurce or should I just return the > appropriate status code? Think differently: a client understands the semantics of the links it encounters in representations it receives from the server and follows the appropriate links to proceed through the (Web-) application. It is really not different than you making a purchase at Amazon, except that in the machine to machine case the client code needs to be aware of the link semantics, e.g. where to POST an order. HTH, Jan > > Cheers > > Ollie > > --- In rest-discuss@yahoogroups.com, Jan Algermissen > <algermissen1971@...> wrote: >> >> Ollie, >> >> On Sep 21, 2009, at 1:22 PM, Ollie wrote: >> >>> Should I expose the operations a resources supports & If so how >>> should I expose the operations? >>> >> >> What do you mean by 'operations'? HTTP methods or descriptions of >> expected resource semantics (e.g. that you create an entry in APP by >> POSTing to a collection)? >> >> Jan >> >> >> >>> >>> >>> Cheers >>> >>> Ollie >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> -------------------------------------- >> Jan Algermissen >> >> Mail: algermissen@... >> Blog: http://algermissen.blogspot.com/ >> Home: http://www.jalgermissen.com >> -------------------------------------- >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com -------------------------------------- *********************************************************************************** The Royal Bank of Scotland plc. Registered in Scotland No 90312. Registered Office: 36 St Andrew Square, Edinburgh EH2 2YB. Authorised and regulated by the Financial Services Authority. This e-mail message is confidential and for use by the addressee only. 
If the message is received by anyone other than the addressee, please return the message to the sender by replying to it and then delete the message from your computer. Internet e-mails are not necessarily secure. The Royal Bank of Scotland plc does not accept responsibility for changes made to this message after it was sent. Whilst all reasonable care has been taken to avoid the transmission of viruses, it is the responsibility of the recipient to ensure that the onward transmission, opening or use of this message and any attachments will not adversely affect its systems or data. No responsibility is accepted by The Royal Bank of Scotland plc in this regard and the recipient should carry out such virus and other checks as it considers appropriate. Visit our website at www.rbs.com ***********************************************************************************
thanks seb for the clarity...

Any recommendations for a REST framework for .Net that's not WCF...

Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071

-----Original Message-----
From: Sebastien Lambla [mailto:seb@...]
Sent: 21 September 2009 15:14
To: RICHES, Oliver, GBM; rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Re: REST exposing supported operations...

It means that if you want to let clients discover if an operation *can* be done, using the OPTIONS method is the way to go. If the returned message tells you teh resource only supports GET, you now know that it is read-only. If a client still tries an operation that is not allowed, issue a 405.

S

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of ollie.riches@...
Sent: 21 September 2009 14:32
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: REST exposing supported operations...

So is the end clients meant to determine the usage of the service by the status code returns when trying an operation on a resource? i.e. if a resource is read only - no POST, PUT or DELETE (when using HTTP) then corresponding HTTP error code is returned and the end user interprets this as required...

Cheers

Ollie

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> I wouldn't say you _should_ expose the operations a resource supports, but if you _want_ to, that's what the HTTP OPTIONS method is for.
>
> -Eric
>
> On Mon, 21 Sep 2009 11:22:14 -0000 "Ollie" wrote:
>
> > Should I expose the operations a resources supports & If so how should I expose the operations?
> >
> > Cheers
> >
> > Ollie
Sometimes I wonder if the people on this list actually work, as in doing practical things and not just theoretical ones, because sometimes I see such complicated answers to such simple questions.

In HTTP, if you send an OPTIONS request to a URI, you get an answer listing the verbs it supports. In your case, OPTIONS will return just GET (of the four you mentioned).

If a user-agent sends a PUT, POST, or DELETE to that URI, the server will respond with 405 Method Not Allowed. That's it. The REST theorists will tell you that this shouldn't happen, because in a REST application all those URIs/verbs are driven by hypertext, meaning you don't know the URIs and/or the verbs in advance but should only follow the links that the server sends back to you.

But in fact sometimes things work in practice and not in theory, so there are situations where that cannot happen - for example the "few well-known URIs that are the entry point of an application" - and there are more...

Now, to be REST and not just HTTP: where I said URI you should read Resource, and where I said "links" you should read "hypertext embedded in the representation of the resource that should drive the application state changes". I'm sure the good theorists in here will have a way of adding at least "media type" or "content negotiation" and other valuable concepts, but for me, who actually has to use these things in practice, I find it much better to start from the bottom with simple things and work up from there...

Ollie wrote:
>
>
> Some of the resources I'm working with don't have a
> method/operation that maps onto PUT, POST, DELETE if I'm using HTTP
> as the transport.
>
> Should the end user of the service have some way to find out the
> operations supported by a resource or should I just return the
> appropriate status code? 
> > Cheers > > Ollie > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Jan Algermissen > <algermissen1971@...> wrote: > > > > Ollie, > > > > On Sep 21, 2009, at 1:22 PM, Ollie wrote: > > > > > Should I expose the operations a resources supports & If so how > > > should I expose the operations? > > > > > > > What do you mean by 'operations'? HTTP methods or descriptions of > > expected resource semantics (e.g. that you create an entry in APP by > > POSTing to a collection)? > > > > Jan > > > > > > > > > > > > > > > Cheers > > > > > > Ollie > > > > > > > > > > > > ------------------------------------ > > > > > > Yahoo! Groups Links > > > > > > > > > > > > > -------------------------------------- > > Jan Algermissen > > > > Mail: algermissen@... > > Blog: http://algermissen.blogspot.com/ > <http://algermissen.blogspot.com/> > > Home: http://www.jalgermissen.com <http://www.jalgermissen.com> > > -------------------------------------- > > > >
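António's OPTIONS/405 recipe can be sketched end to end with Python's standard library. The server and client below are purely illustrative (the path, the XML body, and the handler names are made up, not from this thread): a read-only resource advertises its verbs via the Allow header on OPTIONS and answers 405 to everything else.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import http.client
import threading

ALLOWED = "GET, HEAD, OPTIONS"

class ReadOnlyHandler(BaseHTTPRequestHandler):
    """A resource that only supports reads, as in Ollie's scenario."""

    def _reply(self, code):
        self.send_response(code)
        self.send_header("Allow", ALLOWED)  # advertise the supported verbs
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_OPTIONS(self):  # discovery: "what can I do here?"
        self._reply(200)

    def do_GET(self):
        body = b"<person/>"
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    # any write attempt gets 405 Method Not Allowed
    do_PUT = do_POST = do_DELETE = lambda self: self._reply(405)

    def log_message(self, *args):  # keep the demo quiet
        pass

def ask(port, method):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request(method, "/rest/data/person/101")
    resp = conn.getresponse()
    resp.read()
    conn.close()
    return resp.status, resp.getheader("Allow")

server = HTTPServer(("127.0.0.1", 0), ReadOnlyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

status, allow = ask(server.server_port, "OPTIONS")
status405, _ = ask(server.server_port, "DELETE")
server.shutdown()
print(status, allow)   # OPTIONS tells the client what is supported
print(status405)       # DELETE is simply refused with 405
```

Nothing here requires the client to know the verbs in advance; it learns them from the Allow header, which is the whole point of the OPTIONS approach.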
On Sun, Sep 20, 2009 at 11:31 PM, Subbu Allamaraju <subbu@...> wrote:
>
>> Migrating a client from one version of an API to another does often
>> require many changes. However, many changes will not necessarily
>> invalidate all the bookmarks (i.e., persisted references to
>> resources) that clients have collected. URI-based versioning
>> effectively locks clients that require bookmarking into the version
>> they started with.
>
> I wouldn't say locking - URIs will need to be replaced. This is not as bad
> as it sounds. There are ways to tackle this.

How exactly would you tackle it? Cross-system referencing is often the result of human interaction (e.g., search and pick from a list) rather than some deterministic process. In that situation the only way I can think of for a client to move from one version to another is for the origin system to provide a mechanism to map URIs from one version to URIs in another. While that might be feasible, it does not strike me as a good time, particularly if you had a large number of persisted URIs that needed to be mapped.

--
Peter Williams
http://barelyenough.org
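The mapping mechanism Peter mentions could be as thin as the origin server keeping old-version URIs alive and redirecting them. A minimal sketch, assuming an invented /v1/ to /v2/ path scheme (none of these URIs come from the thread), where the server would answer the old URI with a 301 to the new one so persisted bookmarks keep working:

```python
from urllib.parse import urlsplit, urlunsplit

def migrate_uri(uri, old_prefix="/v1/", new_prefix="/v2/"):
    """Map an old-version URI to its current equivalent.

    Returns the status code the server would send (301 for a moved
    bookmark, 200 if the URI is already current) plus the target URI.
    The prefix scheme is hypothetical, for illustration only.
    """
    parts = urlsplit(uri)
    if parts.path.startswith(old_prefix):
        new_path = new_prefix + parts.path[len(old_prefix):]
        return 301, urlunsplit(parts._replace(path=new_path))
    return 200, uri

code, target = migrate_uri("http://example.com/v1/accounts/010123101")
print(code, target)  # -> 301 http://example.com/v2/accounts/010123101
```

Whether this is feasible at scale is exactly Peter's question; the sketch only shows that the mapping itself need not live in the client.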
Quality answer...

I don't understand, when the 'theorists' say

'the REST theorists will tell you that shouldn't happen because a REST application will have all those URI/Verbs being driven by Hipertext,'

how am I meant to insert/associate the verb with/into a URI when returning an XML/JSON representation of a resource?

Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071
You can try OpenRasta <http://openrasta.com/>.

Ryan Riley
ryan.riley@panesofglass.org
http://panesofglass.org/
http://wizardsofsmart.net/

On Mon, Sep 21, 2009 at 9:18 AM, <oliver.riches@...> wrote:
>
> thanks seb for the clarity...
>
> Any recommendations for a REST framework for .Net that's not WCF... 
The "+" style encoding is specified by HTML for the "application/x-www-form-urlencoded" media type. Both HTML 4.01 (http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4) and HTML5 (http://www.w3.org/TR/html5/forms.html#application-x-www-form-urlencoded-encoding-algorithm) describe this. When you use forms with method GET, the "+" character becomes part of the URI. For query parameter data, the framework will have to use HTML encoding rules.

Subbu

On Sep 21, 2009, at 12:40 AM, Sebastien Lambla wrote:

> Hi guys,
>
> this is an HTTP question, if you feel it's OT please discard :)
>
> The original URLs used + as a shorthand for spaces in querystrings.
> Browsers still implement this feature. Sadly, neither HTTP nor HTML
> (except for application/x-www-form-urlencoded used as the querystring in the HTML5
> spec) implies this should still apply.
>
> Hence my question: should an HTTP framework decode those + by
> default for any HTTP URI? I'm a bit split on the issue, as I don't
> want to implement non-standard features, but I also don't want to
> p*ss off my users.
>
> Any suggestions?
>
> Seb
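The two encoding regimes Subbu distinguishes can be seen side by side in Python's urllib.parse, where quote_plus implements the form-encoding rules ("+" means space, a literal "+" must be escaped) and quote implements generic URI percent-encoding (space becomes %20 and "+" is left alone on decode). A small demonstration:

```python
from urllib.parse import parse_qs, quote, quote_plus, unquote, unquote_plus

# application/x-www-form-urlencoded rules (what HTML forms produce):
# space -> "+", and a literal "+" must be percent-escaped.
print(quote_plus("a b+c"))            # a+b%2Bc
print(parse_qs("q=a+b%2Bc"))          # {'q': ['a b+c']}

# Generic URI percent-encoding rules:
# space -> %20, "+" is escaped on encode but preserved on decode.
print(quote("a b+c"))                 # a%20b%2Bc
print(unquote("a%20b+c"))             # a b+c  ("+" passes through untouched)
print(unquote_plus("a+b%2Bc"))        # a b+c
```

This is exactly why a framework decoding "+" unconditionally for *any* URI component would be wrong: the shorthand is only defined for form-encoded data, typically the query string.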
I can see Link headers being useful if you need to use a representation format that doesn't support linking. But preferring HTTP Link headers over links in your hypermedia format seems backwards to me. It goes against the established practice of the Web, doesn't it? It also seems like the first step in collapsing the protocol space down to just HTTP. REST allows other protocols (e.g. FTP and, in the future, Waka?) to be used. Isn't that in part why the URI starts with a scheme identifier? Pulling the links out of the hypermedia and into the protocol headers makes it harder to use your hypermedia format with other protocols, and is in conflict with some of the design principles of the Web, exemplified by the URI, no?

Andrew

On Mon, Sep 21, 2009 at 7:30 AM, Bill Burke <bburke@...> wrote:
>
> wahbedahbe wrote:
>
>> Huh?
>> So now it's "HEADERS as the engine of application state"? ;-)
>>
>> I can see a link header replacing the <link> elements of an Atom entry
>> document or I suppose any other hypermedia link that applies to the entire
>> document/body.
>
> IMO, it's a complement, not a replacement. I definitely see your point that
> media type + links allows you to compose things. Also, for some of the
> stuff I'm doing, I'm modeling basic relationships as link headers and using
> different media types to expand/extend what relationships exist. So the
> media type is not only a mechanism to transfer state, but a mechanism to
> extend the relationship model.
>
> This was my reasoning for saying that REST-* should "isolate data formats
> to extensions".
>
> Bill
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com

--
Andrew Wahbe
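For concreteness, the Link header Bill describes (later standardized as Web Linking, RFC 5988) carries the same rel/target pairs that would otherwise live in the body. A deliberately simplified parser sketch - the regex covers only the common `<target>; rel="..."` shape, not the full grammar, and the URIs are invented:

```python
import re

def parse_link_header(value):
    """Extract rel -> target pairs from a Link header value.

    Simplified illustration: handles '<uri>; rel="name"' entries
    separated by commas, nothing more.
    """
    links = {}
    for target, params in re.findall(r'<([^>]*)>([^,]*)', value):
        rel = re.search(r'rel="([^"]*)"', params)
        if rel:
            links[rel.group(1)] = target
    return links

hdr = ('<http://example.com/orders/42>; rel="self", '
       '<http://example.com/orders/42/cancel>; rel="cancel"')
print(parse_link_header(hdr))
```

Andrew's objection in code terms: this relation map now lives in HTTP machinery rather than in the representation, so a client fetching the same resource over another protocol would lose it.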
Well, AFAIK, there is no "standard" way to do that; it depends on what
you and the users/clients of your app agree to.
There is such a thing as a uniform interface (GET, POST, PUT, DELETE in
HTTP; these plus a few more in WebDAV, or you can actually define
your own). It is uniform because once you define what the methods
mean and what they do, you are bound to always use them with that
meaning and to do what is defined.
This is the first part. The second is to agree on a common format that
also has its own meaning, or semantics as they say in REST. Now I'm not
a specialist, far from it, and I recommend reading some blogs from
people on this list (Subbu's comes to the top of my head, but there
are several others), but basically you either use an already existing
format like Atom or you (and your users/clients) define your own. In
JSON I've seen a data structure like this used to define a "link":
{
  "name": "Get an example",
  "verb": "GET",
  "uri": "http://example.com/resources/123",
  "media_type": "application/vnd.com.example.Resource+json"
}
I hope I didn't say anything too wrong; I hope someone corrects me if
I did.
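A client consuming such a "link" object never hard-codes the URI or the verb; it just acts on what the representation told it. A minimal sketch, assuming the link structure above (the dispatch helper and its name are illustrative, not a standard):

```python
import urllib.request

# The hypermedia link as it might arrive inside a JSON representation.
link = {
    "name": "Get an example",
    "verb": "GET",
    "uri": "http://example.com/resources/123",
    "media_type": "application/vnd.com.example.Resource+json",
}

def request_from_link(link):
    """Build an HTTP request purely from the hypermedia control:
    URI, verb, and media type all come from the server's response,
    never from client-side knowledge."""
    return urllib.request.Request(
        link["uri"],
        method=link["verb"],
        headers={"Accept": link["media_type"]},
    )

req = request_from_link(link)
print(req.get_method(), req.full_url)
```

If the server later moves the resource or changes the verb, only the link object in the representation changes; this client code does not.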
oliver.riches@... wrote:
>
>
> Quality answer...
>
> I don't understand when the 'theorists' say
>
> 'the REST theorists will tell you that shouldn't happen because a REST
> application will have all those URI/Verbs being driven by Hipertext,'
>
> but how am I meant to insert/associate the verb with/into a URI when
> returning an XML/JSON representation of a resource?
>
> Ollie Riches
> RBS Global Banking & Markets
> Office: +44 203 361 4071
>
> The excuse that RESTful systems can't use media-type versioning
> because some web browsers don't deliver the right Accept header is
> bogus, because if we are going to dictate RESTful architectures based
> on existing client behaviour then we had better toss out PUT and
> DELETE too. Sure there will be some systems where this is a deal
> breaker, but that needs to be decided on a case-by-case basis.

Yes, we continue to toss out PUT and DELETE for web apps. What's the point of an architecture if all tradeoffs are ignored? REST is not a non-negotiable style.

When all the media types are application-specific (i.e. custom), and all bits of the infrastructure can deal with those media types, then media-type-based versioning works just fine. But these conditions don't always hold.

> Out of curiosity, do you have pointers to the scenarios where URI-
> based versioning has worked well. I would be interested to compare
> the contexts.

Sorry, I don't have one on the public internet.

Subbu
On Sep 21, 2009, at 4:15 PM, oliver.riches@... wrote:

>> Think differently: a client understands the semantics of the links
>> it encounters in representations it receives from the server and
>> follows the appropriate links to proceed through the (Web-)
>> application.
>
>> It is really not different than you making a purchase at Amazon,
>> except that in the machine-to-machine case the client
>> code needs to be aware of the link semantics, e.g. where to POST an
>> order.
>
> I just don't get this...
>
> I'm not interested in how the end-user uses the links I've provided
> in my XML/JSON representation of a resource - I don't have anything
> to do with presentation. I just provide links to access resources,
> not the operations that are acceptable.
>
> So I don't understand how they know what they can do...

Take a look at the Atom Publishing Protocol specification[1] as an example. The specification of media types and link semantics tells the client implementor what to look for in responses and what HTTP calls to make on which resources.

If the media type you are using does not have the necessary semantics (e.g. application/xml or application/json do not, while application/atomsvc+xml does), you cannot convey the necessary semantics to the client.

> plus when I delete an order from Amazon, is it doing an HTTP DELETE
> operation with the link directly, or is it doing an HTTP POST on a
> button action which is then interpreted as an HTTP DELETE on the
> server and forwarded to the service...

The use of POST partly has to do with browser capabilities and with the collaboration style. Canceling an order is more than just deleting the order resource. Such things are better done with POSTs and explicit documents (e.g. an order cancellation).

Jan

[1] http://tools.ietf.org/html/rfc5023

> Ollie Riches
> RBS Global Banking & Markets
> Office: +44 203 361 4071
>
> -----Original Message-----
> From: Jan Algermissen [mailto:algermissen1971@...] 
> Sent: 21 September 2009 14:48
> To: RICHES, Oliver, GBM
> Cc: rest-discuss@yahoogroups.com
> Subject: Re: [rest-discuss] Re: REST exposing supported operations...
>
> Ollie,
>
> On Sep 21, 2009, at 3:33 PM, Ollie wrote:
>
>> Some of the resources I'm working with don't have a method/
>> operation that maps onto PUT, POST, DELETE
>
> Think differently: design your resources in a way that you can
> achieve the goals with the uniform interface (GET, PUT, POST, DELETE).
>
>> if I'm using HTTP as the transport.
>
> Bbzzzz - sorry, this rings the buzzer :-) You have to make sure you
> understand that HTTP is not a transport protocol but an application
> protocol. You do not layer application semantics on top of HTTP.
> HTTP is used to trans*fer* resource state between client and server.
>
>> Should the end user of the service have some way to find out the
>> operations supported by a resource, or should I just return the
>> appropriate status code?
>
> Think differently: a client understands the semantics of the links
> it encounters in representations it receives from the server and
> follows the appropriate links to proceed through the (Web-)
> application.
>
> It is really not different than you making a purchase at Amazon,
> except that in the machine-to-machine case the client code needs to
> be aware of the link semantics, e.g. where to POST an order.
>
> HTH,
> Jan
>
>> Cheers
>>
>> Ollie
>>
>> --- In rest-discuss@yahoogroups.com, Jan Algermissen
>> <algermissen1971@...> wrote:
>>>
>>> Ollie,
>>>
>>> On Sep 21, 2009, at 1:22 PM, Ollie wrote:
>>>
>>>> Should I expose the operations a resource supports & if so, how
>>>> should I expose the operations?
>>>
>>> What do you mean by 'operations'? HTTP methods or descriptions of
>>> expected resource semantics (e.g. that you create an entry in APP by
>>> POSTing to a collection)? 
>>>
>>> Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Thanks Jan, I will look at Atom; your info has been very helpful...

I'm starting to think the use of a DELETE operation is not as common as it might appear, because you can POST a resource that happens to encapsulate the operation - an 'order cancellation' resource POSTed to cancel an order in a booking system...

Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071
On Mon, Sep 21, 2009 at 9:53 AM, Subbu Allamaraju <subbu@...> wrote:
>
>
>
> > The excuse that RESTful systems can't use media-type versioning
> > because some web browsers don't deliver the right accept header is
> > bogus because if we are going to dictate restful architectures based
> > on existing client behaviour then we had better toss out PUT and
> > DELETE too. Sure there will be some systems where this is a deal
> > breaker, but that needs to be decided on a case by case basis.
>
> Yes, we continue to toss out PUT and DELETE for web apps. What's the
> point of an architecture if all tradeoffs are ignored? REST is not a
> non-negotiable style.
Just a bit of a side note.... In my experience, *not* using PUT and
DELETE is unnecessary. If we keep overloading/misusing POST, we're
just creating a messy eco-system for REST. I've had no problem with
any of the major browsers "hijacking" a form to perform a real http
DELETE using a touch of javascript/xhr. Same for PUT -- if you have a
template generating your html, the "action" attribute of the form can
quite easily be the actual URI of the resource being edited. Then a
simple myform.onsubmit = function() { ajaxlib.ajax(this.action, 'PUT',
this.serialize_as_atom()); } does the trick.
I think it makes for a better RESTful interface, and all of the other
tools I use to interact w/ the app (curl or whatever) work just as
I'd hope.
If I need/want to stay in the browser for testing and such, there's
Firebug to trace all of this XHR HTTP and plugins like Poster which
allow me to do any HTTP interaction (w/ all 4 verbs).
--peter
>
> When all the media types are application-specific (i.e. custom), and
> all bits of the infrastructure can deal with those media types, then
> media-type-based versioning works just fine. But these conditions
> don't always hold.
>
> > Out of curiosity, do you have pointers to the scenarios where URI
> > based versioning has worked well. I would be interested to compare
> > the contexts.
>
> Sorry, I don't have one on the public internet.
>
> Subbu
>
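Peter's point - that PUT and DELETE need nothing special once you leave the browser - can be sketched server-side in a few lines. This is a hypothetical illustration (the order resource and paths are made up, not taken from the thread), using Python's stdlib WSGI plumbing; any client, curl included, could drive it with all four verbs:

```python
# Minimal WSGI sketch of a resource supporting GET, PUT and DELETE.
# Hypothetical "orders" store; nothing browser-specific is required.
import io
from wsgiref.util import setup_testing_defaults

ORDERS = {"/orders/42": b"<order><status>accepted</status></order>"}

def app(environ, start_response):
    method = environ["REQUEST_METHOD"]
    path = environ["PATH_INFO"]
    if method == "PUT":
        # PUT replaces (or creates) the representation at this URI.
        length = int(environ.get("CONTENT_LENGTH") or 0)
        ORDERS[path] = environ["wsgi.input"].read(length)
        start_response("204 No Content", [])
        return [b""]
    if path not in ORDERS:
        start_response("404 Not Found", [])
        return [b""]
    if method == "GET":
        start_response("200 OK", [("Content-Type", "application/xml")])
        return [ORDERS[path]]
    if method == "DELETE":
        del ORDERS[path]
        start_response("204 No Content", [])
        return [b""]
    start_response("405 Method Not Allowed", [("Allow", "GET, PUT, DELETE")])
    return [b""]

def call(method, path, body=b""):
    """Drive the app in-process, the way any non-browser client could."""
    environ = {"wsgi.input": io.BytesIO(body),
               "CONTENT_LENGTH": str(len(body))}
    setup_testing_defaults(environ)
    environ["REQUEST_METHOD"] = method
    environ["PATH_INFO"] = path
    status = []
    data = b"".join(app(environ, lambda s, h: status.append(s)))
    return status[0], data
```

The browser-side form hijack Peter describes just makes an ordinary user agent send the same PUT/DELETE requests this sketch already accepts.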
OpenRasta, which I happen to have written, has plenty of good
reviews...

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of oliver.riches@...
Sent: 21 September 2009 15:18
To: rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Re: REST exposing supported operations...

thanks seb for the clarity... Any recommendations for a REST framework
for .Net that's not WCF...

Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071

-----Original Message-----
From: Sebastien Lambla [mailto:seb@...]
Sent: 21 September 2009 15:14
To: RICHES, Oliver, GBM; rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Re: REST exposing supported operations...

It means that if you want to let clients discover if an operation
*can* be done, using the OPTIONS method is the way to go. If the
returned message tells you the resource only supports GET, you now
know that it is read-only. If a client still tries an operation that
is not allowed, issue a 405.

S

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of ollie.riches@...
Sent: 21 September 2009 14:32
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: REST exposing supported operations...

So are end clients meant to determine the usage of the service by the
status codes returned when trying an operation on a resource? i.e. if
a resource is read-only - no POST, PUT or DELETE (when using HTTP) -
then the corresponding HTTP error code is returned and the end user
interprets this as required...

Cheers

Ollie

--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> I wouldn't say you _should_ expose the operations a resource supports,
> but if you _want_ to, that's what the HTTP OPTIONS method is for.
>
> -Eric
>
> On Mon, 21 Sep 2009 11:22:14 -0000
> "Ollie" wrote:
>
> > Should I expose the operations a resource supports & if so, how
> > should I expose the operations?
> >
> > Cheers
> >
> > Ollie
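Sebastien's OPTIONS/405 advice reduces to a small dispatch rule. The resources and method sets below are hypothetical; the behaviour follows HTTP's semantics, where a 405 response carries an Allow header listing what the resource does support:

```python
# Sketch of OPTIONS discovery and 405 rejection for per-resource
# method support (hypothetical resource table).
SUPPORTED = {
    "/customer/21": {"GET"},          # a read-only resource
    "/orders/": {"GET", "POST"},
}

def respond(method, path):
    """Return (status, headers) for a request against the table above."""
    allowed = SUPPORTED.get(path)
    if allowed is None:
        return "404 Not Found", {}
    allow = {"Allow": ", ".join(sorted(allowed))}
    if method == "OPTIONS":
        # OPTIONS advertises what the resource supports.
        return "200 OK", allow
    if method not in allowed:
        # Rejected methods get 405 plus the Allow header.
        return "405 Method Not Allowed", allow
    return "200 OK", {}
```

A client probing with OPTIONS sees `Allow: GET` on the read-only resource and can conclude, as Sebastien says, that it is read-only before ever attempting a write.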
On Sep 21, 2009, at 5:00 PM, oliver.riches@... wrote:

> Thanks Jan, I will look at Atom; your info has been very helpful...

Glad to be helpful - it takes a while to get into the right mind set
(as far as I experienced it).

> I'm starting to think the use of a DELETE operation is not as common
> as it appears it might be - because you can POST a resource that
> happens to encapsulate the operation - an 'order cancellation'
> resource posted to cancel an order in a booking system...

The issue here is that client and server must not be coupled by
hardcoding a set of URIs to use; rather, the client *discovers* the
URIs from the server's responses. This allows the server to change
without breaking the client. REST aims at maximizing this decoupling.

You might have something like this (all hypothetical and just a
sketch!):

-> POST /orders/
   Content-Type: application/procurement   (hypothetical media type)

   <order>
     <item>...</item>
     <item>...</item>
   </order>

<- 201 Created
   Location: /orders/42
   Content-Type: application/procurement

   <order cancelUri="/orders/42/cancelationProcessor">
     <status>accepted</status>
     <item>...</item>
     <item>...</item>
   </order>

Should the client want to cancel, it has now learned where to send the
cancellation request. This is how hypermedia drives the client's
application state.

Note that the media type application/procurement would need to specify
all this in a way that lets you actually implement a client that can
interact with the server. On the human Web, much of this gap is filled
by the human user, but still, your browser does quite some interesting
things behind the scenes once you think about it (all of this is
defined in the HTML media type and friends).

Hope you can extract the necessary pieces to get your problem solved.

Jan
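A client-side sketch of Jan's point: the only thing the client hardcodes is the *meaning* of the cancelUri attribute (which his hypothetical application/procurement media type would define); the URI itself is discovered from the response, so the server can move it freely. A minimal illustration in Python, reusing his sketched payload:

```python
# Discovering the cancellation URI from the representation instead of
# hardcoding it. The payload mirrors Jan's hypothetical
# application/procurement sketch; nothing here is a real media type.
import xml.etree.ElementTree as ET

response_body = """\
<order cancelUri="/orders/42/cancelationProcessor">
  <status>accepted</status>
  <item>...</item>
</order>"""

def cancel_uri(body):
    """Return the server-supplied cancellation URI, or None if the
    representation offers no such transition."""
    order = ET.fromstring(body)
    return order.get("cancelUri")
```

If the server stops offering cancellation (e.g. once the order has shipped), the attribute simply disappears and the client sees that the transition is no longer available - no out-of-band rules needed.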
Sure, it depends on your semantics. DELETE is more transparent to
intermediaries, which means cache invalidation is automatic, letting
you do plenty of interesting scenarios by intercepting http messages.
POST is more ubiquitous and can be alright in some circumstances.

As long as the client understands the semantics of a link (aka knows
that the one saying _delete customer_ is the one that means, well,
deleting a customer), it can simply follow that link. Then it's just a
matter of letting the client use whatever verb the server instructed
it to use to perform the operation.

Seb
<customer>
  <link rel="http://rels.acme.org/delete" method="delete"
        action="/customer/21" />
</customer>

Or if you want a post:

<customer>
  <link rel="http://rels.acme.org/deactivate" method="post"
        action="/deactivationRequests"
        mediaType="application/vnd.customerml+xml" />
</customer>

That's how you associate verb + link in a representation. It's what
the pompous among the theorists like calling hypermedia controls. Or
what html calls forms. :)

Seb

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of oliver.riches@rbs.com
Sent: 21 September 2009 15:25
To: rest-discuss@yahoogroups.com
Subject: RE: [rest-discuss] Re: REST exposing supported operations...

Quality answer... I don't understand when the 'theorists' say "the
REST theorists will tell you that shouldn't happen because a REST
application will have all those URIs/verbs being driven by hypertext",
but how am I meant to insert/associate the verb with/into the URI when
returning an XML/JSON representation of a resource?

Ollie Riches
RBS Global Banking & Markets
Office: +44 203 361 4071

-----Original Message-----
From: António Mota [mailto:amsmota@...]
Sent: 21 September 2009 15:21
To: RICHES, Oliver, GBM
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Re: REST exposing supported operations...

Sometimes I wonder if the people on this list actually work, as in
doing practical things and not theoretical ones, because sometimes I
see such complicated answers to such simple questions.

In HTTP, if you send an OPTIONS to a URI, you get an answer with the
verbs it supports. In your case, OPTIONS will return just GET (from
the four you mentioned). If a user-agent sends a PUT, POST, or DELETE
to that URI, the server will respond with 405 Method Not Allowed.

That's it. The REST theorists will tell you that shouldn't happen,
because a REST application will have all those URIs/verbs being driven
by hypertext, meaning you don't know in advance the URI and/or the
verbs; you should only follow the links that the server sends back to
you. But in fact, sometimes things work in practice and not in theory,
so there are situations where that cannot happen, for example the "few
well-known URIs that are the entry point of an application", but there
are more...

Now, to be REST and not HTTP: where I said URI you should read
Resource, and where I said "links" you should read "hypertext embedded
in the representation of the resource that should drive the
application state changes", and I'm sure the good theorists in here
will have a way of adding at least "Media-Type" or
"content-negotiation" and other valuable concepts. But for me, who
actually has to use these things in practice, I find it much better to
start from the bottom with simple things and try to go up from
there...
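To make Sebastien's <link> examples concrete, here is a sketch of how a client might select such a control by its rel and learn the verb, target URI and media type entirely from the representation. The rel values and attribute names are his hypothetical ones, not any standardised vocabulary:

```python
# Reading a hypermedia control out of a representation: the client
# knows only the rel it cares about; verb, URI and media type all come
# from the server's document (attribute names follow Sebastien's sketch).
import xml.etree.ElementTree as ET

representation = """\
<customer>
  <link rel="http://rels.acme.org/deactivate" method="post"
        action="/deactivationRequests"
        mediaType="application/vnd.customerml+xml" />
</customer>"""

def find_control(body, rel):
    """Return (method, action, mediaType) for the link with the given
    rel, or None if the representation does not offer that transition."""
    for link in ET.fromstring(body).iter("link"):
        if link.get("rel") == rel:
            return (link.get("method", "get").upper(),
                    link.get("action"),
                    link.get("mediaType"))
    return None
```

The client then issues whatever request the control described; if the server later switches a transition from DELETE to POST-with-a-document, only the representation changes, not the client code.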
Thanks Subbu. I'm very well aware of the app/x-www-form-urlencoded
media type. Its use as the querystring part in a GET verb is also
specified in HTML5, yes.

That said, this adds a specific semantic to the querystring part of an
HTTP URI, and there is no way to detect whether or not the client came
from a browser that passes a media type in the querystring.

My question remains: when receiving a request, and without being able
to assume whether the querystring is encoded the URI way or is in fact
an html production, what default (and non-dangerous) behaviour can be
accepted in all cases?

Seb

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of Subbu Allamaraju
Sent: 21 September 2009 15:38
To: Sebastien Lambla
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Slightly OT: Plus sign in querystrings

The "+" style encoding is specified by HTML for the
"application/x-www-form-urlencoded" media type. Both HTML 4.01
(http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4) and HTML5
(http://www.w3.org/TR/html5/forms.html#application-x-www-form-urlencoded-encoding-algorithm)
describe this. When you use forms with method GET, the "+" character
becomes part of the URI. For query parameter data, the framework will
have to use HTML encoding rules.

Subbu

On Sep 21, 2009, at 12:40 AM, Sebastien Lambla wrote:

> Hi guys,
>
> this is an HTTP question, if you feel it's OT please discard :)
>
> The original URLs used + as a shorthand for spaces in querystrings.
> Browsers still implement this feature. Sadly, neither http nor html
> (except for app/x-www-form-urlencoded used as the querystring in the
> HTML5 spec) imply this should still apply.
>
> Hence my question. Should an HTTP framework decode those + by
> default for any http URI? I'm a bit split on the issue, as I don't
> want to implement non-standard features, but I also don't want to
> p*ss off my users.
>
> Any suggestions?
>
> Seb
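The two decoding behaviours Seb is weighing can be shown side by side with Python's standard library, purely as an illustration: form-urlencoded rules treat "+" as a space, while generic URI percent-decoding leaves it alone, which is exactly why a framework cannot tell the two apart from the bytes on the wire.

```python
# application/x-www-form-urlencoded decoding vs. generic URI
# percent-decoding of the same querystring (sample values made up).
from urllib.parse import parse_qs, unquote, unquote_plus

query = "q=rest+in+practice&tag=a%2Bb"

# Form-urlencoded interpretation (what a browser's GET form produces):
form = parse_qs(query)
assert form["q"] == ["rest in practice"]   # "+" decoded as a space
assert form["tag"] == ["a+b"]              # %2B is a literal plus

# Generic URI percent-decoding leaves "+" untouched:
assert unquote("rest+in+practice") == "rest+in+practice"

# unquote_plus applies the form rules explicitly:
assert unquote_plus("rest+in+practice") == "rest in practice"
```

A framework that applies `unquote_plus` to every URI silently corrupts any resource whose name legitimately contains "+", which is the trap behind Seb's "non-dangerous default" question.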
So all my links should be annotated like this or in a similar manner? Ollie Riches RBS Global Banking & Markets Office: +44 203 361 4071 -----Original Message----- From: Sebastien Lambla [mailto:seb@...] Sent: 21 September 2009 16:34 To: RICHES, Oliver, GBM; rest-discuss@yahoogroups.com Subject: RE: [rest-discuss] Re: REST exposing supported operations... <customer> <link rel="http://rels.acme.org/delete" method="delete" action="/customer/21" /> </customer> Or if you want a POST: <customer> <link rel="http://rels.acme.org/deactivate" method="post" action="/deactivationRequests" mediaType="application/vnd.customerml+xml" /> </customer> That's how you associate a verb with a link in a representation. It's what the theorists pompously call hypermedia controls, or what HTML calls forms. :) Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of oliver.riches@... Sent: 21 September 2009 15:25 To: rest-discuss@yahoogroups.com Subject: RE: [rest-discuss] Re: REST exposing supported operations... Quality answer... I don't understand when the 'theorists' say 'the REST theorists will tell you that shouldn't happen because a REST application will have all those URIs/verbs being driven by hypertext', but how am I meant to insert/associate the verb with/into a URI when returning an XML/JSON representation of a resource? Ollie Riches RBS Global Banking & Markets Office: +44 203 361 4071 -----Original Message----- From: António Mota [mailto:amsmota@...] Sent: 21 September 2009 15:21 To: RICHES, Oliver, GBM Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: REST exposing supported operations... Sometimes I wonder if the people on this list actually work, as in doing practical things and not theoretical ones, because sometimes I see such complicated answers to such simple questions. In HTTP, if you send an OPTIONS request to a URI, you get an answer with the verbs that it supports. 
In your case, OPTIONS will return just GET (of the four you mentioned). If a user-agent sends a PUT, POST, or DELETE to that URI, the server will respond with 405 Method Not Allowed. That's it. The REST theorists will tell you that shouldn't happen, because a REST application will have all those URIs/verbs driven by hypertext, meaning you don't know the URIs and/or the verbs in advance; you should only follow the links that the server sends back to you. But in fact, sometimes things work in practice and not in theory, so there are situations where that cannot happen, for example the "few well-known URIs that are the entry point of an application", but there are more... Now, to be REST and not HTTP: where I said URI you should read Resource, and where I said "links" you should read "hypertext embedded in the representation of the resource that should drive the application state changes". And I'm sure that the good theorists in here will have a way of adding at least "media type" or "content negotiation" and other valuable concepts, but for me, who actually has to use these things in practice, I find it much better to start from the bottom with simple things and try to go up from there... Ollie wrote: > > > Some of the resources I'm working with don't have a > method/operation that maps onto PUT, POST, or DELETE if I'm using HTTP > as the transport. > > Should the end user of the service have some way to find out the > operations supported by a resource, or should I just return the > appropriate status code? > > Cheers > > Ollie > > --- In rest-discuss@yahoogroups.com, Jan Algermissen > <algermissen1971@...> wrote: > > > > Ollie, > > > > On Sep 21, 2009, at 1:22 PM, Ollie wrote: > > > > > Should I expose the operations a resource supports, and if so, how > > > should I expose them? > > > > > > > What do you mean by 'operations'? HTTP methods, or descriptions of > > expected resource semantics (e.g. 
that you create an entry in APP by > > POSTing to a collection)? > > > > Jan > > > Cheers > > > Ollie > > -------------------------------------- > > Jan Algermissen > > > > Mail: algermissen@... > > Blog: http://algermissen.blogspot.com/ > > Home: http://www.jalgermissen.com > > -------------------------------------- > > *********************************************************************************** The Royal Bank of Scotland plc. Registered in Scotland No 90312. Registered Office: 36 St Andrew Square, Edinburgh EH2 2YB. Authorised and regulated by the Financial Services Authority. This e-mail message is confidential and for use by the addressee only. If the message is received by anyone other than the addressee, please return the message to the sender by replying to it and then delete the message from your computer. Internet e-mails are not necessarily secure. The Royal Bank of Scotland plc does not accept responsibility for changes made to this message after it was sent. Whilst all reasonable care has been taken to avoid the transmission of viruses, it is the responsibility of the recipient to ensure that the onward transmission, opening or use of this message and any attachments will not adversely affect its systems or data. No responsibility is accepted by The Royal Bank of Scotland plc in this regard and the recipient should carry out such virus and other checks as it considers appropriate. Visit our website at www.rbs.com ***********************************************************************************
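As an illustration of how a client might consume Seb's <link> control, here is a hypothetical Python sketch. The rel URI, action, and media type are taken from the example in the thread; the helper name `find_control` and the dispatch logic are purely illustrative, not part of any library discussed here.

```python
import xml.etree.ElementTree as ET

# Representation as returned by the server, per Seb's example.
doc = """<customer>
  <link rel="http://rels.acme.org/deactivate"
        method="post"
        action="/deactivationRequests"
        mediaType="application/vnd.customerml+xml" />
</customer>"""

def find_control(xml_text, rel):
    """Return (method, action, media_type) for the link with the given rel,
    or None if the representation offers no such transition."""
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return (link.get("method", "get").upper(),
                    link.get("action"),
                    link.get("mediaType"))
    return None

control = find_control(doc, "http://rels.acme.org/deactivate")
print(control)  # → ('POST', '/deactivationRequests', 'application/vnd.customerml+xml')
```

The point of the pattern is that the client hard-codes only the rel URI; the verb, target URI, and expected media type all come from the representation, which is what lets the server change them without breaking clients.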
If you want to hint the client about what it can do with a resource, yes, you can craft your media type so that the client gets that hint. If the client doesn't know (because of whoever wrote the document, or because you decided against going with hyperlinks), then you can issue an OPTIONS request every time you retrieve a resource, to learn what you can do with it, and update your UI accordingly. Depends on what you are trying to achieve. Why is it that you need the client to know whether something is read-only or not? UI interaction, developer education, something else? -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of oliver.riches@... Sent: 21 September 2009 17:00 To: rest-discuss@yahoogroups.com Subject: RE: [rest-discuss] Re: REST exposing supported operations... So all my links should be annotated like this or in a similar manner? Ollie Riches RBS Global Banking & Markets Office: +44 203 361 4071
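The OPTIONS-based discovery António and Seb describe boils down to reading the Allow response header and branching on it. A minimal sketch, with an invented header value (a real client would obtain it from an OPTIONS response):

```python
def parse_allow(header_value):
    """Split an Allow header such as 'GET, HEAD, OPTIONS' into a set of verbs."""
    return {m.strip().upper() for m in header_value.split(",") if m.strip()}

allowed = parse_allow("GET, HEAD, OPTIONS")

# Anything outside this set would draw a 405 Method Not Allowed from the
# resource, so a UI could, for example, hide its delete button:
show_delete_button = "DELETE" in allowed
print(show_delete_button)  # → False
```

As Subbu notes later in the thread, doing this on every retrieval costs an extra, uncacheable round trip, which is a real argument for putting the hints in the representation instead.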
That depends on what you agree with your users/clients: use your own format, or look at existing ones (maybe better), like the already mentioned Atom Publishing Protocol (I have never used it, so I don't know the specifics). I think there are some JSON-based formats as well. The article by Subbu that I was referring to, which explains all this in a little more depth, is http://www.infoq.com/articles/subbu-allamaraju-rest oliver.riches@... wrote: > > > So all my links should be annotated like this or in a similar manner? > > Ollie Riches > RBS Global Banking & Markets > Office: +44 203 361 4071
On Sep 21, 2009, at 9:03 AM, Sebastien Lambla wrote: > If the client doesn't know (because of whoever wrote the document, or > because you decided against going with hyperlinks), then you can issue an > OPTIONS request every time you retrieve a resource, to learn what you can > do with it, and update your UI accordingly. OPTIONS at runtime is costly. It is not cacheable, and it introduces an extra roundtrip. Subbu
Andrew Wahbe wrote: > I can see Link headers being useful if you need to use a representation > format that doesn't support linking. But preferring HTTP Link headers > over links in your hypermedia format seems backwards to me. It goes > against the established practice of the Web, doesn't it? > > It also seems like the first step in collapsing the protocol space down > to just HTTP. REST allows other protocols (e.g. FTP and, in the future, > Waka?) to be used. Isn't that in part why the URI starts with a scheme > identifier? Pulling the links out of the hypermedia and into the > protocol headers makes it harder to use your hypermedia format with > other protocols, and is in conflict with some of the design principles of > the web, exemplified by the URI, no? > A couple of things: * Link headers aren't only useful for representation formats that don't support linking. They also help clients that don't know (or care) how to process the representation of a resource, but do know how to manipulate the relationships. For the situation where the client doesn't care what the representation is and is only interested in the relationships, it can just do a HEAD request to obtain the metadata. * IMO, there's no reason that a link relationship couldn't be published both as a Link header and embedded within the representation. * IMO, the established practices of the web were developed to solve the issues of human interaction with a browser. The programmatic web, where your clients are machines, will need different ways to consume the same data so that it can operate efficiently. * Finally, links are metadata. Other established headers define metadata about the resource as well (ETag and Last-Modified, for instance). Why not just one more? Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
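Bill's HEAD-plus-Link-header workflow amounts to parsing rel/URI pairs out of a response header without touching the body. A toy sketch of that parsing step (the header value and rel names are invented, and a real Link header can carry more parameters than this parser handles):

```python
import re

def parse_link_header(value):
    """Map rel -> target URI for a header like '</a>; rel="self", </b>; rel="next"'.

    Deliberately minimal: it assumes comma-separated link-values with a single
    quoted rel parameter each, which is enough for this illustration.
    """
    links = {}
    for part in value.split(","):
        m = re.search(r'<([^>]*)>\s*;\s*rel="([^"]*)"', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

header = '</customer/21>; rel="edit", </customers>; rel="up"'
print(parse_link_header(header))  # → {'edit': '/customer/21', 'up': '/customers'}
```

This is the trade-off the thread is circling: the client learns which relationships exist and where they point, but, as Andrew notes below, nothing about the "things" themselves without fetching a representation.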
Here are my thoughts on the compatibility of transactions and REST. Maybe now you can see where I am coming from. http://bill.burkecentral.com/2009/09/21/credit-cards-transactions-and-rest/ -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I'm just kicking around ideas, because I see a lot of so-called RESTful APIs being dismissed as not RESTful and as coupling the server to the client. I always thought that one of the advantages of REST over SOAP was the removal of this coupling... -----Original Message----- From: Sebastien Lambla [mailto:seb@serialseb.com] Sent: 21 September 2009 17:04 To: RICHES, Oliver, GBM; rest-discuss@yahoogroups.com Subject: RE: [rest-discuss] Re: REST exposing supported operations... If you want to hint the client about what it can do with a resource, yes, you can craft your media type so that the client gets that hint. If the client doesn't know, then you can issue an OPTIONS request every time you retrieve a resource, to learn what you can do with it, and update your UI accordingly. Depends on what you are trying to achieve. Why is it that you need the client to know whether something is read-only or not? UI interaction, developer education, something else?
Sebastien Lambla wrote: > <link rel="http://rels.acme.org/deactivate" method="post" > action="/deactivationRequests" mediaType="application/vnd.customerml+xml" /> I haven't seen this usage of a method attribute on a link, either in the Atom spec, the Link header spec, or the list of registered link relations. Seems like a good thing though :) -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Sep 21, 2009, at 9:46 AM, Bill Burke wrote: > > Sebastien Lambla wrote: > > <link rel="http://rels.acme.org/deactivate" method="post" > > action="/deactivationRequests" mediaType="application/vnd.customerml+xml" /> > > I haven't seen this usage of a method attribute on a link, either in the > Atom spec, the Link header spec, or the list of registered link relations. > > Seems like a good thing though :) There is no method on links because the idea is that the link relation specifies the semantics of the action, along with other details like which media types are valid, the preconditions, and so on, for the URI in the link. By the way, the type attribute (not "mediaType" as in the example above) is just a hint, and isn't useful when the purpose of the link is a write. Subbu
Subbu Allamaraju wrote: > > On Sep 21, 2009, at 9:46 AM, Bill Burke wrote: > >> Sebastien Lambla wrote: >> > <link rel="http://rels.acme.org/deactivate" method="post" >> > action="/deactivationRequests" >> mediaType="application/vnd.customerml+xml" /> >> >> I haven't seen this usage of a method attribute on a link, either in the >> Atom spec, the Link header spec, or the list of registered link relations. >> >> Seems like a good thing though :) > > There is no method on links because the idea is that the link relation > specifies the semantics of the action along with other details like what > media types are valid, the preconditions and so on for the URI in the > link. By the way, the type attribute (not "mediaType" as in the example > above) is just a hint, and isn't useful when the purpose of the link is > a write. > One thing I didn't understand from the Link header draft was "rel" vs. "rev": outbound vs. inbound. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
To be clear: I'm not against the Link header. Maybe I was reading into what you were saying a little, but you seemed to be implying that the Link header should be preferred over using hypermedia, with the hypermedia only expressing links that could not be expressed with a Link header. That is what I have issue with. I think having links in both the headers and the hypermedia is OK too, if you think you need it. I think we've been writing spiders for years now without it, so I'm not totally sold on the utility, but I don't think it's "wrong". The HEAD optimization you mention is interesting, but I wonder how useful the yielded links are in practice when you lack the context of the resource representation. You end up knowing that there's this "thing" that links to this other "thing" with a certain class of relationship, but you know very little about the "things". But I suppose it all depends on what you are trying to do. Your statement that the "human" web and the "machine" web are different and require a paradigm shift needs backing up -- at least, elaborate on why Link headers etc. are the right shift to make. Yes, I think you need new hypermedia formats for these new types of clients (and clients negotiating the format they need is built into HTTP and REST), or maybe to evolve existing formats, but it's not immediately obvious why you need to change more than that. Could you elaborate? Regards, Andrew Wahbe --- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > > A couple of things: > > * Link headers aren't only useful for representation formats that don't > support linking. They also help clients that don't know (or care) how > to process the representation of a resource, but do know how to > manipulate the relationships. For the situation where the client > doesn't care what the representation is and is only interested in the > relationships, it can just do a HEAD request to obtain the metadata. 
> > * IMO, there's no reason that a link relationship couldn't be published > both as a Link header and embedded within the representation. > > * IMO, the established practices of the web were done to solve the > issues with the human interactions with a browser. The programmatic web > where your clients are machines will need different ways to consume the > same data so that it can efficiently operate. > > * Finally, links are metadata. Other established headers define > metadata about the resource as well (ETag, Last-Modified for instance). > Why not just one more? > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
It was a contrived example that wasn't related to any other media type. I annotate link controls with media types and form controls with http method names, because it makes more sense to me than impose the verb in the specification. As you get your media types more generalized, this becomes a useful tool you can leverage. Anyway, you may have seen it in the past in the form of <form method="POST"> :) My apologies for using form and link interchangeably to denote hypermedia controls, I don't see them as different. S -----Original Message----- From: Subbu Allamaraju [mailto:subbu@...] Sent: 21 September 2009 17:52 To: Bill Burke Cc: Sebastien Lambla; oliver.riches@...; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: REST exposing supported operations... On Sep 21, 2009, at 9:46 AM, Bill Burke wrote: > > > Sebastien Lambla wrote: > > <link rel="http://rels.acme.org/deactivate > > <http://rels.acme.org/deactivate>" method="post" > > action="/deactivationRequests" mediaType="application/ > vnd.customerml+xml" /> > > I haven't seen this usage of a method attribute on a link in either > the Atom > spec, the Link header spec, or the list of registered link relations. > > Seems like a good thing though :) There is no method on links because the idea is that the link relation specifies the semantics of the action along with other details like what media types are valid, the preconditions and so on for the URI in the link. By the way, the type attribute (not "mediaType" as in the example above) is just a hint, and isn't useful when the purpose of the link is a write. Subbu
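The kind of annotated control Sebastien describes can be sketched like this: the link carries the method and expected media type, so a generic client can build the request without hard-coding verbs per relation. The element and attribute names follow his contrived example; none of this is a registered format.

```python
# Sketch: a generic client dispatching on a method-annotated link
# control, per Sebastien's (contrived, non-standard) example.
import xml.etree.ElementTree as ET

control = ET.fromstring(
    '<link rel="http://rels.acme.org/deactivate" method="post" '
    'action="/deactivationRequests" type="application/vnd.customerml+xml" />'
)

def request_for(link):
    """Translate a hypermedia control into (method, target, media type)."""
    return (link.get("method", "get").upper(),
            link.get("action") or link.get("href"),
            link.get("type"))

print(request_for(control))
```

Subbu's counter-position would drop the method attribute entirely and have the link relation's specification state which method, media types, and preconditions apply.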
On Sep 21, 2009, at 10:08 AM, Bill Burke wrote: > One thing I didn't understand from the Link header draft was "rel" > vs. "rev", outbound vs. inbound. AFAIR, this was not there originally, and was only added because HTML has it. As the draft says, "its use is not encouraged nor defined by this specification". Subbu
+1 On Mon, Sep 21, 2009 at 10:50 AM, Andrew Wahbe <andrew.wahbe@...>wrote: > > > I can see link headers being useful if you need to use a representation > format that doesn't support linking. But preferring HTTP link headers over > links in your hypermedia format seems backwards to me. It goes against the > established practice of the Web doesn't it? > > It also seems like the first step in collapsing the protocol space down to > just HTTP. REST allows other protocols (e.g. FTP and in the future Waka?) to > be used. Isn't that in part why the URI starts with a scheme identifier? > Pulling the links out of the hypermedia and into the protocol headers makes > it harder to use your hypermedia format with other protocols and is in > conflict with some of the design principles of the web, exemplified by URI, > no? > > > Andrew > > > On Mon, Sep 21, 2009 at 7:30 AM, Bill Burke <bburke@...> wrote: > >> >> >> wahbedahbe wrote: >> >>> >>> Huh? >>> So now it's "HEADERS as the engine of application state"? ;-) >>> >>> I can see a link header replacing the <link> elements of an Atom entry >>> document or I suppose any other hypermedia link that applies to the entire >>> document/body. >>> >>> >> IMO, it's a complement, not a replacement. I definitely see your point that >> media type + links allows you to compose things. Also, for some of the >> stuff I'm doing, I'm modeling basic relationships as link headers and using >> different media types to expand/extend what relationships exist. So the >> media type is not only a mechanism to transfer state, but a mechanism to >> extend the relationship model. >> >> This was my reasoning for saying that REST-* should "isolate data formats >> to extensions". >> >> Bill >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com >> > > > > -- > Andrew Wahbe > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
On Sep 21, 2009, at 7:37 AM, Subbu Allamaraju wrote: > The "+" style encoding is specified by HTML for the "application/x- > www- > form-urlencoded" media type. Both HTML4.01 (http://www.w3.org/TR/ > html401/interact/forms.html#h-17.13.4 > ) and HTML5 (http://www.w3.org/TR/html5/forms.html#application-x- > www-form-urlencoded-encoding-algorithm > ) describe this. When you use forms with method GET, the "+" character > becomes part of the URI. For query parameter data, the framework will > have to use HTML encoding rules. It originally came from the ISINDEX feature of HTML http://www.w3.org/MarkUp/html-spec/html-spec_7.html#SEC7.5 http://www.w3.org/TR/REC-html32#isindex ....Roy
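Roy's point can be seen directly in the Python standard library, which keeps the two encodings separate: `quote_plus` applies the HTML form rules (space becomes "+"), while `quote` applies plain percent-encoding. This is just a stdlib illustration of the distinction, not anything from the thread itself.

```python
# The "+" convention belongs to application/x-www-form-urlencoded
# (HTML forms, and historically ISINDEX), not to generic URI escaping.
from urllib.parse import quote, quote_plus

print(quote_plus("a b"))  # form-style encoding: space -> "+"
print(quote("a b"))       # generic URI escaping: space -> "%20"
```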
On Mon, Sep 21, 2009 at 11:36 AM, Sebastien Lambla <seb@...> wrote: > > That said, this adds a specific semantic to the querystring part of an HTTP > URI, and there is no way to detect if the client came from a browser that > passes a mediatype in the querystring, or not. > > My question remains, when receiving a request, and without being able to > assume if the querystring is encoded the URI way or is in fact an html > production, what default (and non-dangerous) behaviour can be accepted in > all cases? > As I said earlier, I would always assume that the query (and NO other part) of an HTTP URI is application/x-www-form-urlencoded no matter where it came from.
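The default suggested above, sketched with the Python standard library: always decode the query component (and only the query component) with the form rules, so "+" becomes a space. The example URI is invented.

```python
# Sketch: treat the query part of an HTTP URI as
# application/x-www-form-urlencoded, regardless of origin.
from urllib.parse import urlsplit, parse_qs

uri = "http://example.org/search?q=rest+discuss&page=2"
query = urlsplit(uri).query   # only the query component gets form decoding
print(parse_qs(query))        # "+" in values decodes to a space
```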
Andrew Wahbe wrote: > > > I can see link headers being useful if you need to use a > representation format that doesn't support linking. But preferring > HTTP link headers over links in your hypermedia format seems backwards > to me. It goes against the established practice of the Web doesn't it? I don't think the link header 'goes against' HTTP. It is an extension, though. > > Pulling the links out of the hypermedia and into the protocol headers > makes it harder to use your hypermedia format with other protocols and > is in conflict with some of the design principles of the web, > exemplified by URI, no? > As above, I don't think the link header is in conflict with HTTP or URI. If the link header is used to represent relationships from one resource to another, this should make messages more self-descriptive, which in turn makes layering (i.e. intermediaries) more powerful. That's a good thing for RESTful HTTP, I think. - Mike
wahbedahbe wrote: > To be clear: I'm not against the link header. Maybe I was reading into > what you were saying a little, but you seemed to be implying that the > link header should be preferred over using hypermedia, the hypermedia > only expressing links that could not be expressed with a link header. > That is what I have issue with. > Why? It depends on what the system requirements are, surely? > I think having links in both the headers and the hypermedia is ok too > if you think you need it. I think we've been writing spiders for years > now without it so I'm not totally sold on the utility but I don't > think it's "wrong". The HEAD optimization you mention is interesting, > but I wonder how useful the yielded links are in practice when you > lack the context of the resource representation. You end up knowing > that there's this "thing" that links to this other "thing" with a > certain class of relationship, but you know very little about the > "things". But I suppose it all depends on what you are trying to do. > Is that much different from, say, HTML? > Your statement on how the "human" web and the "machine" web are > different and require a paradigm shift needs backing up > Humans are intelligent. apparently! - Mike
On Mon, Sep 21, 2009 at 6:05 PM, Mike Kelly <mike@...> wrote: > wahbedahbe wrote: > Is that much different from, say, HTML? > > > Your statement on how the "human" web and the "machine" web are > > different and require a paradigm shift needs backing up > > > > Humans are intelligent. > > apparently! Not enough backing up. XML and all other human-readable serialization languages (eg JASON) are evidence that human web and machine web should be the same. -- Nick
On Mon, Sep 21, 2009 at 1:18 PM, Sebastien Lambla <seb@...> wrote: > > > I annotate link controls with media types and form controls with http method > names, because it makes more sense to me than impose the verb in the > specification. As you get your media types more generalized, this becomes a > useful tool you can leverage. > I often annotate links with an allow attribute <link href="/resource/1" allow="GET,PUT,DELETE" /> I find it a useful hint to provide to the client when displaying a list of items. Darrel
Heh, I have been practising my SOA-speak. Let me rephrase a few things. 2009/9/21 Jan Algermissen <algermissen1971@...>: > On Sep 20, 2009, at 4:16 PM, Benjamin Carlyle wrote: >> 2009/8/31 Jan Algermissen <algermissen1971@...>: >>> When viewing a REST API as essentially a set of link semantics how >>> can we version such APIs? And do we need to version them at all? >> Governance of a REST architecture is applied at a uniform contract >> level and at a service interface description level. > Can you explain what you mean by "uniform contract level" and "service > interface description level" and how governance is applied to them? Each REST architecture has a uniform interface consisting of three parts: resource identifier syntax, connectors (ie methods), and media types. Each server (aka service) exposes a set of resources that are identified in compliance with the resource identifier syntax, exchange messages in compliance with uniform interface connector semantics, and encode and process information in compliance with uniform interface media types. Governance of a REST architecture governs: 1. The Resource Identifier syntax 2. The set of connectors 3. The set of media types 4. The set of resources exposed by a server These elements tend to be controlled independently, and may be controlled by different people and organisations. Each level of governance seeks to ensure compliance with REST constraints and with principles of good design. >> 3. A set of media types, which will almost certainly have >> corresponding individually versioned specifications >> Each service itself has a description of its interface in terms of a >> set of resources and methods on those resources that correspond to the >> capabilities of the service. > But in REST you do not describe that but let the client discover it. Or am I > misunderstanding you? Exactly, a client should be coupled to the uniform interface expressed by all resources. 
They should not be coupled to any particular resource or server. A particular client beginning some kind of business process or application will be given a set of starting URLs to work from, and will imbue these resources with some predefined semantics. However, any resources outside this set should be discovered by following hyperlinks. The semantics imbued on discovered resources depend on the context and type of the link. The link types and to a significant degree the context of the link are specified as part of the uniform interface within the set of media type specifications. >> The outcome is a high level of integration maturity. One URL can be >> substituted for another in the architecture at runtime. Regardless of >> the specific URL or service the consumer knows what kind of message to >> construct. The service knows how to interpret the request and how to >> return an appropriate response in a form the consumer understands. The >> uniform interface of each resource enables communication and then gets >> out of the way, permitting dynamic reconfiguration to occur as >> required. > Hmmm - and how does all that address the problem of versioning the set of > semantics that make up a certain RESTful API? The resource identifier syntax is change-managed under one specification. The set of connectors is change-managed as either one or several specifications. The set of media types is change-managed as individual specifications, generally one per media type but possibly including some common elements that are reused between media types. Collectively these form the uniform interface specification. The uniform interface as a whole could be assigned a particular version number, but is generally evolving quickly enough that doing so adds little benefit. It is possible that the set of connectors themselves include link types and content-like information. For example, HTTP can include a "link" header with a variety of possible relationship types.
These types would be version-controlled effectively as part of the media type set, in whatever configuration of specifications is deemed appropriate by the appropriate governance body. Each server (i.e. service) should have a written description of the resources it exposes to enable governance of itself, to inform implementation, and to inform maintenance and support activities. The specification for each server/service should be version-controlled separately to those of other servers. This document captures the specific capabilities of the service in the same way as a service-specific contract would have done in a SOA, but uses messages to invoke and respond to these capabilities that are highly abstract, reusable, and demonstrate a high degree of integration maturity. Benjamin.
Nick Gall wrote: > On Mon, Sep 21, 2009 at 6:05 PM, Mike Kelly <mike@...> wrote: > >> wahbedahbe wrote: >> Is that much different from, say, HTML? >> >> >>> Your statement on how the "human" web and the "machine" web are >>> different and require a paradigm shift needs backing up >>> >>> >> Humans are intelligent. >> >> apparently! >> > > Not enough backing up. XML and all other human-readable serialization > languages (eg JASON) are evidence that human web and machine web > should be the same. > > -- Nick > Agreed, that is insufficient backing up - it was sort of tongue in cheek. I don't know if we're talking about the same 'JASON' here - but the one I know isn't part of what I would call the 'human web', and neither is his pedantic nemesis XML. He doesn't spell his name like that either - was that a Freudian slip? The human web (of documents?) had the luxury of being able to put resource relationships/hyperlinks in context via natural language and symbolism; this is something that is not afforded by the machine web. - Mike
Mike Kelly wrote: > The human web (of documents?) had the luxury of being able to put > resource relationships/hyperlinks in context via natural language and > symbolism; this is something that is not afforded by the machine web Not at the current stage of development, but with so much work being done in Semantic Web and even in AI, that gap will undoubtedly narrow, so designing for a "machine web" should aim in that direction also.
António Mota wrote: > Mike Kelly wrote: > > >> The human web (of documents?) had the luxury of being able to put >> resource relationships/hyperlinks in context via natural language and >> symbolism; this is something that is not afforded by the machine web >> > Not at the current stage of development, but with so much work being > done in Semantic Web and even in AI, that gap will undoubtedly narrow, > so designing for a "machine web" should aim in that direction also. > Possibly. It could also simply lead to the death of the human web, and leave us with smarter/dynamic UAs that allow users to apply contexts to the machine web. - Mike
I wrote: > António Mota wrote: > >> Mike Kelly wrote: >> >> >> >>> The human web (of documents?) had the luxury of being able to put >>> resource relationships/hyperlinks in context via natural language and >>> symbolism; this is something that is not afforded by the machine web >>> >>> >> Not at the current stage of development, but with so much work being >> done in Semantic Web and even in AI, that gap will undoubtedly narrow, >> so designing for a "machine web" should aim in that direction also. >> >> > > Possibly. It could also simply lead to the death of the human web, and > leave us with smarter/dynamic UAs that allow users to apply contexts to > the machine web. > Although, if these UAs are code-on-demand then the human web would continue to exist as a means to distribute application components. So that would be more of an evolution of the human web, rather than a death. - Mike
As the subject states really, does a RESTful implementation remove completely the coupling between the server & client or is it just more loosely coupled than say SOAP? Cheers Ollie
Ollie, On Sep 22, 2009, at 2:28 PM, Ollie wrote: > As the subject states really, does a RESTful implementation remove > completely the coupling between the server & client or is it just more > loosely coupled than say SOAP? > Well, you cannot communicate without coupling, so the answer is 'no'. REST moves all the coupling into the data elements of the architecture by making the interface uniform. Another way to look at this is that in an architecture that has coupling on the interface and the data (e.g. SOAP) you have one more thing to deal with when your system evolves. My perception is that you simply cannot decouple to a greater extent than REST does (though I do lack scientific proof for that). Jan > > Cheers > > Ollie > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
thanks, I was being a little facetious with the question ;) from a clients (user of a service) perspective they are agreeing a contract and how and when that contract varies does not matter greatly to them only that it has changed. So therefore REST services suffer from exactly the same problems as SOAP services with versioning - breaking changes... Cheers Ollie Riches RBS Global Banking & Markets Office: +44 203 361 4071 -----Original Message----- From: Jan Algermissen [mailto:algermissen1971@...] Sent: 22 September 2009 13:42 To: RICHES, Oliver, GBM Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Does REST remove completely the coupling between the server & client? Ollie, On Sep 22, 2009, at 2:28 PM, Ollie wrote: > As the subject states really, does a RESTful implementation remove > completely the coupling between the server & client or is it just more > loosely coupled than say SOAP? > Well, you cannot communicate without coupling, so the answer is 'no'. REST moves all the coupling into the data elements of the architecture by making the interface uniform. Another way to look at this is that in an architecture that has coupling on the interface and the data (e.g. SOAP) you have one more thing to deal with when your system evolves. My perception is that you simply cannot decouple to a greater extent than REST does (though I do lack scientific proof for that). Jan > > Cheers > > Ollie > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com -------------------------------------- *********************************************************************************** The Royal Bank of Scotland plc. Registered in Scotland No 90312. Registered Office: 36 St Andrew Square, Edinburgh EH2 2YB. Authorised and regulated by the Financial Services Authority.
This e-mail message is confidential and for use by the addressee only. If the message is received by anyone other than the addressee, please return the message to the sender by replying to it and then delete the message from your computer. Internet e-mails are not necessarily secure. The Royal Bank of Scotland plc does not accept responsibility for changes made to this message after it was sent. Whilst all reasonable care has been taken to avoid the transmission of viruses, it is the responsibility of the recipient to ensure that the onward transmission, opening or use of this message and any attachments will not adversely affect its systems or data. No responsibility is accepted by The Royal Bank of Scotland plc in this regard and the recipient should carry out such virus and other checks as it considers appropriate. Visit our website at www.rbs.com ***********************************************************************************
You need to have a coupling somewhere. In the case of a RESTful architecture, the coupling is with mediatypes (that often define the link semantics), so you don't have coupling with the server itself, its address space or even its existence. S -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Ollie Sent: 22 September 2009 13:29 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Does REST remove completely the coupling between the server & client? As the subject states really, does a RESTful implementation remove completely the coupling between the server & client or is it just more loosely coupled than say SOAP? Cheers Ollie
On Sep 22, 2009, at 2:51 PM, oliver.riches@... wrote: > So therefore REST services suffer from exactly the same problems as > SOAP services with versioning - breaking changes... Well...the nature of the problem is the same (if one communication partner changes the language communication is harmed) but REST is an improvement because it makes fragmented change much easier. E.g. a server can add elements to HTML without breaking any clients or Amazon can change the state machine of its Web application even *while* you are in the middle of making a purchase. Jan > > > Cheers > > Ollie Riches > RBS Global Banking & Markets > Office: +44 203 361 4071 > > -----Original Message----- > From: Jan Algermissen [mailto:algermissen1971@...] > Sent: 22 September 2009 13:42 > To: RICHES, Oliver, GBM > Cc: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Does REST remove completely the coupling > between the server & client? > > Ollie, > > On Sep 22, 2009, at 2:28 PM, Ollie wrote: > >> As the subject states really, does a RESTful implementation remove >> completely the coupling between the server & client or is it just more >> loosely coupled than say SOAP? >> > > Well, you cannot communicate without coupling, so the answer is 'no'. > > REST moves all the coupling into the data elements of the > architecture by making the interface uniform. > > Another way to look at this is that in an architecture that has > coupling on the interface and the data (e.g. SOAP) you have one more > thing to deal with when your system evolves. > > My perception is that you simply cannot decouple to a greater extent > than REST does (though I do lack scientific proof for that). > > Jan > >> >> Cheers >> >> Ollie >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@...
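Jan's evolvability point can be sketched concretely: a client that reads only the elements it knows about keeps working when the server starts sending new ones. The <person> format and the extra <nickname> element are invented for the illustration; this is the "must-ignore" style of extensibility rather than anything specific to HTML.

```python
# Sketch: must-ignore parsing lets the server evolve its format
# without breaking existing clients. Both document versions below
# are invented examples.
import xml.etree.ElementTree as ET

v1 = "<person><name>Ada</name></person>"
v2 = "<person><name>Ada</name><nickname>ada</nickname></person>"

def read_name(doc):
    # Look up the one element we understand; silently skip the rest.
    return ET.fromstring(doc).findtext("name")

print(read_name(v1), read_name(v2))
```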
> Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Tue, Sep 22, 2009 at 01:51:08PM +0100, oliver.riches@... wrote:
> thanks, I was being a little facetious wiht the question ;)
>
> from a clients (user of a service) perspective they are agreeing a
> contract and how and when that contract varies does not matter greatly
> to them only that it has changed. So therefore REST service suffer
> from exactly the same problems as SOAP services with versioning -
> breaking changes...
Any coupling whatsoever will do that - the spectre of versioning haunts
every distributed system to some degree. No system will ever be fully
free of that particular problem.
One of the key points of the uniform interface constraint is that it
removes one of the elements that causes the most brittleness from
causing any breakage, because it's far, far easier--at least in my
experience--to version documents than it is interface contracts.
K.
--
Keith Gaughan - k@... - http://stereochro.me/
Television enables you to be entertained in your home
by people you wouldn't have in your home.
-- David Frost
Do we want hypermedia formats like HTML to allow us to advise UAs on appropriate conneg values for a given hyperlink, or is control data intended to be static and isolated away from our hyperlinks? HTML has a type attribute for hyperlink elements such as script, style, anchor - it is used to advise on the media type that should be expected; but it is not, for some reason, intended to affect the Accept header for requests to the corresponding URI. It seems to make sense to me that it should, so I filed a bug report. Any thoughts? http://www.w3.org/Bugs/Public/show_bug.cgi?id=7697 Description: Section: http://www.whatwg.org/specs/web-apps/current-work/#hyperlink-elements <http://www.whatwg.org/specs/web-apps/current-work/#hyperlink-elements> The type attribute should amend the Accept header accordingly. HTTP spec is in accordance with the rules here, in terms of the Accept header being non-authoritative. script and style elements already amend the accept header (in firefox) but not for other link elements. This behaviour would be valuable for enabling HTML powered applications which leverage HTTP conneg. ------- /Comment #1 From Ian 'Hixie' Hickson <mailto:ian@...> 2009-09-22 11:49:32/ ------- Accept should reflect what the UA accepts, not what the page says the server provides. ------- /Comment #2 From Mike Kelly <mailto:mike@...> 2009-09-22 12:48:01/ ------- Ian that would appear to be incorrect given that, according to the HTTP spec, the Accept header is simply intended to indicate the appropriate media types for a response to a given request: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html "The Accept request-header field can be used to specify certain media types which are acceptable for the response" So, the value of the Accept header should be taken (if specified) from the type attribute for a given hyperlink element - which is presumably why firefox modifies the accept header for the style and script elements.
This allows hyperlinks to specific representations that must be negotiated via HTTP content negotiation - as it stands, it is not possible to specify such a link with HTML5. i.e. currently - these links would actually generate identical requests with the same Accept header: <a type='application/xml' href='/document'>document in xml</a> <a type='application/json' href='/document'>document in json</a> If implemented, any existing mechanisms which disregard the Accept header would remain unaffected by the change. If no type is specified then the default UA Accept header value should be assumed.
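The behaviour the bug report asks for can be sketched as follows: if the hyperlink carries a type attribute, the UA sends it as the Accept header for that request; otherwise it falls back to its default. The anchors are the ones from the report; the default Accept value is made up for illustration.

```python
# Sketch: deriving a per-request Accept header from a link's type
# attribute, falling back to a (made-up) UA default.
import xml.etree.ElementTree as ET

UA_DEFAULT_ACCEPT = "text/html,application/xhtml+xml,*/*;q=0.8"

def accept_for(anchor):
    return anchor.get("type") or UA_DEFAULT_ACCEPT

xml_link = ET.fromstring("<a type='application/xml' href='/document'>document in xml</a>")
json_link = ET.fromstring("<a type='application/json' href='/document'>document in json</a>")
plain_link = ET.fromstring("<a href='/document'>document</a>")

print(accept_for(xml_link), accept_for(json_link), accept_for(plain_link))
```

Under this rule the two anchors above would no longer produce identical requests, which is exactly the distinction the report says HTML5 currently cannot express.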
Accept and @type are two separate concerns. @type hints the client at what type(s) the link might be. This lets the UA decide whether it can process the link. Accept: tells the server which formats it can process. If a UA can't understand the @type, it probably can't process it and may ignore it. If it can, and it issues an Accept header, then surely that type will be in the Accept header anyway. If you want to distinctly identify two documents separately, you give them two URIs. If the difference between the two representations doesn't matter to the client (aka there is no significant semantic difference between the two formats), then the two representations can be served as conneg'd resources. Seb > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Mike Kelly > Sent: 22 September 2009 14:27 > To: Rest List > Subject: [rest-discuss] Content negotiation and hypermedia > > Do we want hypermedia formats like HTML to allow us to advise UAs on > appropriate conneg values for a given hyperlink, or is control data > intended to be static and isolated away from our hyperlinks? > > HTML has a type attribute for hyperlink elements such as script, style, > anchor - it is used to advise on the media type that should be > expected; > but it is not, for some reason, intended to affect the Accept header > for > requests to the corresponding URI. > > It seems to make sense to me that it should, so I filed a bug report. > > Any thoughts? > > http://www.w3.org/Bugs/Public/show_bug.cgi?id=7697 > > Description: > > Section: http://www.whatwg.org/specs/web-apps/current-work/#hyperlink- > elements > <http://www.whatwg.org/specs/web-apps/current-work/#hyperlink- > elements> > The type attribute should amend the Accept header accordingly. HTTP > spec is in > accordance with the rules here, in terms of the Accept header being > non-authoritative.
script and style elements already amend the accept header > (in > firefox) but not for other link elements. This behaviour would be > valuable for > enabling HTML powered applications which leverage HTTP conneg. > > > ------- /Comment #1 From Ian 'Hixie' Hickson <mailto:ian@...> > 2009-09-22 11:49:32/ ------- > > Accept should reflect what the UA accepts, not what the page says the > server > provides. > > > > ------- /Comment #2 From Mike Kelly <mailto:mike@...> > 2009-09-22 12:48:01/ ------- > > Ian that would appear to be incorrect given that, according to the HTTP > spec, > the Accept header is simply intended to indicate the appropriate media > types > for a response to a given request: > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html > > "The Accept request-header field can be used to specify certain media > types > which are acceptable for the response" > > So, the value of the Accept header should be taken (if specified) from > the type > attribute for a given hyperlink element - which is presumably why > firefox > modifies the accept header for the style and script elements. > > This allows hyperlinks to specific representations that must be > negotiated via > HTTP content negotiation - as it stands, it is not possible to specify > such a > link with HTML5. > > i.e. currently - these links would actually generate identical requests > with > the same Accept header: > > <a type='application/xml' href='/document'>document in xml</a> > <a type='application/json' href='/document'>document in json</a> > > If implemented, any existing mechanisms which disregard the Accept > header would > remain unaffected by the change. > > If no type is specified then the default UA Accept header value should be > assumed. >
Sebastien Lambla wrote: > Accept: tells the server which formats it can process. > I disagree; RFC 2616 says "The Accept request-header field can be used to specify certain media types which are acceptable for the response. Accept headers can be used to indicate that the request is specifically limited to a small set of desired types, as in the case of a request for an in-line image.". How did you come to your understanding of the purpose of Accept? > If a UA can't understand the @type, it probably can't process it and may > ignore it. If it can, and it issues an Accept header, then surely that type > will be in the Accept header anyway. > The media type preference of a particular request could be significant in application flow, e.g.: <a type='application/xml' href='/document'>document in xml</a> <a type='application/json' href='/document'>document in json</a> Without specifying the appropriate Accept header for each request, the same representation will be returned. > If you want to distinctly identify two documents separately, you give them two URIs. If the difference between the two representations doesn't matter to the client (aka there is no significant semantic difference between the two formats), then two representations can be served as conneg'd resources. > Why do you want to give two *representations* of one resource, two separate *resource* identifiers? One answer to that might be that you need to link explicitly to them and there's currently no way to indicate the necessary control data with (HTML) hyperlinks. That is true, and is the reason I brought this up. What are the other reasons you have for violating the distinction between resource and representation? - Mike
> I disagree; RFC 2616 says "The Accept request-header field can be used > to specify certain media types which are acceptable for the response. > Accept headers can be used to indicate that the request is specifically > limited to a small set of desired types, as in the case of a request for > an in-line image.". It can limit a range of *understood* formats if it knows that some don't make sense in a particular context, such as inline images. > How did you come to your understanding of the purpose of Accept? Common sense. Any other explanation is not logical. > Without specifying the appropriate Accept header for each request, the > same representation will be returned. If the client understands both and advertises both in the Accept header, and if the server considers the two representations equal, then it doesn't matter at all. > Why do you want to give two *representations* of one resource, two > separate *resource* identifiers? Because my definition of resource is not anal. Anything can be a resource, as a resource is a thing with a URI. If you need to name something individually (such as a JSON representation), then you can give it a URI and be done with it. > One answer to that might be that you need to link explicitly to them and > there's currently no way to indicate the necessary control data with > (HTML) hyperlinks. That is true, and is the reason I brought this up. If I want to link explicitly to two things, they're by definition distinct, and as such I will give them two URIs. > What are the other reasons you have for violating the distinction > between resource and representation? I don't violate anything. I assume that you're overly focused on keeping the same URI for documents that are vastly different, and in doing so you see conneg as not being sufficient for what you're trying to achieve. I say that what you're trying to use conneg for is not what it was designed for.
S
Sebastien Lambla wrote: > > I disagree; RFC 2616 says "The Accept request-header field can be used > > to specify certain media types which are acceptable for the response. > > Accept headers can be used to indicate that the request is specifically > > limited to a small set of desired types, as in the case of a request > for > > an in-line image.". > > It can limit a range of *understood* formats if it knows that some > don't make sense in a particular context, such as inline images. > OK, but then you go on to say: > > > Without specifying the appropriate Accept header for each request, the > > same representation will be returned. > > If the client understands both and advertises both in the Accept > header, and if the server considers the two representations equal, > then it doesn't matter at all. > .. which is completely inconsistent. Why would there be a mechanism to limit understood formats if "the server considers the two formats equal" and "it doesn't matter at all"? > > How did you come to your understanding of the purpose of Accept? > Common sense. Any other explanation is not logical. If it was common sense, why is it not in the spec? > > Why do you want to give two *representations* of one resource, two > > separate *resource* identifiers? > > Because my definition of resource is not anal. Anything can be a > resource, as a resource is a thing with a URI. RPC endpoints are 'resources'? > > > One answer to that might be that you need to link explicitly to them > and > > there's currently no way to indicate the necessary control data with > > (HTML) hyperlinks. That is true, and is the reason I brought this up. > > If I want to link explicitly to two things, they're by definition > distinct, and as such I will give them two URIs. So you are implying that multiple representations of a resource should not be distinct from one another? That doesn't make sense. Resources are distinguished by URIs, representations are distinguished by control data.
> > > What are the other reasons you have for violating the distinction > > between resource and representation? > > I don't violate anything. > > I assume that you're overly focused on keeping the same URI for > documents that are vastly different, and in doing so you see conneg as > not being sufficient for what you're trying to achieve. I say that > what you're trying to use conneg for is not what it was designed for. OK - could you please explain your reasons for saying that? - Mike
On Mon, Sep 21, 2009 at 5:46 PM, Mike Kelly <mike@...> wrote: > Andrew Wahbe wrote: > >> >> >> I can see link headers being useful if you need to use a representation >> format that doesn't support linking. But preferring HTTP link headers over >> links in your hypermedia format seems backwards to me. It goes against the >> established practice of the Web, doesn't it? >> > > I don't think the link header 'goes against' HTTP. It is an extension, > though. > > >> Pulling the links out of the hypermedia and into the protocol headers >> makes it harder to use your hypermedia format with other protocols and is in >> conflict with some of the design principles of the web, exemplified by URI, >> no? >> >> > > As above, I don't think the link header is in conflict with HTTP or URI. > > If the link header is used to represent relationships from one resource to > another, this should make messages more self-descriptive which in turn makes > layering (i.e. intermediaries) more powerful. That's a good thing for > RESTful HTTP, I think. > > - Mike > I'm not sure if you understand my points. I am not saying that the concept of link headers goes against HTTP, URI, the Web in general, open standards, world peace, intergalactic treaties or anything else. 8-P In general, link headers are good! Using them when the representation format doesn't support links is a really good thing! Using them to supplement link information that is in a hypermedia representation format does make the message more self-descriptive and could be very useful when layers/intermediaries don't understand your hypermedia format. I agree with what you are saying. The thing that I have an issue with is the notion that link headers should be used *instead of* putting links in your hypermedia representation. Or that the practice of putting links in headers should be *preferred over* putting links in your hypermedia representation. That is bad IMO.
This doesn't "go against HTTP" -- I said it goes against "the established practice of the Web". The Web is more than URI and HTTP (I know... shocking isn't it?!!). There's that hypermedia thing too, e.g. HTML. Here's a question: on the Web today, is it more common to see links in a) HTTP headers or b) the HTML body? If you said "b", then you agree with my point! I also think it goes against the design principles of the Web. It is my understanding that the Web standards were designed in a way that allows the interchange and separate evolution of hypermedia formats and protocols. HTML can be used with HTTP, FTP and other protocols yet to be invented. HTTP can be used with HTML, SVG, VoiceXML, Atom, etc. URI is the linchpin as HTML (and other hypermedia formats) only depend on URI for references and URIs are extensible to many protocols. When you put your links in your HTTP headers rather than the hypermedia body, you prevent the application state machine from being driven by hypermedia alone and make it dependent on HTTP. This design principle has been discussed by the editors of the Web's specifications and is embodied in those specifications. Larry Masinter wrote the following in an article on the orthogonality of specifications ( http://www.w3.org/QA/2009/06/orthogonality_of_specification.html): While HTTP is the current "common denominator" protocol that all web agents are expected to speak, the web should continue to work if web content is delivered by other protocols -- FTP, shared file systems, email, instant messaging, and so forth. HTTP as it has evolved has severe difficulties, and designing a Web that *only works* with HTTP as it is currently implemented and deployed would be unfortunate. We should work harder to reduce the dependencies and isolate them.
Roy Fielding wrote the following in his "REST APIs must be hypertext driven" post (http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven ): A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.] The HTTP RFC is following this principle when it requires that when creating a resource, the 201 Created response should not only have a Location header, but also a body that refers to the new resource: If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header (see section 14.30). In summary, it must be possible for the application state machine to be driven by hypermedia alone. Removing links from the representation body and instead putting them in HTTP headers prevents this and creates a dependency on the HTTP protocol that hinders the evolution of the Web. Regards, Andrew Wahbe
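Andrew's summary, that the state machine should be drivable by hypermedia alone, can be illustrated with a toy client that hard-codes only the entry URI and link relations; every other URI is discovered from the representations themselves rather than from headers or URI templates. Everything here (the canned documents, the `links` key) is invented purely for the sketch.

```python
# Canned in-memory "server": each representation carries its own links.
REPRESENTATIONS = {
    "/": {"links": {"publications": "/smith/publications"}},
    "/smith/publications": {"links": {}, "items": ["paper-1", "paper-2"]},
}

def get(uri):
    """Stand-in for an HTTP GET; returns the parsed representation."""
    return REPRESENTATIONS[uri]

def follow(doc, rel):
    """Move to the next application state by following a link relation."""
    return get(doc["links"][rel])

entry = get("/")                      # the single bootstrap URI
pubs = follow(entry, "publications")  # URI discovered, never constructed
print(pubs["items"])                  # ['paper-1', 'paper-2']
```

If the `publications` link were moved out of the body into a protocol header, this client would acquire a dependency on that protocol, which is exactly the objection raised above.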
> .. which is completely inconsistent. Why would there be a mechanism to > limit understood formats if "the server considers the two formats equal" and > "it doesn't matter at all"? It wouldn't make sense to serve images and word documents from the same URI, as they're not the same resource. AKA an image is "an image representation of x" while a document is "a description of x". The reason why UAs may do so is to ensure that when requesting something only understanding images, the failure happens quicker when the document is of another type. It saves retrieving the data, passing it into an image processor to then realize it was a mistake. > If it was common sense why is it not in the spec? Because practice shows what is useful in a spec, what is practical, and what doesn't fit with the rest of the architecture. Then again, most of those current practices have been discussed on this mailing list before. > RPC endpoints are 'resources' ? Why wouldn't they be? They're things, "an endpoint", and they have a URI. I fail to see why it's such a big deal. > > If I want to link explicitly to two things, they're by definition > distinct, and as such I will give them two URIs. > > So you are implying that multiple representations of a resource should > not be distinct from one-another? That doesn't make sense. > > Resources are distinguished by URIs, representations are distinguished > by control data. Where have I implied that? You're just arguing semantics, and still missing my point. A resource is a thing with a URI, or to put it in ReST terms, a conceptual mapping to a thing. A representation is a binary stream of data that "represents" the thing. Very often, the resource and the representation *will* be the same, and two resources will exist that are *about* a thing (such as a web page showing a customer, and an xml stream with customer data).
The current web architecture (or ReST for that matter) does not state that every thing you can access about a thing should be a representation. As you've discovered, if it was the case, we couldn't link to things that are important enough to be linked to. > > I assume that you're overly focused on keeping the same URI for > > documents that are vastly different, and in doing so you see conneg as > > not being sufficient for what you're trying to achieve. I say that > > what you're trying to use conneg for is not what it was designed for. > > OK - Please could you explain your reasons for saying that I just did, but as this same conversation has been had so many times on this list, I'll simply quote Roy: We encourage resource owners to only use true content negotiation (without redirects) when the only difference between formats is mechanical in nature.
Mike Kelly wrote: > > > António Mota wrote: > > Mike Kelly wrote: > > > > > >> The human web (of documents?) had the luxury of being able to put > >> resource relationships/hyperlinks in context via natural language and > >> symbolism; this is something that is not afforded by the machine web > >> > > Not at the current stage of development, but with so much work being > > done in Semantic Web and even in AI, that gap will undoubtedly narrow, > > so designing for a "machine web" should aim in that direction also. > > > > Possibly. It could also simply lead to the death of the human web, and > leave us with smarter/dynamic UAs that allow users to apply contexts to > the machine web. > This is actually one of my biggest worries, that we will lose key features of the Web like linking when we move to Rich client applications. This is why we have to really push UI developers to use REST when designing their applications. Otherwise, the web is just going to turn solely into a means to distribute applications. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Andrew Wahbe wrote: > In summary, it must be possible for the application state machine to be > driven by hypermedia alone. Removing links from the representation body > and instead putting them in HTTP headers prevents this and creates a > dependency on the HTTP protocol that hinders the evolution of the Web. > This is an incredibly insightful observation and something I did not think of. So your point is that Link headers should not be used *instead* of hypermedia links if possible. Sounds like a very good recommendation and guideline. BTW, can this guideline (and warning) be added to the Link header RFC? Or is that something that is not usually done within these specifications? Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I have a case that, at least superficially, seems similar to what Google solves for with their ClientLogin API for installed applications: http://code.google.com/apis/accounts/docs/AuthForInstalledApps.html So at first glance I'm tempted to create a custom authentication that is vaguely similar to what is described at that link. But... how RESTful is this, really? Some concerns: - Although POSTing to ClientLogin notionally creates a "resource", this resource has no URI. It is identified by the Auth token, which is not a URI. There is no way to GET it or DELETE it. - These resources (not even sure what to call them) need to be stored somewhere. While I could conceive of an implementation that kept them in memory, like sessions in a stateful web app, this would of course violate the statelessness constraint and mess with load-balancing in all the usual ways. I assume Google has plenty of room to store millions of these things on their servers. - Does this whole thing *inherently* count as stateful regardless of how they are stored? Does the very idea of forcing the client to "log in" before they start making requests for real data violate the constraint no matter how you implement it? - There is a whole set of custom error codes that are embedded in the representation of a failed login; would these be better handled as response codes (although the idea of custom response codes doesn't sound too good either... a violation of HTTP?), or in a header? Does it matter? Am I missing anything?
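One way to address the first concern above (a token that cannot be GETted or DELETEd because it has no URI) is to mint each token as an addressable resource, so the client can revoke it with a plain DELETE. This is only a sketch of an alternative design, not how Google's ClientLogin actually works; all names are invented, and the in-memory dict stands in for whatever shared store a real, load-balanced deployment would need.

```python
import secrets

# token id -> owner; hypothetical store (in practice a shared backend,
# not process memory, to avoid the load-balancing problem noted above)
TOKENS = {}

def create_token(user):
    """POST handler sketch: mint a token and expose it at its own URI."""
    tid = secrets.token_hex(8)
    TOKENS[tid] = user
    # 201 Created with a Location the client can later GET or DELETE
    return 201, {"Location": f"/tokens/{tid}"}

def delete_token(tid):
    """DELETE handler sketch: revoking the token is just deleting it."""
    return (204, {}) if TOKENS.pop(tid, None) else (404, {})
```

Whether this removes the *inherent* statefulness the third bullet asks about is a separate question; it only makes the state visible and manipulable through the uniform interface.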
On Sep 22, 2009, at 9:16 PM, Mike Kelly wrote: > What are the other reasons you have for violating the distinction > between resource and representation? I don't think one can talk about "violating the distinction". Unless one representation could be transformed into the other automatically (i.e. when it's really only a matter of syntax), this is always a judgement call. I think it can be compared to a design decision about whether or not to put some behavior into two classes instead of one in an OO design, or whether to store data in one or two files. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ ** Neues Buch "REST und HTTP": http://rest-http.info **
I don't think this conclusion is valid. Headers are part of a "representation". More importantly, the Link header has a well-specified meaning, which is not the case with application-specific formats. Today, the formats that have well-specified meaning for links are Atom and HTML. For any other case, applications ought to define what links look like. Subbu On Sep 22, 2009, at 5:11 PM, Bill Burke wrote: > > > Andrew Wahbe wrote: > > In summary, it must be possible for the application state machine > to be > > driven by hypermedia alone. Removing links from the representation > body > > and instead putting them in HTTP headers prevents this and creates a > > dependency on the HTTP protocol that hinders the evolution of the > Web. > > > > This is an incredibly insightful observation and something I did not > think of. So your point is that Link headers should not be used > *instead* of hypermedia links if possible. Sounds like a very good > recommendation and guideline. BTW, can this guideline (and warning) be > added to the Link header RFC? Or is that something that is not usually > done within these specifications? > > Bill > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
Subbu Allamaraju wrote: > I don't think this conclusion is valid. Headers are part of a > "representation". More importantly, Link header has a well-specified > meaning which is not the case with application-specific formats. > Today, the formats that have well-specified meaning for links are Atom > and HTML. For any other case, applications ought to define how links > looks like. > > Subbu > +1 Also - is it always a Bad Thing to restrict a particular RESTful system to only accommodate protocols capable of hypermedia in their control data? - Mike
On Tue, 22 Sep 2009 14:27:03 +0100 Mike Kelly wrote: > > HTML has a type attribute for hyperlink elements such as script, > style, anchor - it is used to advise on the media type that should be > expected; but it is not, for some reason, intended to affect the > Accept header for requests to the corresponding URI. > Right, and it should stay that way. It's just a hint. An HTML page containing links to itself of different types indicates that content negotiation is present. Why would, say, a Web browser change its Accept header based on the fact that another representation is in application/atom+xml, which the browser also accepts? -Eric
Houghton,Andrew wrote: >> From: rest-discuss@yahoogroups.com [mailto:rest- >> discuss@yahoogroups.com] On Behalf Of Mike Kelly >> Sent: Tuesday, September 22, 2009 03:58 PM >> To: Sebastien Lambla >> Cc: rest-discuss@yahoogroups.com >> Subject: Re: [rest-discuss] Content negotiation and hypermedia >> >> OK, but then you go on to say: >> >> >>>> Without specifying the appropriate Accept header for each request, >>>> >> the >> >>>> same representation will be returned. >>>> >>> If the client understands both and advertises both in the Accept >>> header, and if the server considers the two representations equal, >>> then it doesn't matter at all. >>> >>> >> .. which is completely inconsistent. Why would there be a mechanism to >> limit understood formats if "the server considers the two formats equal" and >> "it doesn't matter at all"? >> > > No it is not inconsistent. Inconsistent because if the Accept header already lists understandable media types, and representations of a resource provided by the server are entirely interchangeable - why would any UA ever want to specify away from their default? There should be no need. > Let's say a UA sends the Accept header: > > Accept: image/gif, image/jpg > > The HTTP protocol returns a > single entity and the Accept header tells the server which media types > the UA understands. The Accept header tells the server which media types are appropriate for a response to *a request*, not a UA. That is all that the spec says! :) > When the UA says it understands multiple media > types, the server gets to choose based on the Q values. It doesn't > matter which media type is returned because the UA explicitly said > that it could handle *either* media type. But it might matter in the context of a particular application, yet in terms of the entire system the only distinction between the two is the serialization of the *same* concept/resource. > Think of the Accept header > as a contract between the UA and the server.
The UA sets the terms in > the Accept header and the server fulfills the requirements, if it can. > When the UA wants to limit what media type comes back to a single media > type, then the UA should specify only that media type in the Accept > header and the server will return it, if it can, or the server will > respond with 406 Not Acceptable status if it cannot. > I agree with that. My question is whether hypermedia formats should be able to encourage a UA to be specific in certain contexts. >> So you are implying that multiple representations of a resource should >> not be distinct from one-another? That doesn't make sense. >> >> Resources are distinguished by URIs, representations are distinguished >> by control data. >> > > Representations are not distinguished by control data, that is a service > oriented perspective. The Vary mechanism in HTTP operates on this principle: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44 "The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation. For uncacheable or stale responses, *the Vary field value advises the user agent about the criteria that were used to select the representation*" > All resources have > identity which is represented by a URI (RFC 3986). All representations > have a URI because they are resources unto themselves. > If a thing is a resource unto itself, it isn't a representation. > The linked data specification makes a distinction between resource types. > There are Real-World Objects (RWO), Generic Documents (GD) and Web > Documents (WD). RWO objects are resources. I might have a RWO URI > which identifies me. I might also have a GD URI where a UA can content > negotiate something *about* me. Lastly, I probably will have zero or > more WD that are *about* me in a specific representation (format).
> Understanding these aspects is important. > Important to what..? Maybe to Linked Data, but this discussion is about RESTful HTTP. > RWO <http://example.org/smith> > GD <http://example.org/smith/> > GD <http://example.org/smith/picture> > WD <http://example.org/smith/picture.gif> > WD <http://example.org/smith/picture.jpg> > WD <http://example.org/smith/about.html> > GD <http://example.org/smith/publications> > WD <http://example.org/smith/publications.html> > WD <http://example.org/smith/publications.xml> > > > It is possible that WD resources could be content negotiated by using > the GD URI. In this case the representations delivered through content > negotiation are still resources, but they do not have any public identity, > e.g., URIs, however they still have some sort of identity otherwise you > couldn't find them during content negotiation. > Of course you could find them - that is the purpose of conneg! The Vary mechanism is designed so that the server can explain to the client (and intermediaries) what criteria from the request were essential to a given representation's negotiation. > In the above example I gave, having non-public identity for WD resources > is not desirable since there are multiple HTML representations. When a UA > says Accept: text/html which resource will be returned? Information about > me, about.html, or information about my publications, publications.html? > This is why resource identification is important to REST. Your 'about' resource and 'publications' resource are not the same resource, so they are not equivalent representations; therefore they cannot be conneg'd and should indeed have separate URIs. As a different example: publications.html and publications.xml do not require separate URIs - and can be effectively negotiated via the Accept header and treated appropriately as one resource with one /publications URI.
Whether the representation is html or xml, the response contains a representation of a resource which lists your publications. In certain contexts of an application it may be necessary to specify what the media type preference for a hyperlink should be: <a type='text/html' href='/smith/publications'>html page listing my publications</a> <a type='application/atom+xml' href='/smith/publications'>atom feed of my publications</a> Those 2 representations are both generated by the same mechanism, and from the same datastore - they are representations of the same resource. Thank you for your detailed response, the linked data specification seems interesting. Cheers, Mike
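The /smith/publications example can also be sketched from the server's side: one URI, two mechanically different serializations selected by Accept, with `Vary: Accept` telling caches which request header drove the choice, and a 406 when nothing matches. A framework-free, illustrative sketch only; the data and the simplistic substring matching are invented (a real server would parse q-values properly).

```python
PUBLICATIONS = ["paper-1", "paper-2"]

def render(accept_header):
    """Return (status, headers, body) for GET /smith/publications."""
    if "application/xml" in accept_header:
        body = ("<publications>"
                + "".join(f"<p>{p}</p>" for p in PUBLICATIONS)
                + "</publications>")
        ctype = "application/xml"
    elif "text/html" in accept_header or "*/*" in accept_header:
        body = "<ul>" + "".join(f"<li>{p}</li>" for p in PUBLICATIONS) + "</ul>"
        ctype = "text/html"
    else:
        return 406, {}, ""  # Not Acceptable: no usable representation
    # Vary advises caches/UAs that Accept selected this representation
    return 200, {"Content-Type": ctype, "Vary": "Accept"}, body
```

The point of contention in the thread is only whether a hyperlink's `type` attribute should be allowed to steer which branch this server takes; the server side itself is uncontroversial conneg.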
Eric J. Bowman wrote: > On Tue, 22 Sep 2009 14:27:03 +0100 > Mike Kelly wrote: > > >> HTML has a type attribute for hyperlink elements such as script, >> style, anchor - it is used to advise on the media type that should be >> expected; but it is not, for some reason, intended to affect the >> Accept header for requests to the corresponding URI. >> >> > > Right, and it should stay that way. It's just a hint. An HTML page > containing links to itself of different types, indicates that content > negotiation is present. Why would, say, a Web browser change its > accept header based on the fact that another representation is in > application/atom+xml, which the browser also accepts? > If the representation must be conneg'd via HTTP - how else would the browser indicate that its circumstantial preference is Atom? It has to change the Accept header - otherwise it will never negotiate anything other than its natural default preference for html. - Mike
Stefan Tilkov wrote: > > > On Sep 22, 2009, at 9:16 PM, Mike Kelly wrote: > >> What are the other reasons you have for violating the distinction >> between resource and representation? > > I don't think one can talk about "violating the distinction". Unless > one representation could be transformed into the other automatically > (i.e. when it's really only a matter of syntax), this is always a > judgement call. I think it can be compared to a design decision about > whether or not to put some behavior into two classes instead of one in > an OO design, or whether to store data in one or two files. > I absolutely agree with this. However, in practice, there is currently no choice - even if your judgment is that HTTP conneg is preferential - because there is no standardised mechanism to provide hyperlinks which leverage HTTP conneg. Adding significance to the type attribute so it is considered advice for the Accept header of the request will open this up and offer the choice, and at the same time it will not negatively impact anyone who continues to use URI conneg, since they don't care about the Accept header anyway! - Mike
On Wed, Sep 23, 2009 at 3:34 AM, Subbu Allamaraju <subbu@...> wrote: > I don't think this conclusion is valid. Headers are part of a > "representation". More importantly, the Link header has a well-specified meaning > which is not the case with application-specific formats. Today, the formats > that have well-specified meaning for links are Atom and HTML. For any other > case, applications ought to define what links look like. > > Subbu > > > On Sep 22, 2009, at 5:11 PM, Bill Burke wrote: > > >> >> Andrew Wahbe wrote: >> > In summary, it must be possible for the application state machine to be >> > driven by hypermedia alone. Removing links from the representation body >> > and instead putting them in HTTP headers prevents this and creates a >> > dependency on the HTTP protocol that hinders the evolution of the Web. >> > >> >> This is an incredibly insightful observation and something I did not >> think of. So your point is that Link headers should not be used >> *instead* of hypermedia links if possible. Sounds like a very good >> recommendation and guideline. BTW, can this guideline (and warning) be >> added to the Link header RFC? Or is that something that is not usually >> done within these specifications? >> >> Bill >> >> Subbu, The REST thesis is a little confusing on the point of whether headers are part of the "representation". Please refer to table 5-1 which provides an example for each type of data element:
resource => the intended conceptual target of a hypertext reference
resource identifier => URL, URN
representation => HTML document, JPEG image
representation metadata => media type, last-modified time
resource metadata => source link, alternates, vary
control data => if-modified-since, cache-control
This implies that headers are resource/representation metadata or control data. I would say that the link header falls into the resource metadata bucket.
But then section 5.2.1.2 goes on to say that: A representation is a sequence of bytes, plus representation metadata to describe those bytes. Other commonly used but less precise names for a representation include: document, file, and HTTP message entity, instance, or variant. A representation consists of data, metadata describing the data, and, on occasion, metadata to describe the metadata (usually for the purpose of verifying message integrity). Metadata is in the form of name-value pairs, where the name corresponds to a standard that defines the value's structure and semantics. Response messages may include both representation metadata and resource metadata: information about the resource that is not specific to the supplied representation. It seems that the term "representation" is used to refer to both the HTTP message body as well as the combination of body and headers. So it's not completely clear what sense of representation is intended by statements in section 5.3.3 such as: The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations. (I'm not sure if there are well known writings on this by Roy or others that clarify this though.) That said, I actually agree with you that links can reside in the headers, i.e. that the broader sense of "representation" applies. But I don't think that all parts of the representation are equal. Specifically, headers are tied to the protocol while the document/body is not. I highlighted this key differentiator in my previous post. So some consideration should be given as to what part of the representation contains specific pieces of information. You certainly must use link headers if (for whatever reason) you are using a representation format that does not support links (e.g. an image file).
I also agree with the notion that link headers can reiterate links in the
body to inform intermediaries that don't understand the body format. The
issue under debate is whether links in headers or in the body should be
preferred when (a) the format being used supports linking and (b) a new
format is being designed. You can obviously put them in both places, but
if you have to put them in one place, which one should it be?

From what you say above, my guess is that for (a) it's the body: if you
are using a format like Atom or HTML or another format where links are
well-defined, you should use that mechanism. This ensures that you are not
dependent on HTTP to drive the application state machine.

For (b), what is your position? Are you saying that the link header should
be preferred because it's already well-defined? Following that logic leads
to the conclusion that all new hypermedia formats should defer linking to
the link header, which seems strange. Are you saying that you should use
both, to support clients/intermediaries that don't yet understand your new
format? If that is the case, then I think this is reasonable to the extent
it can be achieved (links that don't apply to the whole body may be hard
to represent in the headers).

Finally, what exactly do you mean by "application-specific" formats? Do
you mean something like a JSON data serialization of an
application-specific data type? This notion of a "typed resource" seems
rather non-RESTful to me, as discussed by Roy here:

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

Are you implying that the link header is a vehicle through which
application-specific formats can be made RESTful?

Regards,

--
Andrew Wahbe
Hi,
I recently asked a simple REST question in stackoverflow:
http://stackoverflow.com/questions/1291278/different-resource-representations-rest-api
What shocked me was this comment:
"If you are specifying a URI naming scheme in your API (like /app/person/{id}) then your API is RPC, not REST."
I thought I knew REST, but this comment (and a lot of people who supported it) got me to think if I really did...
Is that true? From what they (the people on that thread) say, the client should be unaware of the URLs, kinda "spidering" the API through its linked resources rather than directly making calls to a specific URL.
If this is the case, shall we rather say that there are no RESTful APIs, only RESTful clients of a particular API?
Why do so many people (like myself, if the things stated above are true) have such a misconception of REST?
> Is that true? for what they (people on that thread) say, the client
> should be unaware of the URL's, kinda "spidering" the API through
> its linked resoruce rather than directly making the calls to an
> specific URL.

Ideally, the server is "in charge" of the URLs, not the client. That's why
it is frowned upon to "know how" to build the URLs. Whatever knowledge you
have encoded in your application about URLs can change behind your back,
thus breaking your application.

For example, do you know the structure of the URLs for Amazon? Some may
think they do, but most folks simply don't, because they just follow links
to get things done. The URLs are meaningless to them. If/when Amazon
changes their URLs and the page layout itself does not change, the user
isn't impacted at all in using Amazon's system. If your application is
designed to "follow" URLs rather than make them, then your application has
a similar level of robust behavior.

> If this is the case.. Shall we rather say that there's no RESTful API's
> but RESTful clients of a particular API?

The API is the combination of data types and how to interpret those data
types for your application. If your application is going to follow URLs,
it will need to know what parts of the payloads are the URLs to follow.
That knowledge is part of the API, not the URLs themselves.

Now, obviously, in the real world, as much as the goal is to have opaque
URLs, there is likely some leakage of construction rules, if for no other
reason than to start the whole process chain: to know what initial URL to
send to start the work. So, in those cases, the URLs are important. But
once started, they shouldn't be.

> Why many people (like myself, in the case that things stated above are
> true) have such a misconception of REST?

Because most people stopped learning about REST when they saw "Oh, send
and receive data over HTTP using 'pretty' URLs. How hard can it be?" and
then all the toolkits worked on making "pretty URLs" easier.
When in fact the URLs are an implementation detail, secondary to the
overall architecture. Have you read the dissertation? It's quite a bit
longer than what would be necessary to simply promote RPC via pretty URLs.
If you haven't read it, you should.

http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

Subbu's article on InfoQ will probably be eye-opening as well:

http://www.infoq.com/articles/subbu-allamaraju-rest

Regards,

Will Hartung
(willh@...)
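Will's "follow links rather than make them" advice can be sketched in a few lines. This is a hypothetical illustration: the representation shape (a dict with a "links" list) and the rel names are invented for the example, not part of any real API.

```python
# A client that locates URLs by link relation ("rel") instead of
# constructing them. The server stays in charge of the URL scheme.

def find_link(representation, rel):
    """Return the href of the first link with the given rel, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

# A sample entry-point representation the server might return.
home = {
    "links": [
        {"rel": "people", "href": "http://example.com/people"},
        {"rel": "search", "href": "http://example.com/search"},
    ]
}

# The client only hard-codes the rel name; if the server later moves
# the collection to a different URL, the client keeps working.
people_url = find_link(home, "people")
```

If Amazon-style URL reshuffling happens behind this client's back, only the entry-point URI and the rel vocabulary need to stay stable.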
What you described as spidering is hypermedia as the engine of
application state (HATEOAS). Complying with this constraint puts a higher
dependence on your media-type and link rel types so you'll want to check out
how to use these. Take a look at the Accept header for both the request and
response since this is where content-negotiation takes place.
Hopefully the discussion about URI opacity makes sense. Practically
speaking, you shouldn't be parsing a URL for meaning, i.e. looking for the
word "person" in the URI and constructing a Person object; instead you
should be relying on the rel tag or the media type you can accept. Roy's
point about serendipitous reuse is spot on. Favor human-readable URIs, but
don't rely on them.
I think your realization about RESTful clients is spot on.
To your last point: how long did it take you to understand RPC (over RMI
or COM) and then actually use it in anger? It takes time and the
subtleties will get you from time to time. Don't worry, you'll get it.
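Noah's suggestion to key client behavior off the media type rather than off words parsed from the URL might look like the sketch below. The vendor media types and the handler table are invented for illustration.

```python
# Dispatch on the response Content-Type instead of inspecting the URI
# for tokens like "person". The URI stays opaque to the client.

HANDLERS = {
    "application/vnd.example.person+xml": lambda body: ("person", body),
    "application/vnd.example.account+xml": lambda body: ("account", body),
}

def dispatch(content_type, body):
    # Strip parameters such as "; charset=utf-8" before the lookup.
    media_type = content_type.split(";")[0].strip()
    handler = HANDLERS.get(media_type)
    if handler is None:
        raise ValueError("unsupported media type: " + media_type)
    return handler(body)
```

The same table doubles as the client's Accept list, which is where content negotiation ties in.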
On Thu, Sep 24, 2009 at 12:52 PM, pablo.fernandez@... <
fernandezpablo85@...> wrote:
On Sep 24, 2009, at 11:41 PM, Noah Campbell wrote:
> "If you are specifying a URI naming scheme in your API (like /app/
> person/{id}) then your API is RPC, not REST."
I disagree with this; it may be non-RESTful, but that doesn't make it
RPC. Rather, it is an intermediate first step towards a RESTful API (and
much better than the typical SOAP/WSDL HTTP abuse).
Subbu has written a great article on the topic:
http://www.infoq.com/articles/subbu-allamaraju-rest
Best,
Stefan
Den 25. sep. 2009 kl. 08.23 skrev Stefan Tilkov:
>
> On Sep 24, 2009, at 11:41 PM, Noah Campbell wrote:
>
>> "If you are specifying a URI naming scheme in your API (like /app/
>> person/{id}) then your API is RPC, not REST."
>
> I disagree with this; it may be non-RESTful, but that doesn't make
> it RPC - rather some intermediate first step towards a RESTful API
> (and much better than the typical SOAP/WSDL HTTP abuse).
I disagree with this (or, not really :) There is nothing inherently
non-RESTful about such a scheme. But it does play into the hands of
tight coupling between client and server, so in practice it might be
important to emphasize the "unRESTfulness" of it?
It does help with the (in some cases) really important aspect of
bookmarkability of application state; it helps humans remember the
particular application state they visited. I don't know if other people
still think in terms of URLs when they want to find a page they have
visited (I do)?

Is the general consensus that this is no longer important? Is it all
about programmatic clients now?
Jo

Apologies for the top post but Yahoo's message mangling is just too
annoying...
People do associate clean URLs with REST and then start to learn them or
worse, write them into code. Sure it's useful but it can be dangerous when
changes are introduced. I think I've found a happy medium for OCCI where we
plan to derive all URLs from HTTP and/or hypertext on the fly, while
providing a table of proposed values at an RFC2119 requirement level of
"may" or "should" rather than "must".
Hopefully then implementations will largely look and feel the same (which
improves approachability of the API) but clients will tolerate changes.
Sam
On Fri, Sep 25, 2009 at 8:08 AM, Jo Størset <jo.storset@...> wrote:
Hi all,
I have spent a few thoughts on this topic myself. Should one discourage
people from creating cool URIs (http://www.w3.org/Provider/Style/URI),
with the result that they are more likely not to hard-code / deep-link
them?
It somehow feels better to encourage people to create cool URIs, with the
benefit that if clients hard-code / deep-link the URIs, the URIs are less
likely to change (because of their "coolness") and hence they are in a sense
more loosely coupled. But of course, this may result in people never
learning HATEOAS as some of you have pointed out.
What do you think?
Cheers,
Erling
On Fri, Sep 25, 2009 at 12:52 PM, Sam Johnston <samj@...> wrote:
Interesting. Could you elaborate a bit more on the following statement?
"derive all URLs from HTTP and/or hypertext on the fly, while providing
a table of proposed values at an RFC2119 requirement level of "may" or
"should" rather than "must"."
Adolfo
--- On Fri, 9/25/09, Sam Johnston <samj@samj.net> wrote:
From: Sam Johnston <samj@...>
Subject: Re: [rest-discuss] Newbie REST Question
To: "Jo Størset" <jo.storset@...>
Cc: "Stefan Tilkov" <stefan.tilkov@...>, "Noah Campbell" <noahcampbell@gmail.com>, "pablo.fernandez@..." <fernandezpablo85@...>, rest-discuss@yahoogroups.com
Date: Friday, September 25, 2009, 5:52 AM
On Fri, Sep 25, 2009 at 8:17 AM, Erling Wegger Linde <erlingwl@...> wrote:
>
> I have spent a few thoughts on this topic myself. Should one
> discourage people to create cool URIs
> (http://www.w3.org/Provider/Style/URI) with the result that they are
> more likely not to hard-code / deep-link them?

There is nothing unRESTful about deep-linking. URIs are how you identify
resources in a REST/HTTP system. It is perfectly reasonable for clients
to want/need to store references to some resources. A client that does so
needs to understand redirections when dereferencing stored URIs, though.

> It somehow feels better to encourage people to create cool URIs,
> with the benefit that if clients hard-code / deep-link the URIs, the
> URIs are less likely to change (because of their "coolness") and
> hence they are in a sense more loosely coupled. But of course, this
> may result in people never learning HATEOAS as some of you have
> pointed out.

URI construction is something that should be avoided for all the reasons
already mentioned in this thread. However, giving up "cool URIs" as a way
to reduce the possibility of URI construction would be a very unfortunate
choice.

--
Peter Williams
http://barelyenough.org
Sam Johnston wrote:
> People do associate clean URLs with REST and then start to learn them
> or worse, write them into code. Sure it's useful but it can be
> dangerous when changes are introduced. I think I've found a happy
> medium for OCCI where we plan to derive all URLs from HTTP and/or
> hypertext on the fly, while providing a table of proposed values at an
> RFC2119 requirement level of "may" or "should" rather than "must".

One thing that clicked for me: once you start thinking of your URLs as
opaque and using links to abstract the URL scheme away, URL schemes just
become an implementation detail. Because your clients are following
links, you can refactor this URL scheme to fit your needs as your system
evolves.

Has anybody written an article about the progression of a REST noob to a
REST veteran?

Bill

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Fri, Sep 25, 2009 at 3:44 PM, Peter Williams <pezra@...> wrote:
>
> On Fri, Sep 25, 2009 at 8:17 AM, Erling Wegger Linde <erlingwl@...> wrote:
> >
> > I have spent a few thoughts on this topic myself. Should one
> > discourage people to create cool URIs
> > (http://www.w3.org/Provider/Style/URI) with the result that they are
> > more likely not to hard-code / deep-link them?
>
> There is nothing unRESTful about deep-linking. URIs are how you
> identify resources in a REST/HTTP system. It is perfectly reasonable
> for clients to want/need to store references to some resources. A
> client that does so needs to understand redirections when
> dereferencing stored URIs, though.

I didn't say it was unRESTful. But if a client
bookmarks/deep-links/hard-codes URIs, then I would say it is more tightly
coupled to the server than if it just kept one root URI and then used
HATEOAS/followed hyperlinks from there to get to the deeper resources.
Do you agree? If the server can redirect the client, that is nice, of
course.

> URI construction is something that should be avoided for all the
> reasons already mentioned in this thread. However, giving up "cool
> URIs" as a way to reduce the possibility of URI construction would be
> a very unfortunate choice.
>
> --
> Peter Williams
> http://barelyenough.org

- Erling
Peter Williams wrote:
> URI construction is something that should be avoided for all the
> reasons already mentioned in this thread. However, giving up "cool
> URIs" as a way to reduce the possibility of URI construction would be
> a very unfortunate choice.

What about publishing Link + templates instead of href? That would allow
clients to use PUT instead of using POST on a "factory" resource, thus
allowing creation to be idempotent. Really I'm talking about refining
resource creation.

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Fri, Sep 25, 2009 at 8:55 AM, Erling Wegger Linde <erlingwl@...> wrote:
> I didn't say it was unRESTful. But, if a client
> bookmarks/deep-links/hard-codes URIs, then I would say it is more
> tightly coupled to the server than if it just kept one root-URI and
> then used HATEOAS/followed hyperlinks from there to get to the deeper
> resources.. Do you agree?

No. When following a link extracted from representations there is always
a delay between when the server generated the URIs and when the client
dereferences them. This interval can be quite large, even for clients
that are not bookmarking, because of caching. Storing those URIs for
later use is, in some respects at least, just an extension of the delays
between URI generation and dereference that always exist in a REST
system.

With normal client execution and caching, the server does have some say
over the lifetime of URI visibility. However, this view requires an
assumption of well-behaved clients. Even with clients that follow the
rules, the expected lifetime of a URI cannot be relied on, due to things
like computers being put to sleep, clients getting jammed for long
periods, etc.
All in all, I think it is much safer for a system to assume that once a
URI has been handed to a client, it could be dereferenced at any time in
the future. The act of removing support for a previously issued URI
should require an affirmative defense and should usually be handled with
a 410 Gone response. Redirection is continued support, in my mind.

--
Peter Williams
http://barelyenough.org
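Peter's guideline (redirection is continued support; 410 is affirmative retirement) amounts to a status-code policy a client can encode directly. The function below is a hypothetical sketch of such a policy; the action names are invented.

```python
# Decide what to do with a stored/bookmarked URI based on the status
# code the server returns when it is dereferenced later.

def classify_stored_uri(status):
    """Map an HTTP status code to a bookmark-handling action."""
    if status in (301, 302, 307, 308):
        return "follow-redirect"   # redirection is continued support
    if status == 410:
        return "forget"            # affirmatively retired: drop the bookmark
    if status == 404:
        return "retry-later"       # ambiguous: could be transient
    if 200 <= status < 300:
        return "ok"
    return "error"
```

A 404 deliberately does not forget the bookmark, since it carries no promise that the resource is gone for good.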
On Fri, Sep 25, 2009 at 7:44 AM, Peter Williams <pezra@...> wrote:
>
> URI construction is something that should be avoided for all the
> reasons already mentioned in this thread. However, giving up "cool
> URIs" as a way to reduce the possibility of URI construction would be
> a very unfortunate choice.

Yea, that's kind of a baby/bathwater thing. "Cool" URIs I think will
naturally happen simply because of the nature of the architecture. It
will also happen because developers are the ones who must create these
things, code them, and type them in. While at an API and architectural
level URIs are opaque, at an implementation level they're most certainly
not.

If someone deep links, then, truthfully, that's an unsupported client
dependency. While it would be polite to keep track of old URIs, to
redirect if necessary, etc., after a time that becomes legacy cruft that
will likely simply be removed from the system. You could argue that with
v1 of a scheme, the URI works. On v2, the URI is redirected (if
practical), or 410'd, acknowledging that it once existed, but no longer.
Finally, by v3, the client will simply get a 404 or some other generic,
catch-all response unrelated to the specific URI.

However, this does bring up an interesting chicken/egg problem. At some
root, all these rules are out the door, because at some root, you'll want
to document the entry points into the system. These URIs will very likely
be constructed, and therefore transparent. If for no other reason than
efficiency, it seems unrealistic that an application must re-examine the
API for every invocation. How far does one push it?

For example:

http://example.com/ returns a list of URIs for the various resources it
publishes. One is the person/people collection.
http://example.com/people returns a list of everyone in the system, but
that's absurd for many cases, so there's a search facility:

http://example.com/people?userId=1234 returns a collection of all people
with a userId of 1234 (in this case, only 1).

This result has a link to get the user details:

http://example.com/person/1234

Now, I don't think it's realistic that an application would need to start
a transaction at the root, "discover" the people resource URI, search for
the user to discover the user's reference URI, and finally call that to
get the actual user data.

So, it's not a 100% rule, it's a guideline that promotes robustness, but
at the end of the day, folks have to use this stuff too.

It may be interesting to publish a document that holds the details of an
API. The client can use that document as a local caching indicator. As
long as the document is unchanged, the client can cache the URIs it
discovers and know it doesn't have to look them up any more. If the doc
does change, then the client can fall back into "discovery" mode (for
example, in this case, crawling from the root of the service again). It
doesn't even have to be a "real" document; it can just be a caching
indicator.

Regards,

Will Hartung
On Fri, Sep 25, 2009 at 4:40 PM, Will Hartung <willh@...> wrote:
>
> So, it's not a 100% rule, it's a guideline that promotes robustness,
> but at the end of the day, folks have to use this stuff too.

It would be very helpful to see examples of client code that follows
HATEOAS practices. When you see code examples for REST they are usually
for the server side.
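As a rough illustration of the kind of client code being asked for, here is a toy HATEOAS-style traversal over Will's example URIs. It walks an in-memory dict standing in for the server (the URIs, rel names, and representation shape are all invented); a real client would issue HTTP GETs and parse representations instead.

```python
# An in-memory stand-in for the server: URI -> representation.
FAKE_SERVER = {
    "http://example.com/": {
        "links": [{"rel": "people", "href": "http://example.com/people"}],
    },
    "http://example.com/people": {
        "links": [{"rel": "item", "href": "http://example.com/person/1234"}],
    },
    "http://example.com/person/1234": {"name": "Alice", "links": []},
}

def get(uri):
    """Stand-in for an HTTP GET returning a parsed representation."""
    return FAKE_SERVER[uri]

def follow(start_uri, *rels):
    """Start at one entry-point URI and follow a chain of link rels.

    The client never constructs a URI itself; every URI after the first
    comes out of a representation the server handed back.
    """
    doc = get(start_uri)
    for rel in rels:
        href = next(l["href"] for l in doc["links"] if l["rel"] == rel)
        doc = get(href)
    return doc

person = follow("http://example.com/", "people", "item")
```

The only things hard-coded in this client are the single entry point and the rel vocabulary; the server can reshuffle every other URI without breaking it.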
Will Hartung wrote:
>
> If for no other reason than efficiency, it seems unrealistic that an
> application must re-examine that API for every invocation.
>
BTW, traditional RPC systems have the same exact issue. How often
should an RPC client ping its Naming Service?
(Somebody already mentioned this.) For REST (+ HTTP), this naming-lookup
frequency is built into the application protocol (Cache-Control).
> How far does one push it?
>
> For example:
>
> http://example.com/ returns a list of URIs for the various resources
> it publishes. One is the person/people collection.
>
This provides a set of Link relationships. A GET on example.com may
return a Cache-Control header stating how long the representation is
valid for, which in turn could mean how long the Link relationships are
valid for.
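The idea that Cache-Control bounds the lifetime of discovered links could be sketched like this. The LinkCache class and its interface are hypothetical; clock values are passed in explicitly so the behavior is easy to verify.

```python
import re

def parse_max_age(cache_control):
    """Extract max-age seconds from a Cache-Control header, or 0 if absent."""
    m = re.search(r"max-age=(\d+)", cache_control or "")
    return int(m.group(1)) if m else 0

class LinkCache:
    """Cache discovered links only as long as the representation they
    came from is fresh."""

    def __init__(self):
        self._entries = {}  # uri -> (links, expires_at)

    def store(self, uri, links, cache_control, now):
        self._entries[uri] = (links, now + parse_max_age(cache_control))

    def get(self, uri, now):
        links, expires = self._entries.get(uri, (None, 0))
        # Stale entries force the client back into "discovery" mode.
        return links if now < expires else None
```

This mirrors how RPC clients decide how often to re-ping a naming service, except the server controls the interval through the protocol.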
> http://example.com/people <http://example.com/people> returns a list of
> everyone in the system,
> but that's absurd for many cases, so there's a search facility
>
> http://example.com/people?userId=1234
> <http://example.com/people?userId=1234> returns a collection of all
> people with a userId of 1234 (in this case, only 1).
>
> This result has a link to get the user details:
> http://example.com/person/1234 <http://example.com/person/1234>
>
> Now, I don't think it's realistic that an application would need to
> start a transaction at the root, "discover" the people resource URI,
> search for the user to discover the Users reference URI, and finally
> calling that to get the actual user data.
>
It is realistic for the application to initially "discover" all the
Links by surfing the resources. This is the same thing that RPC systems
do when interacting with a naming service.
> So, it's not a 100% rule, it's a guideline that promotes robustness,
> but at the end of the day, folks have to use this stuff too.
>
It can and should be a 100% rule. Otherwise you lose the decoupling
attributes of opaque URIs (and REST). Again, RPC systems have the same
exact problem. (Sorry, I know I keep repeating myself.)
Please correct me if I'm wrong, but there is no reason your links can't
publish Templates instead of specific Href URIs. For example, instead
of one "people" link, the link might be:
<link rel="peopleById http://relationships.com/peopleById"
template="http://example.com/people/{userId}" type="application/xml"/>
Then, for the definition of the "peopleById" link relationship it says
it has a template parameter {userId}.
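A link template like the one above needs client-side expansion before it can be dereferenced. Here is a minimal sketch handling only simple {name} substitution; RFC 6570 defines the full URI Template syntax, and real clients may want a proper implementation of it.

```python
import re

def expand(template, **params):
    """Expand {name} placeholders in a URI template with given values."""
    def sub(match):
        name = match.group(1)
        if name not in params:
            raise KeyError("missing template parameter: " + name)
        return str(params[name])
    return re.sub(r"\{(\w+)\}", sub, template)

# The template comes from the link; only the parameter value is local.
uri = expand("http://example.com/people/{userId}", userId=1234)
```

The client still never invents the URI structure: the template itself is handed out by the server, so the scheme stays under the server's control.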
> It may be interesting to publish a document that holds the details of
> an API. The client can use that document as a local caching
> indiciator. As long as the document is unchanged, the client can cache
> the URIs it discovers, and know it doesn't have to look them up any
> more. If the doc does change, then the client can fall back in to
> "discovery" mode (for example, in this case, crawling from the root of
> the service again). Doesn't even have to be a "real" document, it can
> just be a caching indicator.
>
This "magic" document you talk about already exists. It is the document
you received the links from.
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Fri, Sep 25, 2009 at 2:10 PM, Bill Burke <bburke@...> wrote:
> Please correct me if I'm wrong, but there is no reason your links can't
> publish Templates instead of specific Href URIs. For example, instead of
> one "people" link, the link might be:
>
> <link rel="peopleById http://relationships.com/peopleById"
> template="http://example.com/people/{userId}" type="application/xml"/>
>
> Then, for the definition of the "peopleById" link relationship it says it
> has a template parameter {userId}.
>
>
>> It may be interesting to publish a document that holds the details of
>> an API. The client can use that document as a local caching
>> indiciator. As long as the document is unchanged, the client can cache
>> the URIs it discovers, and know it doesn't have to look them up any
>> more. If the doc does change, then the client can fall back in to
>> "discovery" mode (for example, in this case, crawling from the root of
>> the service again). Doesn't even have to be a "real" document, it can
>> just be a caching indicator.
>>
>
> This "magic" document you talk about already exists. It is the document you
> received the links from.
I like all of these. The "home page" of a host could basically be a
summary of the API needed to work with it, and with little work it could
be human readable, usable in a browser, linked to detailed documentation,
and machine-crawlable for discovery of the URIs themselves.
Now just mix in a framework that makes publishing and maintaining that
document easy to coordinate with the resources in your application.
Regards,
Will Hartung
Will Hartung wrote:
> On Fri, Sep 25, 2009 at 2:10 PM, Bill Burke <bburke@...> wrote:
>
> I like all of these. The "home page" of a host could basically be a
> summary of the API needed to work with it, and with little work it
> could be both human readable, usable in a browser, links to detailed
> documentation, and machine crawlable for discovery of the URIs
> themselves.
>
I also like Link headers as a complement to document links. Then you can
just do a HEAD request to get your relationships instead of having to
parse a document.
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
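Bill's template link above can be consumed with a few lines of client code. Below is a minimal sketch of the simple `{userId}`-style expansion from his example; the `expand` helper is hypothetical, and a full URI Templates implementation (what the then-current draft, later RFC 6570, specifies) would also percent-encode substituted values.

```python
import re

def expand(template, **params):
    """Expand a simple URI template by substituting {name}
    placeholders with the supplied values. Values are not
    percent-encoded here, unlike a full implementation."""
    def sub(match):
        name = match.group(1)
        if name not in params:
            raise KeyError("missing template parameter: %s" % name)
        return str(params[name])
    return re.sub(r"\{(\w+)\}", sub, template)

# The template comes from the link the server published, e.g.:
# <link rel="peopleById" template="http://example.com/people/{userId}"/>
link_template = "http://example.com/people/{userId}"
print(expand(link_template, userId=42))
# http://example.com/people/42
```

The key property is that the client never learns the URI structure from documentation; it learns it from the `template` attribute of a link it already retrieved.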
Hi all,

I prefer the term "legible" over "cool" URI. So it's more a psychological/philosophical thing than a technical requirement. I find this interpretation backed by Roy's thesis [1]:

6.2.4 Binding Semantics to URI

As mentioned above, a resource can have many identifiers. In other words, there may exist two or more different URI that have equivalent semantics when used to access a server. It is also possible to have two URI that result in the same mechanism being used upon access to the server, and yet those URI identify two different resources because they don't mean the same thing.

Semantics are a by-product of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI -- they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.

It is the nature of every engineer to define things in terms of the characteristics of the components that will be used to compose the finished product. The Web doesn't work that way. The Web architecture consists of constraints on the communication model between components, based on the role of each component during an application action. This prevents the components from assuming anything beyond the resource abstraction, thus hiding the actual mechanisms on either side of the abstract interface.

Regards,
Nicolai

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_2_4
Hola Pablo.
--- In rest-discuss@yahoogroups.com, "pablo.fernandez@..." <fernandezpablo85@...> wrote:
> What shocked me was this comment:
>
> "If you are specifying a URI naming scheme in your API (like /app/person/{id}) then your API is RPC, not REST."
>
OK, there are several things we must be aware of when using URIs. The first one is that REST is not a way to call methods by encoding method names and parameters in URIs, not even verbs or nouns that represent a specific action. That would be RPC.
The sample URI does not actually represent RPC. If you write
/del/person/{id}, where "del" is a verb telling the server to delete, then you are using RPC. If you write
/app/person/{id}?method=create, then you are using RPC. Got it?
> Is that true? for what they (people on that thread) say, the client should be unaware of the URL's, kinda "spidering" the API through its linked resoruce rather than directly making the calls to an specific URL.
And here comes the second one. URIs are a way to reference a resource, a name that is actually used by the SERVER! Yes, URIs are meaningless to the client. A person may be referenced as /app/person/{id} or as /app/{person_id}. Amazon was used as an example in this thread; well, that is a perfect example. The server can change the URI whenever it wants, and the URI may contain semantics for the server, but not for the client. Clients that use templates to create their URIs are, I would say, not totally RESTful.
Clients do not need to compose URIs, they need to discover them in the payload, which content and type are negotiated, and then use the HTTP semantics to operate against them.
Be careful with semantics, many people think that the content (Hypermedia) you got should tell you the operations. That is under discussion, and I had some ideas still not very mature about the case.
>
> If this is the case.. Shall we rather say that there's no RESTful API's but RESTful clients of a particular API?
There is no such thing as RESTful clients. REST is a style for networked application architectures. The API boom, to me, is just a way to put the REST name on top of a non-REST architecture, by saying "OK, I know my app is built upon a SOA style, but I've just created a REST API..." when all they mean is that they exposed the app to the web using fixed URIs. See the point?
>
> Why many people (like myself, in the case that things stated above are true) have such a misconception of REST?
>
Good question. I once wrote that the regular developer was so tired of SOAP web services that, when someone reading the REST dissertation tried to explain it using a back-and-forth interchange of URIs as an example, everybody thought REST was as easy as defining URIs. REST is NOT easy, it was not conceived as a replacement for SOAP services, and it is NOT suitable for all apps on the web.
The URI guys, I put them into a category named "URI Jugglers" in the thread
http://tech.groups.yahoo.com/group/rest-discuss/message/13422
There are other types; you can go and check how you see REST, to see if you fit in one. Either no one read it, or no one wanted to say which category they fit in, or whether they actually think one of the categories is right.
Saludos!
William Martinez Pomares
http://acoscomp.com/wblog
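William's point -- discover URIs in the payload rather than composing them -- can be sketched against the `<person>` representation from earlier in the thread. The `links` helper below is illustrative only, and using element names as stand-ins for link relations is an assumption of this sketch, not a standard.

```python
import xml.etree.ElementTree as ET

# A representation like the one discussed at the top of the thread;
# the client treats every href value as opaque.
doc = """
<person>
  <firstName>TONINHO</firstName>
  <lastName>METRALHA</lastName>
  <account href="http://localhost:8080/rest/data/bank/accounts/010123101">010123101</account>
</person>
"""

def links(representation):
    """Collect {relation: URI} from href attributes. The element
    name stands in for a link relation in this sketch."""
    root = ET.fromstring(representation)
    return {el.tag: el.get("href") for el in root.iter() if el.get("href")}

rels = links(doc)
print(rels["account"])
# http://localhost:8080/rest/data/bank/accounts/010123101
```

The client then issues GET/PUT/DELETE against the discovered URI; nothing in the client encodes the `/rest/data/bank/accounts/...` structure.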
(Can someone please check if it's possible to turn off the Yahoo rubbish? It's driving me insane when replying.)

> On Sat, Sep 26, 2009 at 12:08 AM, Bill Burke <bburke@...> wrote:
> I also like Link headers as a complement to document links. Then you can just do HEAD to get your relationships instead of having to parse a document.

I see Link headers aka "Web Linking" (the name of the draft) as an alternative to hypertext (e.g. "document links") rather than just a complement to it. That is, I assume both the document itself and the URL referencing it are opaque, which allows me both flexibility in implementation and, more importantly, support for arbitrary formats such as images, videos, virtual machines, etc.

If you don't use the HTTP headers for metadata such as Web Linking <http://tools.ietf.org/html/draft-nottingham-http-link-header> and Web Categories <http://tools.ietf.org/html/draft-johnston-http-category-header>, then you either have to create an alternative representation with its own URL and/or content type, or use a wrapper like SOAP or Atom. The beauty of this, of course, is that you can do a HEAD to get the metadata alone or a GET for both.

Sam
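As a rough illustration of Sam's HEAD-for-metadata idea, here is a deliberately simplified parser for a Link header of the kind the Web Linking draft describes. It only extracts the `rel` parameter, would mis-split a URI containing a comma, and the header value and URIs below are made up for the example.

```python
import re

def parse_link_header(value):
    """Parse a Link header value into {rel: target-URI}.
    Simplified: ignores parameters other than rel, and assumes
    no commas appear inside the <...> URI references."""
    links = {}
    for part in value.split(","):
        m = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

# Header as it might appear in a HEAD response (hypothetical URIs):
header = '<http://example.com/people>; rel="people", <http://example.com/docs>; rel="help"'
print(parse_link_header(header)["people"])
# http://example.com/people
```

A client can then issue a HEAD, read the relationships from this header, and never fetch or parse the document body at all.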
OFX (http://www.ofx.net) is a classic RPC system. The client sends a request body containing all the information about what actions to perform on which resources, and the server responds. The OFX data type, application/ofx, is THE existing XML format for financial data. I would not wish to create an entirely new MIME type that does the same thing. Talk about reinventing wheels.

Is anyone doing any work to make this more RESTful? Examples of the kind of things I mean:

* making the resources (bank accounts, transfers, etc.) addressable with URIs
* using HTTP verbs: GET to get an account, POST to create a new transfer, PUT to modify something, DELETE... we all know the drill
* extending the schema of application/ofx to contain hyperlinks
* replacing OFX's authentication model, which puts the authentication info in the request body, with HTTP Basic or Digest

Of course the question does arise of what benefit could come from this, since OFX has been going fine for years without being RESTful. I think the "business case for REST" thread that we had on here has a lot to say.

-- Sent from my mobile device
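To make the suggestion concrete, a purely hypothetical routing table shows how OFX-style actions could map onto resources and HTTP verbs. None of these paths or descriptions come from the OFX specification; they are placeholders for the kind of mapping described above.

```python
# Hypothetical resource layout for a RESTful re-casting of
# OFX-style operations: resources get URIs, actions become verbs.
ROUTES = {
    ("GET",    "/accounts/{id}"):  "retrieve one bank account",
    ("GET",    "/accounts"):       "list accounts",
    ("POST",   "/transfers"):      "create a new transfer",
    ("PUT",    "/accounts/{id}"):  "modify an account",
    ("DELETE", "/transfers/{id}"): "cancel a pending transfer",
}

def describe(method, path_template):
    """Look up what a (verb, URI-template) pair would mean."""
    return ROUTES.get((method, path_template), "unmapped")

print(describe("POST", "/transfers"))
# create a new transfer
```

The contrast with RPC is that the verb set is fixed (GET/POST/PUT/DELETE) and the variation lives entirely in the resources, rather than in an ever-growing set of operation names inside the request body.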
Reviewing all messages, I wonder... Are the 'nice looking URIs' one of the main reasons why people don't "get" REST?

I believe that if instead of having nice URIs we had ugly, obfuscated ones, the clients would have to "spider" the resources for the links (the URIs would be ugly enough to discourage the client from creating them), effectively leaving HATEOAS as the only way of using the API.

I'm NOT in favour of ugly URIs, though, but it was something that just came to my mind and I wanted to share it.
I think it would help, but for most entities you don't mind that the URL is friendly, so people will bookmark it, effectively creating a deep link.

An architectural smell that can be identified during a code review would be deep linking, or any sort of URL construction instead of a "query" and follow pattern.

On Sat, Sep 26, 2009 at 10:29 AM, pablo.fernandez@... <fernandezpablo85@...> wrote:
> Reviewing all messages, I wonder...
>
> Are the 'nice looking URIs' one of the main reasons why people don't "get" REST?
>
> I believe that if instead of having nice URIs, we had ugly obfuscated ones, the clients would have to "spider" the resources for the links (The URIs would be ugly enough to discourage the client from creating them), effectively leaving HATEOAS as the only way of using the API.
>
> I'm NOT in favour of ugly URIs though but it was something that just came to my mind and wanted to share it.
On Sep 26, 2009, at 7:29 PM, pablo.fernandez@... wrote:
> Reviewing all messages, I wonder...
>
> Are the 'nice looking URIs' one of the main reasons why people don't "get" REST?
>
> I believe that if instead of having nice URIs, we had ugly obfuscated ones, the clients would have to "spider" the resources for the links (The URIs would be ugly enough to discourage the client from creating them), effectively leaving HATEOAS as the only way of using the API.
>
> I'm NOT in favour of ugly URIs though but it was something that just came to my mind and wanted to share it.

I sometimes use this technique to force myself not to rely on URI syntax. I think it is amazing how much more evident the need for discovery becomes once you find yourself looking at /jhsysge882/duuud instead of /customers/76. It is just too tempting to think 'Oh yeah - see, that's a customer'.

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Noah Campbell wrote:
> I think it would help, but for most entities you don't mind that the URL
> is friendly so people will bookmark it, effectively creating a deep link.
>
> An architectural smell that can be identified during a code review would
> be deep linking, or any sort of URL construction instead of a "query"
> and follow pattern.

Apart from "deep-link" being a legal rather than technical term (and IIRC, rejected by the courts of many jurisdictions), why do you object to this?

If someone has been told that the URI for a resource is X, why shouldn't they store X and reuse it later?
I think we need to split the "cool URI" discussion in two: human hackability, which can be great for developer or user discoverability, and identifier persistence.

I've been working on a module for OpenRasta that replaces all generated URIs with GUIDs on development-time applications, to enforce that clients developed against a development environment don't try to be smart about how URIs are constructed. I think there's value in this, as it switches the REST antipattern of URI-focus to a resource focus.

S

> To: rest-discuss@yahoogroups.com
> From: jon@...
> Date: Sat, 26 Sep 2009 21:05:37 +0100
> Subject: Re: [rest-discuss] Re: Newbie REST Question
>
> Apart from "deep-link" being a legal rather than technical term (and
> IIRC, rejected by the courts of many jurisdictions), why do you object
> to this.
>
> If something has been told that the URI for a resource is X, why
> shouldn't they store X and reuse it later?
A new W3C Note of note: http://www.w3.org/TR/gov-data/

I agree with Elliotte, this Note applies to far more problem areas than just open eGov. I've been working on an interesting problem of late, that I'm not at liberty to share with the group. Let's just say that it's as far away from government data as you can possibly imagine, yet this W3C Note totally applies. (With a wink and a nod to some lurkers.)

I particularly like the term "self-documenting" when referring to the hypermedia itself; "self-describing" in REST refers to methods, response codes and headers in whatever protocol is used. Lots of folks get tripped up, when learning REST, into thinking that the entity is meant to be self-describing. If you've properly applied the HEAS constraint, your API isn't self-descriptive, it's self-documented.

My only criticism of the Note is that the first step concerns itself with URI design. I prefer to start a REST project by identifying my resources as "types" and showing their relationships in a bubble chart; only then do I proceed to designing the URI allocation scheme. Starting with URI design as the Note recommends often leads to mis-nested, or missing, data hierarchies. This is at odds with how REST's "identification of resources" constraint should be applied, such that "the forces that influence system behavior" (like URI pattern) "flow naturally, in harmony with the system". Sure, you can still hyperlink your way around this, but the added complexity of such an approach falls short of the desired properties a REST architecture is meant to evoke.

So make a bubble chart -- you can add new resources, sub-resources, and relationships as you discover them, by adding new lines or bubbles. I've seen UML put to this use, too.

I've said before that there are no shortcuts to RESTful architecture. Well, there is one... My criticism of REST-* remains unchanged.
A disciplined approach to networked-software design is laid out by Roy's thesis: make yourself a tree diagram, applying and removing constraints as you go, to achieve the desired system properties. You may just discover that your problem is best solved by a C2 architecture -- which you'll never know if you start by choosing an architectural style first and then trying to shoehorn your solution into it.

The shortcut is to prove that your solution resembles a distributed hypermedia application. The precondition for applying REST's Uniform Interface and Layered System constraints is referred to as the "client-cache-stateless-server" constraint. Any system built in accordance with the advice given in this W3C Note will meet this precondition, proving that it is indeed a distributed hypermedia application. It says so right there in the Note -- such a system can be used as a RESTful API. What it doesn't say is, "if and only if you've applied the constraints making up REST's Uniform Interface".

So that's my REST shortcut: develop a distributed hypermedia application in accordance with the W3C POGD Note. *Then* ask how to "make it RESTful" by adding more constraints to the existing set of client-cache-stateless-server constraints you've already shown to have applied by following the best-practices advice in the Note.

(The Note also makes no mention of caching; however, if it's followed there's really no inherently-uncacheable data present. Just turn on the httpd's caching, if it isn't already. So I'm taking it as a given.)

Apologies to Dr. House... erm... Fielding if I've oversimplified... ;-)

-Eric
I think for entities it's fine, especially if you want to leverage caching middleware. However, these entities may have links that need to be followed and not bookmarked.

On Sat, Sep 26, 2009 at 1:05 PM, Jon Hanna <jon@...> wrote:
> Apart from "deep-link" being a legal rather than technical term (and
> IIRC, rejected by the courts of many jurisdictions), why do you object
> to this.
>
> If something has been told that the URI for a resource is X, why
> shouldn't they store X and reuse it later?
On Sep 25, 2009, at 4:52 AM, Sam Johnston wrote: > Apologies for the top post but Yahoo's message mangling is just too > annoying... > > There is a way to turn this behavior off for yourself. Go to http://tech.groups.yahoo.com/group/rest-discuss/join (after login), and then choose "Traditional" under "Step 3: Message Preference" at the bottom of the page. HTH. Subbu
> This provides a set of Link relationships. A GET /example.com may
> return a Cache-Control header to state how long the representation is
> valid for, which in turn could mean how long the Link relationships
> are valid for.

Though technically valid to say so, it would be difficult to write client code like this. To support such a behavior, clients will have to behave like caches, and that's a tall order.

Subbu
> Though technically valid to say so, it would be difficult to write
> client code like this. To support such a behavior, clients will have
> to behave like caches, and that's a tall order.

I'm unsure how various platforms work, but on the MS side, the majority of frameworks I see (wininet or the .NET HttpWebRequest) do implement local caching as part of the API.

Furthermore, I believe that the way XHR works goes through the local cache too.

I'm really interested to see, beyond those two specific scenarios, how many clients out there do not implement caching as part of the client API. What has been your experience?

S
Other than the two mentioned below, I am not aware of any. In the absence of such support in the framework or runtime, it is safe to keep client code cache agnostic (with the exception of conditional requests).

Subbu

On Sep 26, 2009, at 4:32 PM, Sebastien Lambla wrote:
> I'm unsure how various platforms work, but on the MS side, the
> majority of frameworks I see (wininet or the .NET HttpWebRequest) do
> implement local caching as part of the API.
>
> Furthermore, I believe that the way XHR works goes through the local
> cache too.
>
> I'm really interested to see beyond those two specific scenarios,
> how many clients out there do not implement caching as part of the
> client API. What has been your experience?
>
> S
I think the guidance to produce content that is both human readable and machine readable is very valuable. There are too many cases where people are building a human web interface and a separate API where it is completely unnecessary. Darrel
I'll have to respectfully disagree. Cache-Control semantics are far from rocket science. In fact, they are trivial compared to the caching semantics enterprise developers are used to dealing with (specifically with databases). Also, if a client did not take advantage of cache semantics it would be missing out on a key feature of the Web.

Subbu Allamaraju wrote:
> Other than the two mentioned below, I am not aware of any. In the
> absence of such support in the framework or runtime, it is safe to keep
> client code cache agnostic (with the exception of conditional requests).
>
> Subbu

--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
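Bill's claim that Cache-Control semantics are simple can be illustrated with a toy client-side cache that honours only max-age. This is a sketch, not a conforming cache: a real one must also handle no-store, no-cache, Vary, validators and revalidation. The `fetch` callback signature is an assumption of the example.

```python
import re
import time

class MaxAgeCache:
    """Toy client-side cache honouring Cache-Control: max-age only."""

    def __init__(self, clock=time.time):
        self._clock = clock
        self._store = {}  # uri -> (expires_at, body)

    def get(self, uri, fetch):
        """Return a cached body while fresh; otherwise call
        fetch(uri), which must return (headers_dict, body)."""
        entry = self._store.get(uri)
        if entry and self._clock() < entry[0]:
            return entry[1]
        headers, body = fetch(uri)
        m = re.search(r"max-age=(\d+)", headers.get("Cache-Control", ""))
        if m:
            self._store[uri] = (self._clock() + int(m.group(1)), body)
        return body

calls = []
def fetch(uri):
    calls.append(uri)
    return {"Cache-Control": "max-age=60"}, "<links>...</links>"

cache = MaxAgeCache()
cache.get("http://example.com/", fetch)
cache.get("http://example.com/", fetch)  # served from cache, no fetch
print(len(calls))
# 1
```

This is exactly the behaviour a link-consuming client gets "for free" from a local proxy cache, which is Subbu's counterpoint: the logic is simple, but it doesn't have to live in your client code at all.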
Why build such functionality into clients when a proxy cache in the client env can do it with minimal cost?

On Sep 26, 2009, at 5:01 PM, Bill Burke <bburke@...> wrote:
> I'll have to respectfully disagree. Cache-Control semantics are far
> from rocket science. In fact, they are trivial compared to the
> caching semantics enterprise developers are used to dealing with
> (specifically with databases). Also, if a client did not take
> advantage of cache semantics it would be missing out on a key
> feature of the Web.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
It seems a much more complicated solution, for several reasons... some points I can remember:

1) The email account used by the application is write-only; my REST API does not receive emails. Emails are used only to notify the users about some event in my business scenario.

2) Adding a dependency on an SMTP account to my application also adds several corner cases to handle (over-engineering, eventually).

3) I need to parse the response emails to find the confirmation token, and there is no way to guarantee the user or his email client will not change the message format. That's the ugly part, because if something goes wrong here I need to create a second email, and then I need to establish a robust framework just to work around issues here :)

So, to keep it simple: any chance to use a URL confirmation and still remain HATEOAS? Registration is such a common scenario that I believe someone already designed this before...

* perhaps not a surprise that all popular REST services like Twitter have a secondary web app to handle registration :)

On Sat, Sep 26, 2009 at 4:19 PM, Markus KARG <markus.karg@...> wrote:
> Why not replacing clicking the link by just answering the email (reply-to)?
>
>> -----Original Message-----
>> From: Felipe Gaúcho [mailto:fgaucho@...]
>> Sent: Samstag, 26. September 2009 15:48
>> To: rest-discuss@yahoogroups.com; users@...
>> Subject: [Jersey] confirmation URL ? GET ?
>>
>> Hi there,
>>
>> my first email here...
>>
>> question: I have a registration flow that includes the following steps:
>>
>> 1) client send a POST with the new user's data.
>> 2) the server sends an email to the new user containing a "Confirmation URL"
>> 3) the user clicks on the URL, confirming his registration request...
>>
>> So far so good, the basics of a registration use case.
>>
>> Now the question:
>>
>> The URL in the email should be a GET, right ? (otherwise I am not sure
>> how it can work from an email.. )
>>
>> but, if I use GET to transform the state of a resource in the server I
>> am abusing the rest protocol - (GET not-idempotent since the status of
>> the user will change from NEW to ACTIVE)
>>
>> So, what is the alternative ?

--
Looking for a client application for this service:
http://fgaucho.dyndns.org:8080/arena-http/wadl
Hi there,

my first email here...

question: I have a registration flow that includes the following steps:

1) the client sends a POST with the new user's data.
2) the server sends an email to the new user containing a "Confirmation URL"
3) the user clicks on the URL, confirming his registration request...

So far so good, the basics of a registration use case.

Now the question:

The URL in the email should be a GET, right? (otherwise I am not sure how it can work from an email..)

but, if I use GET to transform the state of a resource on the server I am abusing HTTP (the GET is no longer safe, since the status of the user will change from NEW to ACTIVE)

So, what is the alternative?
Felipe,

On Sep 26, 2009, at 3:48 PM, Felipe Gaúcho wrote:
> Hi there,
>
> my first email here...

welcome :-)

> question: I have a registration flow that includes the following steps:
>
> 1) client send a POST with the new user's data.
> 2) the server sends an email to the new user containing a "Confirmation URL"
> 3) the user clicks on the URL, confirming his registration request...
>
> So far so good, the basics of a registration use case.
>
> Now the question:
>
> The URL in the email should be a GET, right ? (otherwise I am not sure
> how it can work from an email.. )
>
> but, if I use GET to transform the state of a resource in the server I
> am abusing the rest protocol - (GET not-idempotent since the status of
> the user will change from NEW to ACTIVE)

Exactly.

> So, what is the alternative ?

Send a URL to an HTML page that includes a POST form with a button the user clicks on to confirm.

(or send an HTML email with a form (not sure if the email client supports the form submission though)).

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@acm.org
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
>> So, what is the alternative ?
>
> Send a URL to an HTML page that includes a POST form with a button the
> user clicks on to confirm.
>
> (or send an HTML email with a form (not sure if the email client
> supports the form submission though)).

It is a matter of a tradeoff between usability and safety. Confirming by just clicking on the link is a well-established usage pattern on the web. Most users will miss the flow if there is another HTML form or some other user interaction on that page.

When implementing this, just make sure not to fail the request if the user clicks on the link again (i.e. implement it as idempotent).

Subbu
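Subbu's advice -- make the confirmation idempotent so a second click doesn't fail -- might look like the handler below. The token store, status values, and return shape are all hypothetical placeholders for whatever the real application uses.

```python
# Hypothetical in-memory store of pending registrations, keyed by
# the token embedded in the confirmation URL.
users = {"tok123": {"status": "NEW"}}

def confirm(token):
    """Activate the registration for `token`. Safe to call
    repeatedly: NEW -> ACTIVE, and ACTIVE stays ACTIVE."""
    user = users.get(token)
    if user is None:
        return 404, "unknown token"
    user["status"] = "ACTIVE"
    return 200, "registration confirmed"

print(confirm("tok123"))
print(confirm("tok123"))  # second click: same outcome, no error
# (200, 'registration confirmed')
# (200, 'registration confirmed')
```

The point is that "confirm" is expressed as setting a target state rather than as a transition that can only happen once, which is what makes repeating the request harmless.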
Actually something like a separate form step is needed to help prevent xsrf anyway.

On Sunday, September 27, 2009, Subbu Allamaraju <subbu@...> wrote:
> It is a matter of a tradeoff between usability and safety. Confirming
> by just clicking on the link is a well-established usage pattern on
> the web. Most users will miss the flow if there is another HTML form
> or some other user interaction on that page.
>
> When implementing this, just make sure to not fail the request if the
> user clicks on the link again (i.e. implement as idempotent).
>
> Subbu
+1 for placing a cache in the client environment. I'm always impressed by a developer's eagerness to build a cache layer in code; granted, they have an RPC layer that doesn't allow for intermediate proxies.

I'm assuming "client" means a dependent service that's managed by the same IT group that manages the originating service.

On Sat, Sep 26, 2009 at 6:14 PM, Subbu Allamaraju <subbu@...> wrote:
> Why build such functionality into clients when a proxy cache in the
> client env can do it with minimal cost?
OK, so if I just do a GET to a page, and this page does a POST to my REST server, will it be HATEOAS compliant?

I am ready to do that, but I see this just as a proxy... the GET done to the first server (the web page) has a side effect anyway :)

but OK, if anyone else has constraints against that.. I will do that :)

On Sun, Sep 27, 2009 at 5:09 PM, John Panzer <jpanzer@acm.org> wrote:
> Actually something like a separate form step is needed to help prevent
> xsrf anyway.

--
Looking for a client application for this service:
http://fgaucho.dyndns.org:8080/arena-http/wadl
Humm... yes, I thought about a code in the email that should be used to validate the registration on a web site... so the email contains a GET URL to the confirmation form; the page containing the form has a POST button to validate the registration. It adds even more security to the whole process, while also adding more usability complications for the users. This trade-off is complicated because it seems I am penalizing the users to preserve HATEOAS :) On Sun, Sep 27, 2009 at 5:09 PM, John Panzer <jpanzer@...> wrote: > Actually something like a separate form step is needed to help prevent > xsrf anyway. > > On Sunday, September 27, 2009, Subbu Allamaraju <subbu@subbu.org> wrote: >>>> So, what is the alternative ? >>> >>> Send a URL to an HTML page that includes a POST form with a button the >>> user clicks on to confirm. >>> >>> (or send an HTML email with a form (not sure if the email client >>> supports the form submission though)). >> >> It is a matter of a tradeoff between usability and safety. Confirming >> by just clicking on the link is a well-established usage pattern on >> the web. Most users will miss the flow if there is another HTML form >> or some other user interaction on that page. >> >> When implementing this, just make sure to not fail the request if the >> user clicks on the link again (i.e. implement as idempotent). >> >> Subbu >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > -- Looking for a client application for this service: http://fgaucho.dyndns.org:8080/arena-http/wadl
Not when the GET is not tied to user authentication and some other unsafe action. But you are right that executing actions by GETtable links may lead to CSRF. On Sep 27, 2009, at 8:09 AM, John Panzer wrote: > Actually something like a separate form step is needed to help prevent > xsrf anyway. > > On Sunday, September 27, 2009, Subbu Allamaraju <subbu@...> > wrote: >>>> So, what is the alternative ? >>> >>> Send a URL to an HTML page that includes a POST form with a button >>> the >>> user clicks on to confirm. >>> >>> (or send an HTML email with a form (not sure if the email client >>> supports the form submission though)). >> >> It is a matter of a tradeoff between usability and safety. Confirming >> by just clicking on the link is a well-established usage pattern on >> the web. Most users will miss the flow if there is another HTML form >> or some other user interaction on that page. >> >> When implementing this, just make sure to not fail the request if the >> user clicks on the link again (i.e. implement as idempotent). >> >> Subbu >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >>
Negligible risk in this, though - if it was a concern then you could implement the initial URI as a landing page with javascript that automatically makes the relevant POST (or PUT?) request. Subbu Allamaraju wrote: > Not when the GET is not tied to user authentication and some other > unsafe action. But you are right that executing actions by GETtable > links may lead to CSRF. > > On Sep 27, 2009, at 8:09 AM, John Panzer wrote: > >> Actually something like a separate form step is needed to help prevent >> xsrf anyway. >> >> On Sunday, September 27, 2009, Subbu Allamaraju <subbu@...> >> wrote: >> >>> It is a matter of a tradeoff between usability and safety. Confirming >>> by just clicking on the link is a well-established usage pattern on >>> the web. Most users will miss the flow if there is another HTML form >>> or some other user interaction on that page. >>> >>> When implementing this, just make sure to not fail the request if the >>> user clicks on the link again (i.e. implement as idempotent). >>> >>> Subbu >>> >>>
Hello Felipe. I want to respond to several lines there, including your question, so please bear with me. 1. "Abusing the rest protocol". Hummm. OK, REST is not a protocol, it is an architectural style. If you mean HTTP, good, but HTTP is still not "the" REST protocol; it is the protocol for hypermedia transfer and the actual de facto standard when creating REST things. 2. REST is a style for the web. By creating an application on the web, you are not creating a "REST architected" application automatically. You should decide if the constraints and benefits of using REST will fit your needs. 3. Idempotency. Not sure of the word, but an operation is idempotent when the final state is the same after executing the operation several times. If you issue the GET the first time and the user ends up active, there is no problem if the second time the user is still active. See? 4. Now, GET should not be used to change state, granted. Actually, what you are doing is reading the URL for a key and validating that the key is related to the user at hand, right? Then what you are abusing is the actual GET semantics. Adding POSTs or other code into the email will make it insecure, and probably will not work with antivirus and spam filters. So, a proposed way is to add that key as plain as it is, with no URL in the email, and direct the user to a page where the ID is to be entered manually. You can even set up a procedure where the user only enters the email, and upon reception of the key, he can enter and add all other info. That way you avoid capturing all his info up front, and in case the user does not confirm, his info will not stay there. Hope this helps! William Martinez Pomares. --- In rest-discuss@yahoogroups.com, Felipe Gaúcho <fgaucho@...> wrote: > > Hi there, > > my first email here... > > question: I have a registration flow that includes the following steps: > > 1) client send a POST with the new user's data. 
> 2) the server sends an email to the new user containing a "Confirmation URL" > 3) the user clicks on the URL, confirming his registration request... > > So far so good, the basics of a registration use case. > > Now the question: > > The URL in the email should be a GET, right ? (otherwise I am not sure > how it can work from an email.. ) > > but, if I use GET to transform the state of a resource in the server I > am abusing the rest protocol - (GET not-idempotent since the status of > the user will change from NEW to ACTIVE) > > So, what is the alternative ? >
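The alternative the thread converges on can be sketched roughly as follows. This is a hypothetical illustration, not anyone's actual implementation: the emailed URL is a safe GET that only renders a confirmation form, the NEW→ACTIVE transition happens on POST, and the POST is idempotent so a second click does not fail (as Subbu advises).

```python
# Hypothetical sketch of the confirmation flow: GET is safe, POST is
# the idempotent state change. The token/user store is a plain dict
# standing in for a database.

users = {"tok123": {"status": "NEW"}}

def get_confirmation_page(token):
    """Safe GET: renders the form, changes no state."""
    if token not in users:
        return 404, "unknown token"
    form = ('<form method="POST" action="/confirm/%s">'
            '<button>Confirm registration</button></form>' % token)
    return 200, form

def post_confirmation(token):
    """Idempotent POST: re-activating an ACTIVE user still succeeds."""
    user = users.get(token)
    if user is None:
        return 404, "unknown token"
    user["status"] = "ACTIVE"
    return 200, "registration confirmed"

status, body = get_confirmation_page("tok123")
assert users["tok123"]["status"] == "NEW"   # the GET had no side effect
post_confirmation("tok123")
post_confirmation("tok123")                 # second click: still succeeds
```

XSRF on the confirmation POST (John Panzer's point) would still need a token tied to the form, which is elided here.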
On Sat, Sep 26, 2009 at 4:00 PM, Noah Campbell <noahcampbell@...> wrote: > > I think for entities it's fine, especially if you want to leverage caching > middleware. However, these entities may have links that need to > be followed and not bookmarked. The only difference between following a link and a "bookmark" is the lifespan of the link itself. Since we're dealing with a stateless protocol, a link is a link is a link, with appropriate payloads. Is there some mechanism to tell whether a link is "valid" or not other than following it and auditing the response? Whether that link is 10ms or 10 years old, the client behavior is the same: a bad link is a bad link is a bad link. I don't know how to convey a retry mechanism, or policy, or whatever. So it seems to me it boils down to this: the "out of band" behavior promised to the resource consumers has more bearing on the use of an "old link" than anything else. Regards, Will Hartung (willh@mirthcorp.com)
I was thinking about the finality of links after I posted it. There is nothing you can do to prevent a link from being bookmarked, and I'd suggest that you don't worry about it. All web servers have default semantics built in: when a link that no longer exists is requested, they return a 404. If a service is much more robust, accessing a dead link would redirect to something meaningful, maybe the "top" of the app or the "top" of the entity. This is optional and requires some more thought in how you implement your service, but may be worth its weight in gold when you need to upgrade. It is worth exploring more, in my opinion. -Noah On Mon, Sep 28, 2009 at 6:33 PM, Will Hartung <willh@...> wrote: > On Sat, Sep 26, 2009 at 4:00 PM, Noah Campbell <noahcampbell@...> > wrote: > > > > I think for entities it's fine, especially if you want to leverage > caching > > middleware. However, these entities may have links that need to > > be followed and not bookmarked. > > The only difference between following a link and a "bookmark" is the > lifespan of the link itself. Since we're dealing with a stateless > protocol, a link is a link is a link, with appropriate payloads. > > Is there some mechanism to tell whether a link is "valid" or not other > than following it and auditing the response? Whether that link is 10ms > or 10years old, the client behavior is the same: a bad link is a bad > link is a bad link. I don't know how to convey a retry mechanism, or > policy, or whatever. > > So it seems to me it boils down to what "out of band" behavior is > promised to the resource consumers has more applicability on the use > of an "old link" than anything else. > > Regards, > > Will Hartung > (willh@...) >
There is some truth to "nothing you can do about it", but I think it *is* fair to allow the server to indicate some estimate of how long the links included in the representation should be considered valid. Since we can already use an "Expires" header to define how long it is OK to cache this response (if it was a GET request), it seems to me that a client should also be able to assume that the links included in the representation will be valid for at least that long (either directly, or because the server will redirect you to an updated URI if needed). They might be valid for a lot longer than that, of course, but this seems like a good design to communicate the minimum. Thoughts? Craig McClanahan On Mon, Sep 28, 2009 at 8:29 PM, Noah Campbell <noahcampbell@...>wrote: > > > I was thinking about the finality of links after I posted it. There is > nothing you can do to prevent a link from being bookmarked and I'd suggest > that you don't worry about it. All webservers have the defaults semantics > built in when a link is requested that is no longer exists it returns a 404. > > > If a service is much more robust, accessing a dead link would redirect it > to something meaningful, maybe a the "top" of the app or the "top" of the > entity. This is optional and requires some more thought in how you > implement your service but may be worth its weight in gold when you need to > upgrade. > > It is worth exploring more, in my opinion. > > -Noah > > On Mon, Sep 28, 2009 at 6:33 PM, Will Hartung <willh@...> wrote: > >> On Sat, Sep 26, 2009 at 4:00 PM, Noah Campbell <noahcampbell@...> >> wrote: >> > >> > I think for entities it's fine, especially if you want to leverage >> caching >> > middleware. However, these entities may have links that need to >> > be followed and not bookmarked. >> >> The only difference between following a link and a "bookmark" is the >> lifespan of the link itself. 
Since we're dealing with a stateless >> protocol, a link is a link is a link, with appropriate payloads. >> >> Is there some mechanism to tell whether a link is "valid" or not other >> than following it and auditing the response? Whether that link is 10ms >> or 10years old, the client behavior is the same: a bad link is a bad >> link is a bad link. I don't know how to convey a retry mechanism, or >> policy, or whatever. >> >> So it seems to me it boils down to what "out of band" behavior is >> promised to the resource consumers has more applicability on the use >> of an "old link" than anything else. >> >> Regards, >> >> Will Hartung >> (willh@...) >> > > >
For whatever it's worth, quoting here from RESTful Web Services by Sam Ruby and Leonard Richardson ... "A GET or HEAD request is a request to read some data, not a request to change any server state. ... This is not to say that GET and HEAD requests can't have side effects. Some resources are hit counters that increment every time a client GETs them. Most web servers log every incoming request to a log file. These are side effects: the server state, and even the resource state, is changing in response to a GET request. But the client didn't ask for the side effects, and it's not responsible for them. A client should never make a GET or HEAD request just for the side effects, and the side effects should never be so big that the client might wish it hadn't made the request." Personally, I don't think GETting a confirmation page is too much of an abuse of GET. And email confirmation is already mixing HTTP semantics with SMTP. But that's just my opinion. -L --- In rest-discuss@yahoogroups.com, Felipe Gaúcho <fgaucho@...> wrote: > > Hi there, > > my first email here... > > question: I have a registration flow that includes the following steps: > > 1) client send a POST with the new user's data. > 2) the server sends an email to the new user containing a "Confirmation URL" > 3) the user clicks on the URL, confirming his registration request... > > So far so good, the basics of a registration use case. > > Now the question: > > The URL in the email should be a GET, right ? (otherwise I am not sure > how it can work from an email.. ) > > but, if I use GET to transform the state of a resource in the server I > am abusing the rest protocol - (GET not-idempotent since the status of > the user will change from NEW to ACTIVE) > > So, what is the alternative ? >
Hullo Craig, I am not sure this will be needed. The Expires header seems to have a well defined meaning, which is that the client can expect that the resource will not have changed until a certain amount of time has passed. This focus on the resource seems to be a separate thread of thought from the change or deprecation of the link (URI) itself. I believe that the situation of a change or deprecation of the URI is handled nicely via several different response codes as defined by the 404 NOT FOUND, 410 GONE, 301 MOVED PERMANENTLY, 303 SEE OTHER, or 307 TEMPORARY REDIRECT directives a server can return. The developer of the service can choose the appropriate one to return. I don't think there is anything else to be said about that, but I could be wrong. :) Regards, Bediako On Tue, Sep 29, 2009 at 2:27 AM, Craig McClanahan <craigmcc@...>wrote: > > > There is some truth to "nothing you can do about it", but I think it *is* > fair to allow the server to indicate some estimate of how long the links > included in the representation should be considered valid. Since we can > already use an "Expires" header to define how long it is OK to cache this > response (if it was a GET request), it seems to me that a client should also > be able to assume that the links included in the representation will be > valid for at least that long (either directly, or because the server will > redirect you to an updated URI if needed). > > They might be valid for a lot longer than that, of course, but this seems > like a good design to communicate the minimum. > > Thoughts? > > Craig McClanahan > > > On Mon, Sep 28, 2009 at 8:29 PM, Noah Campbell <noahcampbell@...>wrote: > >> >> >> I was thinking about the finality of links after I posted it. There is >> nothing you can do to prevent a link from being bookmarked and I'd suggest >> that you don't worry about it. All webservers have the defaults semantics >> built in when a link is requested that is no longer exists it returns a 404. 
>> >> >> If a service is much more robust, accessing a dead link would redirect it >> to something meaningful, maybe a the "top" of the app or the "top" of the >> entity. This is optional and requires some more thought in how you >> implement your service but may be worth its weight in gold when you need to >> upgrade. >> >> It is worth exploring more, in my opinion. >> >> -Noah >> >> On Mon, Sep 28, 2009 at 6:33 PM, Will Hartung <willh@...>wrote: >> >>> On Sat, Sep 26, 2009 at 4:00 PM, Noah Campbell <noahcampbell@...> >>> wrote: >>> > >>> > I think for entities it's fine, especially if you want to leverage >>> caching >>> > middleware. However, these entities may have links that need to >>> > be followed and not bookmarked. >>> >>> The only difference between following a link and a "bookmark" is the >>> lifespan of the link itself. Since we're dealing with a stateless >>> protocol, a link is a link is a link, with appropriate payloads. >>> >>> Is there some mechanism to tell whether a link is "valid" or not other >>> than following it and auditing the response? Whether that link is 10ms >>> or 10years old, the client behavior is the same: a bad link is a bad >>> link is a bad link. I don't know how to convey a retry mechanism, or >>> policy, or whatever. >>> >>> So it seems to me it boils down to what "out of band" behavior is >>> promised to the resource consumers has more applicability on the use >>> of an "old link" than anything else. >>> >>> Regards, >>> >>> Will Hartung >>> (willh@...) >>> >> >> > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
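Bediako's point that URI change and deprecation is already covered by status codes can be sketched concretely. The lookup tables below are hypothetical; the idea is just that the service author picks the appropriate code per stale URI rather than inventing a link-expiry mechanism.

```python
# Hypothetical sketch: mapping stale URIs onto the status codes Bediako
# lists (301 Moved Permanently, 410 Gone, 404 Not Found).

moved = {"/locks/344": "/v2/locks/344"}  # permanently relocated resources
gone = {"/locks/7"}                      # deliberately retired resources

def resolve(path):
    """Return (status, headers) for a requested path."""
    if path in moved:
        return 301, {"Location": moved[path]}
    if path in gone:
        return 410, {}
    if not path.startswith("/locks/"):
        return 404, {}
    return 200, {}

# An "old link" a client bookmarked long ago still works, via redirect:
print(resolve("/locks/344"))
```

A client that follows the 301 (and updates its stored link to the new Location) gets exactly the "robust link handling" behavior discussed above, with no extra protocol machinery.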
groovepapa wrote: > For whatever it's worth, quoting here from RESTful Web Services by Sam Ruby and Leonard Richardson ... > > "A GET or HEAD request is a request to read some data, not a request to change any server state. ... > This is not to say that GET and HEAD requests can't have side effects. Some resources are hit counters that increment every time a client GETs them. Most web servers log every incoming request to a log file. These are side effects: the server state, and even the resource state, is changing in response to a GET request. But the client didn't ask for the side effects, and it's not responsible for them. A client should never make a GET or HEAD request just for the side effects, and the side effects should never be so big that the client might wish it hadn't made the request." > I don't agree with permitting a GET to affect resource state - it is ok for a GET request to trigger an intermediary mechanism to update some other related resource (e.g. POST to a related hit counter resource), but the state of the target resource should not be altered as a result of a GET request. - Mike
On Tue, Sep 29, 2009 at 7:10 AM, Bediako George <bediakogeorge@...> wrote: > > Hullo Craig, > > I am not sure this will be needed. The Expires header seems to > have a well defined meaning, which is that the client can expect > that the resource will not have changed until a certain amount > of time has passed. This focus on the resource seems to be a > separate thread of thought from the change or deprecation of the link (URI) itself. > > I believe that the situation of a change or deprecation of the URI > is handled nicely via several different response codes as defined > by the 404 NOT FOUND, 410 GONE, 301 MOVED PERMANENTLY, > 303 SEE OTHER, or 307 TEMPORARY REDIRECT directives > a server can return. The developer of the service can choose the appropriate one to return. Yea, I agree with this. Expires is really for the validity of the resource, the links are more generated constructs related to the resource, but not the resource itself. They're, in fact, potentially completely dynamic. I think there are valid use cases where each time you get a resource, the links could be different. It's up to the application to decide how far it wants to take robust link handling. Regards, Will Hartung (willh@...)
And I think all of this is an example of where REST isn't a universal architecture for all problems. The idiom in place in this case, of sending a confirmation email with a link for the user to click on, has been shown to be very usable in practice. Most of the solutions I see proposed break the idiom and complicate the user experience. In many security settings, two common properties for authentication are "something you have" and "something you know". In this case the "something you have" is the email account, whereas the "something you know" is the password you entered, or will soon be entering. The only way to prove you "have" the email account is through this "out of band" email. I would flag this as "not REST" and move on rather than trying to "square peg/round hole" it. Regards, Will Hartung (willh@...)
This is another take on the 'resource type' issue:
On Aug 28, 2009, at 3:10 AM, Jan Algermissen wrote:
> Stefan,
>
> a bit late, but here is another suggestion to approach this:
>
> Resources have semantics by representing certain 'things' of the
> domain space (e.g. a lock). These semantics include what happens when
> you interact through HTTP with them (e.g. "PUT /lock" creates the lock
> or "DELETE /lock" deletes the lock). This is the result of turning
> specialized APIs into a uniform API.
>
> In this sense the resources have a type and clients use this
> information to achieve the goals that constitute a given RESTful API.
>
> The important thing in my opinion is that the resources do not have
> these types out of themselves but that what matters is by what link
> the client discovered them. E.g. given <link rel="lock" href="/locks/
> 344"/> a client could know that it can use /locks/344 to establish a
> lock on the link source resource (by way of the definition of the link
> relation). For the moment the client will think of /locks/344 as being
> 'a lock'.
I am thinking about using "hypermedia context"[1] for the notion of the
acquired linking (and document appearance) knowledge about a resource.
For example, when a client comes across a <collection> element in an
Atom Pub service document and if that <collection> includes a category
foo then the resource the <collection> element refers to is known by
the client to be in that certain 'hypermedia context'.
A specification could name the described context as 'the foo
collection'.
'Hypermedia context' emphasizes that the 'classification' is all about
how the resource appears in the client's built-up application state.
I am not quite there yet, but I think there are interesting ways to
formalize 'hypermedia context' as a set of individual 'link tests' that
evaluate to true. E.g. if have-link(x, 'edit-media', y) then y "can be
used to modify a Media Resource associated with that Entry"[2]
It also touches on the idea of duck typing[3].
Jan
[1] http://algermissen.blogspot.com/2009/09/hypermedia-context.html
[2] http://tools.ietf.org/html/rfc5023#section-11.2
[3] http://www.propylon.com/news/ctoarticles/040224_duckmodeling.html
>
> Likewise, API specifications will use type-like language to describe
> how the API goals are achieved. Atom Pub for example writes:
>
> "4.2 Documents and Resource Classification
> A Resource whose IRI is listed in a Collection is called a Member
> Resource.
> [...]
> "
> I do not see how this notion of 'type' could be avoided.
> Jan
>
>
>
>
>
>
> On Sep 2, 2008, at 1:41 AM, Stefan Tilkov wrote:
>
>> What do you call the concept of "classes" or "types" of resources in
>> your RESTful designs? E.g. when you decide to turn each "customer"
>> into its own identifiable resource - http://example.com/customers/
>> 1234
>> - what does http://example.com/customers/{id} describe? Both
>> "resource
>> class" and "resource type" would work, but don't seem really
>> convincing.
>>
>> Stefan
>> --
>> Stefan Tilkov, http://www.innoq.com/blog/st/
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
>>
>>
>>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
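Jan's "set of individual 'link tests' that evaluate to true" lends itself to a small sketch. Everything here is hypothetical (the representation structure, the function names); it only illustrates the shape of the idea: a resource's "type" is whatever conjunction of link tests currently holds in the client's application state.

```python
# Hypothetical sketch of a 'link test' predicate in the sense of Jan's
# 'hypermedia context': the client's knowledge of a resource is derived
# from the links through which it was discovered.

def have_link(doc, rel, href):
    """True if `doc`'s representation carries a link (rel, href)."""
    return any(l["rel"] == rel and l["href"] == href
               for l in doc.get("links", []))

# A representation the client has just retrieved:
entry = {
    "links": [
        {"rel": "edit-media", "href": "/media/42"},
        {"rel": "lock", "href": "/locks/344"},
    ]
}

# 'Hypermedia context' as a conjunction of link tests that evaluate true:
# per RFC 5023 semantics, /media/42 "can be used to modify a Media
# Resource associated with that Entry"; per the (hypothetical) 'lock'
# relation, /locks/344 can establish a lock on the entry.
context_holds = (have_link(entry, "edit-media", "/media/42")
                 and have_link(entry, "lock", "/locks/344"))
```

Nothing about `/locks/344` itself says it is "a lock"; the classification exists only in the client's state, which is the duck-typing flavor Jan points at.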
It would be kinda sad if REST was solely limited to what a browser could or couldn't do. We're moving into new territory here people! As REST goes beyond the browser as a client, new patterns and ideas are going to crop up. This, IMO, is one of those things that sounds pretty interesting... Craig McClanahan wrote: > > > There is some truth to "nothing you can do about it", but I think it > *is* fair to allow the server to indicate some estimate of how long the > links included in the representation should be considered valid. Since > we can already use an "Expires" header to define how long it is OK to > cache this response (if it was a GET request), it seems to me that a > client should also be able to assume that the links included in the > representation will be valid for at least that long (either directly, or > because the server will redirect you to an updated URI if needed). > > They might be valid for a lot longer than that, of course, but this > seems like a good design to communicate the minimum. > > Thoughts? > > Craig McClanahan > > On Mon, Sep 28, 2009 at 8:29 PM, Noah Campbell <noahcampbell@... > <mailto:noahcampbell@...>> wrote: > > > > I was thinking about the finality of links after I posted it. There > is nothing you can do to prevent a link from being bookmarked and > I'd suggest that you don't worry about it. All webservers have the > defaults semantics built in when a link is requested that is no > longer exists it returns a 404. > > > If a service is much more robust, accessing a dead link would > redirect it to something meaningful, maybe a the "top" of the app or > the "top" of the entity. This is optional and requires some more > thought in how you implement your service but may be worth its > weight in gold when you need to upgrade. > > It is worth exploring more, in my opinion. > > -Noah > > > On Mon, Sep 28, 2009 at 6:33 PM, Will Hartung <willh@... 
> <mailto:willh@...>> wrote: > > On Sat, Sep 26, 2009 at 4:00 PM, Noah Campbell > <noahcampbell@... <mailto:noahcampbell@...>> wrote: > > > > I think for entities it's fine, especially if you want to > leverage caching > > middleware. However, these entities may have links that need to > > be followed and not bookmarked. > > The only difference between following a link and a "bookmark" is the > lifespan of the link itself. Since we're dealing with a stateless > protocol, a link is a link is a link, with appropriate payloads. > > Is there some mechanism to tell whether a link is "valid" or not > other > than following it and auditing the response? Whether that link > is 10ms > or 10years old, the client behavior is the same: a bad link is a bad > link is a bad link. I don't know how to convey a retry mechanism, or > policy, or whatever. > > So it seems to me it boils down to what "out of band" behavior is > promised to the resource consumers has more applicability on the use > of an "old link" than anything else. > > Regards, > > Will Hartung > (willh@... <mailto:willh@...>) > > > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill, On Sep 30, 2009, at 4:52 PM, Bill Burke wrote: > It would be kinda sad if REST was solely limited to what a browser > could > or couldn't do. Can you explain what part in the conversation this sentence relates to? > We're moving into new territory here people! Hmm, I think I don't get it - what new territory? (There is really only a single design space: hypermedia specs. So I do not really see the new territory)...? > As REST > goes beyond the browser as a client, Again - hmm. How can REST go beyond the browser? REST is an architectural style that cannot change (if it changes it is not REST anymore but something new[1]) The term 'browser' relates to HTTP, not REST. Which is it you want to move 'beyond the browser'? And - more importantly - why? I have yet to see what is missing (besides hypermedia specs for the evolving machine to machine uses of HTTP) and hence what it is you call 'new patterns and ideas'. > new patterns and ideas are going to > crop up. This, IMO, is one of those things that sounds pretty > interesting... > Jan [1] Not that this is necessarily bad. > Craig McClanahan wrote: >> >> >> There is some truth to "nothing you can do about it", but I think it >> *is* fair to allow the server to indicate some estimate of how long >> the >> links included in the representation should be considered valid. >> Since >> we can already use an "Expires" header to define how long it is OK to >> cache this response (if it was a GET request), it seems to me that a >> client should also be able to assume that the links included in the >> representation will be valid for at least that long (either >> directly, or >> because the server will redirect you to an updated URI if needed). >> >> They might be valid for a lot longer than that, of course, but this >> seems like a good design to communicate the minimum. >> >> Thoughts? >> >> Craig McClanahan >> >> On Mon, Sep 28, 2009 at 8:29 PM, Noah Campbell >> <noahcampbell@... 
>> <mailto:noahcampbell@...>> wrote: >> >> >> >> I was thinking about the finality of links after I posted it. >> There >> is nothing you can do to prevent a link from being bookmarked and >> I'd suggest that you don't worry about it. All webservers have >> the >> defaults semantics built in when a link is requested that is no >> longer exists it returns a 404. >> >> >> If a service is much more robust, accessing a dead link would >> redirect it to something meaningful, maybe a the "top" of the >> app or >> the "top" of the entity. This is optional and requires some more >> thought in how you implement your service but may be worth its >> weight in gold when you need to upgrade. >> >> It is worth exploring more, in my opinion. >> >> -Noah >> >> >> On Mon, Sep 28, 2009 at 6:33 PM, Will Hartung <willh@... >> <mailto:willh@...>> wrote: >> >> On Sat, Sep 26, 2009 at 4:00 PM, Noah Campbell >> <noahcampbell@... <mailto:noahcampbell@...>> >> wrote: >>> >>> I think for entities it's fine, especially if you want to >> leverage caching >>> middleware. However, these entities may have links that need to >>> be followed and not bookmarked. >> >> The only difference between following a link and a >> "bookmark" is the >> lifespan of the link itself. Since we're dealing with a >> stateless >> protocol, a link is a link is a link, with appropriate >> payloads. >> >> Is there some mechanism to tell whether a link is "valid" or >> not >> other >> than following it and auditing the response? Whether that link >> is 10ms >> or 10years old, the client behavior is the same: a bad link >> is a bad >> link is a bad link. I don't know how to convey a retry >> mechanism, or >> policy, or whatever. >> >> So it seems to me it boils down to what "out of band" >> behavior is >> promised to the resource consumers has more applicability on >> the use >> of an "old link" than anything else. >> >> Regards, >> >> Will Hartung >> (willh@... 
<mailto:willh@...>) >> >> >> >> > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Roy says[0], "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types." I have situations where formats such as Atom are fairly well-suited to represent my resources. If I re-use the format and keep the content type "atom+xml", then clients wouldn't know my specific usage of Atom (e.g. semantics behind some of my "rel" attributes). I could, I suppose, reuse the format but declare it under a new content type that implies some semantics. I suppose the concern is that by reusing a media type it takes some 'rel', request history, or other context to give the client enough information to adequately process it. A decent example of the dilemma can be seen with OpenSearch's usage[1] of Atom for results. The links with rel=(next|previous|first|last) aren't necessarily understood by knowing Atom alone. They might have specific meaning in OpenSearch's usage of those particular rel values. So is it silly to reuse a format, but declare a new content type as basically a specific usage of an existing content type? Or, more generally, how are people doing these things within your enterprise (where content type explosion seems to be worse than on the wild internet)? Thanks, --tim [0] - http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven [1] - http://www.opensearch.org/Specifications/OpenSearch/1.1#OpenSearch_description_elements
On Sep 30, 2009, at 12:56 PM, Tim Williams wrote: > Roy says[0], "A REST API should spend almost all of its descriptive > effort in defining the media type(s) used for representing resources > and driving application state, or in defining extended relation names > and/or hypertext-enabled mark-up for existing standard media types." > > I have situations where formats such as Atom are fairly well-suited to > represent my resources. If I re-use the format and keep the content > type "atom+xml", then client's wouldn't know my specific usage of atom > (e.g. semantics behind some of my "rel" attributes). Link relations are universal, independent of the format in which they are found, though perhaps only used with a target that is in a specific format (by practice, not by standard). The fact that some media type specifications also introduce new link relations does not make the relations specific to those media types. ....Roy
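[Editor's sketch] Roy's point, that relation names carry their meaning independently of the format they appear in, can be illustrated with a short Python sketch. The feed, URIs, and rel values below are hypothetical; the client dispatches purely on the relation name and would treat "next" the same way in any hypermedia format:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# A hypothetical Atom feed served as plain application/atom+xml, carrying
# OpenSearch-style paging relations alongside ordinary Atom links.
FEED = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="self" href="http://example.org/results?page=2"/>
  <link rel="next" href="http://example.org/results?page=3"/>
  <link rel="previous" href="http://example.org/results?page=1"/>
</feed>"""

def links_by_rel(atom_xml, rel):
    """Return the hrefs of all links with the given relation name.

    The relation's meaning comes from its (registered) name, not from
    the media type of the document the link happens to appear in.
    """
    root = ET.fromstring(atom_xml)
    return [link.attrib["href"]
            for link in root.iter(ATOM_NS + "link")
            if link.attrib.get("rel") == rel]

print(links_by_rel(FEED, "next"))  # ['http://example.org/results?page=3']
```

A client built this way needs no new content type to page through results; it only needs to know the "next"/"previous" relations, wherever they are registered.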
I wonder if there is a subtle distinction between (i) the server organizing
its URI space in a predictable manner and (ii) the client using that
knowledge to construct/derive URIs to manipulate the server? (i) does not
seem to be in violation of REST, whereas (ii) seems to be a violation of
HATEOAS.
On Fri, Sep 25, 2009 at 2:23 AM, Stefan Tilkov <stefan.tilkov@...>wrote:
>
>
> On Sep 24, 2009, at 11:41 PM, Noah Campbell wrote:
>
> "If you are specifying a URI naming scheme in your API (like
> /app/person/{id}) then your API is RPC, not REST."
>
>
> I disagree with this; it may be non-RESTful, but that doesn't make it RPC -
> rather some intermediate first step towards a RESTful API (and much better
> than the typical SOAP/WSDL HTTP abuse).
>
> Subbu has written a great article on the topic:
> http://www.infoq.com/articles/subbu-allamaraju-rest
>
> Best,
> Stefan
>
>
>
--
Bediako George
Partner - Lucid Technics, LLC
Think Clearly, Think Lucid
www.lucidtechnics.com
(p) 202.683.7486 (f) 703.563.6279
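[Editor's sketch] The distinction drawn above between (i) a predictable server-side URI layout and (ii) a client exploiting that layout can be made concrete with a small Python sketch. All URIs and the representation shape are invented for illustration:

```python
# (i) The server may lay out its URIs predictably; that is its private
#     business and violates nothing by itself.
# (ii) The trouble starts when the *client* bakes that layout in.

def uri_constructing_client(person_id):
    # (ii): couples the client to the server's current URI layout.
    return "http://example.org/app/person/%d" % person_id

def link_following_client(representation, rel):
    # HATEOAS style: the client only dereferences URIs the server handed
    # it. `representation` stands in for a parsed response body.
    return representation["links"][rel]

doc = {"links": {"self": "http://example.org/app/person/101",
                 "account": "http://example.org/accounts/010123101"}}

# Both resolve to the same URI today...
assert uri_constructing_client(101) == link_following_client(doc, "self")
# ...but only the constructing client breaks if the server reorganises
# its URI space, because only it depends on the /app/person/{id} shape.
```

The link-following client's only fixed point is the entry URI and the relation names, which is exactly the contract HATEOAS asks for.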
On Wed, Sep 30, 2009 at 4:06 PM, Roy T. Fielding <fielding@...> wrote: > On Sep 30, 2009, at 12:56 PM, Tim Williams wrote: > >> Roy says[0], "A REST API should spend almost all of its descriptive >> effort in defining the media type(s) used for representing resources >> and driving application state, or in defining extended relation names >> and/or hypertext-enabled mark-up for existing standard media types." >> >> I have situations where formats such as Atom are fairly well-suited to >> represent my resources. If I re-use the format and keep the content >> type "atom+xml", then client's wouldn't know my specific usage of atom >> (e.g. semantics behind some of my "rel" attributes). > > Link relations are universal, independent of the format in > which they are found, though perhaps only used with a target > that is in a specific format (by practice, not by standard). > The fact that some media type specifications also introduce > new link relations does not make the relations specific to > those media types. Brilliant, a little googling turns up this[0] and this[1], which was good news - despite the confusing title on the first one. I've been "doing" it right by chance so far. The relations (and their registration) seem to be the key to HATEOAS. Is this the sorta stuff that would have been in the missing chapter - or have I simply missed it? Thanks, --tim [0] - http://www.iana.org/assignments/link-relations/link-relations.xhtml [1] - http://tools.ietf.org/html/draft-nottingham-http-link-header-06#section-6.2
On Sep 30, 2009, at 11:11 PM, Bediako George wrote:
>
>
> I wonder if there is a subtle distinction between (i) the server
> organizing its URI space in a predictable manner and (ii) the client
> using that knowledge to construct/derive URIs to manipulate the
> server? (i) does not seem to be in violation of REST, whereas (ii)
> seems to be a violation of HATEOAS.
(i) is a violation of REST if it is part of the contract between
server and client.
It's as easy as this: once the client makes use of the 'predictable
way' of organisation, the server can no longer change the way it
organises the URI space. REST deliberately aims to avoid that.
Jan
>
> On Fri, Sep 25, 2009 at 2:23 AM, Stefan Tilkov <stefan.tilkov@...
> > wrote:
>
>
> On Sep 24, 2009, at 11:41 PM, Noah Campbell wrote:
>
>> "If you are specifying a URI naming scheme in your API (like /app/
>> person/{id}) then your API is RPC, not REST."
>
> I disagree with this; it may be non-RESTful, but that doesn't make
> it RPC - rather some intermediate first step towards a RESTful API
> (and much better than the typical SOAP/WSDL HTTP abuse).
>
> Subbu has written a great article on the topic:
> http://www.infoq.com/articles/subbu-allamaraju-rest
>
> Best,
> Stefan
>
>
>
>
> --
> Bediako George
> Partner - Lucid Technics, LLC
> Think Clearly, Think Lucid
> www.lucidtechnics.com
> (p) 202.683.7486 (f) 703.563.6279
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
2009/9/21 Bill Burke <bburke@...>: > Benjamin Carlyle wrote: >> I think that many of us are at the point with where it will be useful to >> start moving forwards with REST-aligned specifications and supporting >> standards bodies that is targeted "off the web" at enterprises. This could >> roughly take one of two forms: >> 1. A set of specifications based on a range of technologies that focus on >> REST constraint compliance, or >> 2. A more HTTP-centric set of specifications working on providing features >> not available on the Web and probably not consistent with REST constraints >> such as pub/sub, transactions, reliable POST, etc. >> >> Methinks a little from column A, and a little from column B. ... > These are all great points. The output of REST-* is not meant to be an > academic exercise. We want to create specifications that can be implemented > and solve specific problems. While our goal is to be architecturally pure, > software, in general, is very rarely architecturally pure when in its final > form. As a result, initial iterations (and even final ones) may be a mix of > both HTTP and REST-centric designs. Remember, WE JUST STARTED! I know how you feel. The architectures I build at the moment are all real-time, so while REST is a template I have weakened statelessness and essentially substituted cache for pub/sub. This is actually changing over time and the architecture is just starting to come into closer alignment with REST as developers begin to understand that even in a real-time system poll+cache turns out to solve many problems better. Incidentally, I have a neat little HTTP-friendly pub/sub protocol that I'm currently working to free from IP strictures within my organisation. I'd love to talk to you guys about it.
In trying to keep things clean from an architectural perspective, I would carefully consider defining up front a set of portfolios for different standards and the rules that must apply to each in order to fit a given portfolio. For example: * REST-style HTTP: Fully complies with REST style architecture in word and in deed. No transactions. No pessimistic locks. No pub/sub. Fully stateless and all the rest. May still use features not widely deployed on the Web. * Transitional HTTP: Breaks REST constraints in some way, probably by breaking statelessness of client/server. Very likely they still fit the Uniform Interface/Contract constraint but pay relatively little heed to other constraints. These specifications are designed to fit with existing enterprise use, to allow that use to continue into the brave new HTTP world, and to allow enterprises to more easily transition to REST-style HTTP should they find they wish to do so. * HTTP Integration: Mappings of HTTP semantics into other contexts to allow services that don't natively speak HTTP to utilise a simple gateway to achieve HTTP integration. > This, IMO, does not mean we should change the name of the site or to coin a > new buzzword. The goal we are striving for is to be RESTful. REST is our > ideal. You understand, of course, that this is a subject that the REST camp are very touchy about :) Roy has stated his preference many times for new architectural styles to use their own names and not to try and reuse or redefine REST. Roy has also worked hard for many years to avoid REST the architectural style being confused with HTTP or the Web. Calling this REST-* is akin to renaming web services to SOA-* ;) The backlash you have already seen here I think is similar to what you could expect from that renaming. I can see why you want to proceed this way, for marketing and political expedience... but it is seen I think by Roy as a throwback to the dark old days. When Roy's not happy the community's not happy. 
I think you may be able to improve the situation by careful wording and classification of standards, but it now looks inevitable that we will see the ad come out "now you can buy a REST, and you don't even have to change your architecture... the middleware will do it all for you". On a practical level this may mean you get less involvement by the people who could really make a difference in defining these specifications, and that would be a shame. I guess that if the name is fixed (as it would seem to be) we will all just have to live within the strictures of that situation and do what we can to clarify within that scope. Perhaps my suggestions above could help in that regard. Benjamin.
Bill, Here are mine: http://soundadvice.id.au/blog/2009/06/13#stateless :) Benjamin. 2009/9/22 Bill Burke <bburke@...> > Here's my thoughts on the compatibility of Transactions and REST. Maybe > > now you can see where I am coming from. > http://bill.burkecentral.com/2009/09/21/credit-cards-transactions-and-rest/ > >
Benjamin Carlyle wrote: > 2009/9/21 Bill Burke <bburke@...>: >> Benjamin Carlyle wrote: >>> I think that many of us are at the point with where it will be useful to >>> start moving forwards with REST-aligned specifications and supporting >>> standards bodies that is targeted "off the web" at enterprises. This could >>> roughly take one of two forms: >>> 1. A set of specifications based on a range of technologies that focus on >>> REST constraint compliance, or >>> 2. A more HTTP-centric set of specifications working on providing features >>> not available on the Web and probably not consistent with REST constraints >>> such as pub/sub, transactions, reliable POST, etc. >>> >>> Methinks a little from column A, and a little from column B. > ... >> These are all great points. The output of REST-* is not meant to be an >> academic exercise. We want to create specifications that can be implemented >> and solve specific problems. While our goal is to be architecturally pure, >> software, in general, is very rarely architecturely pure when in its final >> form. As a result, initial iterations (and even final ones) maybe a mix of >> both HTTP and REST-centric designs. Remember, WE JUST STARTED! > > I know how you feel. The architectures I build at the moment are all > real-time, so while REST is a template I have weakened statelessness > and essentially substituted cache for pub/sub. This is actually > changing over time and the architecture is just starting to come into > closer alignment with REST as developers begin to understand that even > in a real-time system poll+cache turns out to solve many problems > better. > "...starting to come into closer alignment with REST." Some of the people that will be getting involved simply did a HTTP facade over existing APIs (Transaction guy was one. JBPM guys did another that wasn't published on REST-*.org). These guys don't know anything about REST, ....YET....I need their help. There is no possible way I can do all the work. 
So initial submissions won't be perfect until somebody guides them in the right direction. > Incidentally, I have a neat little HTTP-friendly pub/sub protocol that > I'm working at the moment to free from IP strictures within my > organisation. I'd love to talk to you guys about it. > I'm looking at pubsubhubbub and webhooks as well for ideas. > * Transitional HTTP: Breaks REST constraints in some way, probably by > breaking statelessness of client/server. I definitely see that happening now and then. Some things will be slightly RPCish as well until they can be refactored. Sometimes I see myself walking a very fine line when defining link semantics, and many times they seem to look RPCish. >> This, IMO, does not mean we should change the name of the site or to coin a >> new buzzword. The goal we are striving for is to be RESTful. REST is our >> ideal. > > You understand, of course, that this is a subject that the REST camp > are very touchy about :) Roy has stated his preference many times for > new architectural styles to use their own names and not to try and > reuse or redefine REST. Roy has also worked hard for many years to > avoid REST the architectural style being confused with HTTP or the > Web. Calling this REST-* is akin to renaming web services to SOA-* ;) > The backlash you have already seen here I think is similar to what you > could expect from that renaming. > The REST camp should be touchy. They should be hostile. We need skepticism. I just hope that we get the acknowledgement when we do produce something that is RESTful. BTW, I'm starting to see the same skepticism and hostility from the WS-* crowd. Well, specifically JJ, but I think REST is a swear word in JJ's dialect of French. IMO, this is all good. Enterprise IT needs REST. The WS-* guys need to realize they need an architectural shift. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Wed, Sep 30, 2009 at 6:52 PM, Bill Burke <bburke@...> wrote: > > > Benjamin Carlyle wrote: >> 2009/9/21 Bill Burke <bburke@...>: >>> Benjamin Carlyle wrote: >>>> I think that many of us are at the point with where it will be useful to >>>> start moving forwards with REST-aligned specifications and supporting >>>> standards bodies that is targeted "off the web" at enterprises. This could >>>> roughly take one of two forms: >>>> 1. A set of specifications based on a range of technologies that focus on >>>> REST constraint compliance, or >>>> 2. A more HTTP-centric set of specifications working on providing features >>>> not available on the Web and probably not consistent with REST constraints >>>> such as pub/sub, transactions, reliable POST, etc. >>>> >>>> Methinks a little from column A, and a little from column B. >> ... >>> These are all great points. The output of REST-* is not meant to be an >>> academic exercise. We want to create specifications that can be implemented >>> and solve specific problems. While our goal is to be architecturally pure, >>> software, in general, is very rarely architecturely pure when in its final >>> form. As a result, initial iterations (and even final ones) maybe a mix of >>> both HTTP and REST-centric designs. Remember, WE JUST STARTED! >> >> I know how you feel. The architectures I build at the moment are all >> real-time, so while REST is a template I have weakened statelessness >> and essentially substituted cache for pub/sub. This is actually >> changing over time and the architecture is just starting to come into >> closer alignment with REST as developers begin to understand that even >> in a real-time system poll+cache turns out to solve many problems >> better. >> > > "...starting to come into closer alignment with REST." Some of the > people that will be getting involved simply did a HTTP facade over > existing APIs (Transaction guy was one. JBPM guys did another that > wasn't published on REST-*.org). 
These guys don't know anything about > REST, ....YET....I need their help. There is no possible way I can do > all the work. So initial submissions won't be perfect somebody guides > them in the right direction. > >> Incidentally, I have a neat little HTTP-friendly pub/sub protocol that >> I'm working at the moment to free from IP strictures within my >> organisation. I'd love to talk to you guys about it. >> > > I'm looking at pubsubhubub and webhooks as well for ideas. > > >> * Transitional HTTP: Breaks REST constraints in some way, probably by >> breaking statelessness of client/server. > > I definitely see that happening now and then. Some things will be > slightly RPCish as well until they can become refactored. Sometimes I > see myself walking a very fine line when defining link semantics and > they seem to many times look RPCish. > > >>> This, IMO, does not mean we should change the name of the site or to coin a >>> new buzzword. The goal we are striving for is to be RESTful. REST is our >>> ideal. >> >> You understand, of course, that this is a subject that the REST camp >> are very touchy about :) Roy has stated his preference many times for >> new architectural styles to use their own names and not to try and >> reuse or redefine REST. Roy has also worked hard for many years to >> avoid REST the architectural style being confused with HTTP or the >> Web. Calling this REST-* is akin to renaming web services to SOA-* ;) >> The backlash you have already seen here I think is similar to what you >> could expect from that renaming. >> > > The REST camp should be touchy. They should be hostile. We need > skeptism. I just hope that we get the acknowledgement when we do > produce something that is RESTful. No doubt that you'll be able to produce something that's RESTful, but just because you can doesn't mean you should. Have you considered that these so-called "middleware services" might just be best left alone? 
I mean, the same REST constraints that evoke desired properties in much of my service-to-service interaction have predictable disadvantages that would evoke undesired properties in, say, my messaging architecture. I'm completely happy with a hybrid architecture with well-defined criteria for when to use what type of interaction and, given the seeming mismatch, I'm surprised if others aren't too... --tim
On Wed, Sep 30, 2009 at 1:00 AM, Jan Algermissen <algermissen1971@...> wrote: > I am thinking about using "hypermedia context"[1] for the notion of the > acquired linking (and document appearance) knowledge about a resource. > For example, when a client comes across a <collection> element in an > Atom Pub service document and if that <collection> includes a category > foo then the resource the <collection> element refers to is known by > the client to be in that certain 'hypermedia context'. > > A specification could name the described context as 'the foo > collection'. > > 'Hypermedia context' emphasizes that the 'classification' is all about > how the resource appears in the client's built-up application state. > > I am not quite there yet, but I think there are interesting ways to > formalize 'hypermedia context' as a set of individual 'link tests' that > evaluate to true. E.g. if have-link(x, 'edit-media', y) then y "can be > used to modify a Media Resource associated with that Entry"[2] > > It also touches on the idea of duck typing[3]. I like the direction of this idea. The idea of the hypermedia context as a nameable concept might remove the necessity to even informally think about the type/category/flavor of the resources. Are hypermedia contexts a way to categorize (potential) application states by the transition links and information available at that state? -- Peter Williams http://barelyenough.org
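[Editor's sketch] Jan's idea of formalizing a 'hypermedia context' as a set of link tests that evaluate to true could look roughly like the following Python sketch. The predicate names, rel values, and the context definition are all made up for illustration:

```python
def have_link(links, rel, target=None):
    """True if the built-up application state contains a link with
    relation `rel` (optionally pointing at a specific `target`)."""
    return any(r == rel and (target is None or t == target)
               for r, t in links)

def in_context(links, required_rels):
    """A resource is 'in' a hypermedia context when every link test in
    the context's definition evaluates to true."""
    return all(have_link(links, rel) for rel in required_rels)

# Links the client has accumulated about one resource: (rel, target) pairs.
state = [("edit", "http://example.org/entry/1"),
         ("edit-media", "http://example.org/media/1")]

# Duck typing: whatever this resource 'is', it quacks like an editable
# media entry, so the client may treat it as one.
EDITABLE_MEDIA_ENTRY = {"edit", "edit-media"}
print(in_context(state, EDITABLE_MEDIA_ENTRY))  # True
```

The context is named (EDITABLE_MEDIA_ENTRY) without ever assigning a type to the resource itself, which matches the duck-typing flavour of the proposal.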
On Wed, Sep 30, 2009 at 5:40 PM, Jan Algermissen <algermissen1971@...> wrote: > On Sep 30, 2009, at 11:11 PM, Bediako George wrote: > > I wonder if there is a subtle distinction between (i) the server > > organizing its URI space in a predictable manner and (ii) the client > > using that knowledge to construct/derive URIs to manipulate the > > server? (i) does not seem to be in violation of REST, whereas (ii) > > seems to be a violation of HATEOAS. > > (i) is a violation of REST if it is part of the contract between > server and client. > > It's as easy as this: once the client makes use of the 'predictable > way' of organisation the server cannot anymore change the way it > organises the URI space. REST deliberately aims to avoid that. Either both are or neither are. If (i) means "organizing its URI space in a stable manner", ie so that bookmarking URIs works over long periods of time, then the practice of bookmarking is a violation of REST -- assuming that REST means minimizing dependency on stable URIs and instead navigating to potentially transient URIs via stable media types and HATEOAS. I raised this point <http://tech.groups.yahoo.com/group/rest-discuss/message/12093> back in February. Every time a client (whether a user client - aka browser - or not) bookmarks a link (aka stores it on the client side), it creates a dependency on the current URI space -- however it is organized. It doesn't matter if the URIs are opaque GUIDs or human-intuitive path-based. REST seems to suggest that creating (or encouraging) such dependencies is uncool. But Tim Berners-Lee et al seem to believe that being able to depend on stable URIs IS cool <http://www.w3.org/Provider/Style/URI>: *It is the duty of a [URI space designer] to allocate URIs which you will be able to stand by in 2 years, in 20 years, in 200 years. This needs thought, and organization, and commitment.* (TBL gives advice on how to achieve such URI stability <http://www.w3.org/Provider/Style/URI>.) 
If a cool (well-designed) URI space should be stable for 200 years, then it should be OK for clients to either bookmark its URIs as received or to generate them from templates. So if REST discourages either URI bookmarking OR generating URIs from templates, then it is in tension with one of the fundamental principles of the Web: Cool URIs don't change. I think this is somewhat ironic. I'm not sure I agree with Roy that this tension is like sexual tension <http://tech.groups.yahoo.com/group/rest-discuss/message/12101>, but it IS an interesting comparison(!): *If there is a tension between the desire to bookmark and the fact that REST encourages folks to break up an application into a state machine of reusable resource states, then I would consider it to be more like sexual tension. Just because you have it doesn't mean it is bad, and one way to improve things is to make the more important resource links look sexier than the less important ones.* -- Nick
On Wed, Sep 30, 2009 at 5:50 PM, Benjamin Carlyle < benjamincarlyle@...> wrote: > 2009/9/21 Bill Burke <bburke@...>: > > > Benjamin Carlyle wrote: > In trying to keep things clean from an architectural perspective, I > would carefully consider defining up front a set of portfolios for > different standards and that rules that must apply to each in order to > fit a given portfolio. For example: > * REST-style HTTP: Fully complies with REST style architecture in word > and in deed. No transactions. No pessimistic locks. No pub/sub. Fully > stateless and all the rest. May still use features not widely deployed > on the Web. > * Transitional HTTP: Breaks REST constraints in some way, probably by > breaking statelessness of client/server. Very likely they still fit > the Uniform Interface/Contract constraint but pay relatively little > heed to other constraints. These specifications are designed to fit > with existing enterprise use, to allow that use to continue into the > brave new HTTP world, and to allow enterprises to more easily > transition to REST-style HTTP should they find they wish to do so. > * HTTP Integration: Mappings of HTTP semantics into other contexts to > allow services who don't natively speak HTTP to utilise a simple > gateway to achieve HTTP integration. > > > This, IMO, does not mean we should change the name of the site or to coin a > > new buzzword. The goal we are striving for is to be RESTful. REST is our > > ideal. > > You understand, of course, that this is a subject that the REST camp > are very touchy about :) Roy has stated his preference many times for > new architectural styles to use their own names and not to try and > reuse or redefine REST. Roy has also worked hard for many years to > avoid REST the architectural style being confused with HTTP or the > Web. Calling this REST-* is akin to renaming web services to SOA-* ;) > The backlash you have already seen here I think is similar to what you > could expect from that renaming. 
Since the long run goal of REST-* is to adhere to all the REST constraints, but in the short run it may adhere to LESS than all the constraints, why not change the name to RESTless! Actually, the CEO of service-now.com recently introduced this malapropism <http://www.cio.com/article/448770/Service_now.com_Starts_Up_SOA> (!): *Its CEO's vision for the startup's underlying architecture was that the software must be "simple, approachable, configurable, and easy to integrate" and had to be as "restless and stateless as possible."* (FYI, he corrects himself in the comments to the article.) I think RESTless would be a great name for this effort! How much less than REST the style becomes will be up to those contributing to the effort. Since the name itself conveys that the "style" is a relaxation of some of REST's constraints, Roy et al should not be too offended since he encourages thoughtful experimentation with the relaxation of REST's constraints. -- Nick
Perhaps a 'restful bookmark' would include the 'hypermedia path' taken to find that URI so that it can backtrack if the URI space changes? Alexandros On Thu, Oct 1, 2009 at 12:19 PM, Nick Gall <nick.gall@...> wrote: > > > On Wed, Sep 30, 2009 at 5:40 PM, Jan Algermissen <algermissen1971@...> > wrote: > > On Sep 30, 2009, at 11:11 PM, Bediako George wrote: > > > I wonder if there is a subtle distinction between (i) the server > > > organizing its URI space in a predictable manner and (ii) the client > > > using that knowledge to construct/derive URIs to manipulate the > > > server? (i) does not seem to be in violation of REST, whereas (ii) > > > seems to be a violation of HATEOAS. > > > > (i) is a violation of REST if it is part of the contract between > > server and client. > > > > It's as easy as this: once the client makes use of the 'predictable > > way' of organisation the server cannot anymore change the way it > > organises the URI space. REST deliberately aims to avoid that. > > Either both are or neither are. > If (i) means "organizing its URI space in a stable manner", ie so that > bookmarking URIs works over long periods of time, then the practice of > bookmarking is a violation of REST -- assuming that REST means minimizing > dependency on stable URIs and instead navigating to potentially transient > URIs via stable media types and HATEOAS. > I raised this point<http://tech.groups.yahoo.com/group/rest-discuss/message/12093>back in February. Every time a client (whether a user client - aka browser - > or not) bookmarks a link (aka stores it on the client side), it creates a > dependency on the current URI space -- however it is organized. It doesn't > matter if the URIs are opaque GUIDs or human-intuitive path-based. REST > seems to suggest that creating (or encouraging) such dependencies is uncool. 
> > But Tim Berners-Lee et al seem to believe that being able to depend on > stable URIs IS cool <http://www.w3.org/Provider/Style/URI>: > > *It is the the duty of a [URI space designer] to allocate URIs which you > will be able to stand by in 2 years, in 20 years, in 200 years. This needs > thought, and organization, and commitment.* > > > (TBL gives advice on how to achieve such URI stability<http://www.w3.org/Provider/Style/URI> > .) > > If a cool (well-designed) URI space should be stable for 200 years, then it > should be OK for clients to either bookmark them as received or to generate > them from templates. > > So if REST discourages either URI bookmarking OR generating URIs from > templates, then it is in tension with one of the fundamental principles of > the Web: Cool URIs don't change. I think this is somewhat ironic. > > I'm not sure I agree with Roy that this tension is like sexual tension<http://tech.groups.yahoo.com/group/rest-discuss/message/12101>, > but it IS an interesting comparison(!): > > *If there is a tension between the desire to bookmark and the fact that > REST encourages folks to break up an application into a state machine of > reusable resource states, then I would consider it to be more like sexual > tension. Just because you have it doesn't mean it is bad, and one way to > improve things is to make the more important resource links look sexier than > the less important ones.* > > > -- Nick > >
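[Editor's sketch] Alexandros's 'restful bookmark' above could be sketched as follows in Python; all URIs and rel values are hypothetical. The bookmark records the chain of link traversals so a client can re-walk the relations from the entry point if the stored URI goes stale. Of course, this only helps where each hop is deterministic:

```python
from collections import namedtuple

Hop = namedtuple("Hop", "uri rel")

class RestfulBookmark:
    """A bookmark that remembers the hypermedia path, not just the URI."""

    def __init__(self, entry_uri):
        # The entry point is the only URI the client knows a priori.
        self.path = [Hop(entry_uri, None)]

    def follow(self, rel, target_uri):
        # Record each link traversal as the client navigates.
        self.path.append(Hop(target_uri, rel))
        return target_uri

    @property
    def uri(self):
        return self.path[-1].uri

    def replay_rels(self):
        # If self.uri stops resolving, re-walk these relations starting
        # from the entry point to rediscover the resource.
        return [hop.rel for hop in self.path[1:]]

bm = RestfulBookmark("http://example.org/")
bm.follow("orders", "http://example.org/orders/")
bm.follow("item", "http://example.org/orders/42")
print(bm.uri)            # http://example.org/orders/42
print(bm.replay_rels())  # ['orders', 'item']
```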
On Wed, Sep 30, 2009 at 4:06 PM, Roy T. Fielding <fielding@...> wrote: > On Sep 30, 2009, at 12:56 PM, Tim Williams wrote: > >> Roy says[0], "A REST API should spend almost all of its descriptive >> effort in defining the media type(s) used for representing resources >> and driving application state, or in defining extended relation names >> and/or hypertext-enabled mark-up for existing standard media types." >> >> I have situations where formats such as Atom are fairly well-suited to >> represent my resources. If I re-use the format and keep the content >> type "atom+xml", then client's wouldn't know my specific usage of atom >> (e.g. semantics behind some of my "rel" attributes). > > Link relations are universal, independent of the format in > which they are found, though perhaps only used with a target > that is in a specific format (by practice, not by standard). > The fact that some media type specifications also introduce > new link relations does not make the relations specific to > those media types. The more I look into link relations, the messier it gets. I'm wondering if there's a mailing list/group/etc where discussions on link relations take place? For example, I'm looking for an appropriate link relation for: o) a project descriptor (DOAP) - it seems that rel=meta is in common use but I haven't found a spec where it's defined yet. o) a system's status (something like this RSS feed[0]) - nothing found. and o) is there any thought on relation namespacing - more flexible than html profiles? This question is less about these specific questions (though if you have input I'd like it) and more generally wondering where the discussion takes place? It seems that there are efforts at the whatwg[1] and with this ietf[2] draft. Are people just defining their own [as I had previously done] or is there some other discussion list where this sort of conversation takes place? 
Thanks, --tim [0] - http://status.aws.amazon.com/ [1] - http://wiki.whatwg.org/wiki/RelExtensions [2] - http://tools.ietf.org/html/draft-nottingham-http-link-header-06
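[Editor's sketch] For anyone experimenting with the Link header from the draft in [2], here is a deliberately minimal Python parser. It handles only the common `<uri>; rel="x"` shape; a real parser must also cope with quoting rules, multiple rel values, and extension parameters, so treat this as a sketch rather than a conforming implementation:

```python
import re

def parse_link_header(value):
    """Parse a simple Link header value into a {rel: uri} dict.

    Assumes one rel parameter per link and no commas inside URIs or
    quoted strings -- enough for a sketch, not for production.
    """
    links = {}
    for part in value.split(","):
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            uri, rel = m.groups()
            links[rel] = uri
    return links

hdr = ('<http://example.org/?page=2>; rel="next", '
       '<http://example.org/?page=1>; rel="prev"')
print(parse_link_header(hdr))
# {'next': 'http://example.org/?page=2', 'prev': 'http://example.org/?page=1'}
```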
On Thu, Oct 1, 2009 at 6:05 AM, Alexandros Marinos <al3xgr@...> wrote: > > Perhaps a 'restful bookmark' would include the 'hypermedia path' > taken to find that URI so that it can backtrack if the > URI-space changes? I don't think this works for the most common use case of bookmarking in automata. Namely, using URIs to identify a resource in a remote system that was chosen by a human in an arbitrary manner. For example, an order is handed to an order fulfillment system that contains a set of URIs from the inventory system. Each URI indicates a product that should be shipped to the customer. There is no valid way to store the hypermedia path to those products because the user probably chose them by searching and selecting the third item in the results. But tomorrow the third item for that search term might be different. Actually, I am having a hard time imagining a situation where the hypermedia path could be stored reliably enough to be useful. -- Peter Williams http://barelyenough.org
The creation of bookmark/URI dependencies is unavoidable, and it is always up to the client. Servers have the necessary mechanisms with HTTP by which to indicate that a resource has changed location - so the choice to accommodate this kind of 'URI dependent' client behavior for a given resource is down to the implementation - Mike Nick Gall wrote > > Either both are or neither are. > > If (i) means "organizing its URI space in a stable manner", ie so that > bookmarking URIs works over long periods of time, then the practice of > bookmarking is a violation of REST -- assuming that REST means > minimizing dependency on stable URIs and instead navigating to > potentially transient URIs via stable media types and HATEOAS. > > I raised this point > <http://tech.groups.yahoo.com/group/rest-discuss/message/12093> back > in February. Every time a client (whether a user client - aka browser > - or not) bookmarks a link (aka stores it on the client side), it > creates a dependency on the current URI space -- however it is > organized. It doesn't matter if the URIs are opaque GUIDs or > human-intuitive path-based. REST seems to suggest that creating (or > encouraging) such dependencies is uncool. > > But Tim Berners-Lee et al seem to believe that being able to depend on > stable URIs IS cool <http://www.w3.org/Provider/Style/URI>: > > /It is the the duty of a [URI space designer] to allocate URIs > which you will be able to stand by in 2 years, in 20 years, in 200 > years. This needs thought, and organization, and commitment./ > > > (TBL gives advice on how to achieve such URI stability > <http://www.w3.org/Provider/Style/URI>.) > > If a cool (well-designed) URI space should be stable for 200 years, > then it should be OK for clients to either bookmark them as received > or to generate them from templates. 
> > So if REST discourages either URI bookmarking OR generating URIs from > templates, then it is in tension with one of the fundamental > principles of the Web: Cool URIs don't change. I think this is > somewhat ironic. > > I'm not sure I agree with Roy that this tension is like sexual tension > <http://tech.groups.yahoo.com/group/rest-discuss/message/12101>, but > it IS an interesting comparison(!): > > /If there is a tension between the desire to bookmark and the fact > that REST encourages folks to break up an application into a state > machine of reusable resource states, then I would consider it to > be more like sexual tension. Just because you have it doesn't mean > it is bad, and one way to improve things is to make the more > important resource links look sexier than the less important ones./ > > > -- Nick > > >
On Thu, Oct 1, 2009 at 6:54 AM, Mike Kelly <mike@...> wrote: > The creation of bookmark/URI dependency is unavoidable and their > creation is always up to the client. Servers have the necessary > mechanisms with HTTP by which to indicate that a resource has changed > location - so the choice to accommodate this kind of 'URI dependent' > client behavior for a given resource is down to the implementation That is true, of course, but it is also deeply unsatisfactory. One of the things that REST does rather effectively is allow for serendipitous reuse by reducing the level of cross domain agreement required for such reuse. Saying it is up to the implementation is really saying that the client and server have to agree before it is ok for a client to store a link. A general consensus that storing links for reuse at a later time is presumptively acceptable would facilitate more serendipitous reuse. The lack of such a consensus means that many uses of any service are highly dodgy until you have tracked down its implementer and extracted an agreement that you may store links for later reuse. Of course, this raises the implementation effort required for servers. However, in an HTTP environment the costs of this are quite manageable. In my mind it seems well worth the costs in almost all situations. Particularly when clients exist that are outside of the control of the service implementer. -- Peter Williams http://barelyenough.org
On Thu, Oct 1, 2009 at 8:54 AM, Mike Kelly <mike@...> wrote: > > The creation of bookmark/URI dependency is unavoidable and their creation is always up to the client. Servers have the necessary mechanisms with HTTP by which to indicate that a resource has changed location - so the choice to accommodate this kind of 'URI dependent' client behavior for a given resource is down to the implementation If a server implements such a mechanism (a requirement of being cool), then a client can legitimately depend on a bookmark or a URI template and rely upon the server to map from the old URI space to the new one (eg redirect). The only downside to the client in relying on a bookmark or URI template is some loss in performance due to the redirection. The redirection mechanism could provide a URI (via the HTTP link header) to the new template or at least a document for developers explaining the change and what they must do to update the client code. -- Nick
Some posts ago I suggested REST-- for REST less less, but I don't think the idea was accepted :) ... Probably it's not as marketable as the other... Nick Gall wrote: > > > On Wed, Sep 30, 2009 at 5:50 PM, Benjamin Carlyle > <benjamincarlyle@... > <mailto:benjamincarlyle@...>> wrote: > > 2009/9/21 Bill Burke <bburke@... <mailto:bburke@...>>: > > > > > Benjamin Carlyle wrote: > > In trying to keep things clean from an architectural perspective, I > > would carefully consider defining up front a set of portfolios for > > different standards and the rules that must apply to each in order to > > fit a given portfolio. For example: > > * REST-style HTTP: Fully complies with REST style architecture in word > > and in deed. No transactions. No pessimistic locks. No pub/sub. Fully > > stateless and all the rest. May still use features not widely deployed > > on the Web. > > * Transitional HTTP: Breaks REST constraints in some way, probably by > > breaking statelessness of client/server. Very likely they still fit > > the Uniform Interface/Contract constraint but pay relatively little > > heed to other constraints. These specifications are designed to fit > > with existing enterprise use, to allow that use to continue into the > > brave new HTTP world, and to allow enterprises to more easily > > transition to REST-style HTTP should they find they wish to do so. > > * HTTP Integration: Mappings of HTTP semantics into other contexts to > > allow services who don't natively speak HTTP to utilise a simple > > gateway to achieve HTTP integration. > > > > > This, IMO, does not mean we should change the name of the site or > to coin a > > > new buzzword. The goal we are striving for is to be RESTful. REST > is our > > > ideal. > > > > You understand, of course, that this is a subject that the REST camp > > are very touchy about :) Roy has stated his preference many times for > > new architectural styles to use their own names and not to try and > > reuse or redefine REST. 
Roy has also worked hard for many years to > > avoid REST the architectural style being confused with HTTP or the > > Web. Calling this REST-* is akin to renaming web services to SOA-* ;) > > The backlash you have already seen here I think is similar to what you > > could expect from that renaming. > > Since the long run goal of REST-* is to adhere to all the REST > constraints, but in the short run it may adhere to LESS than all the > constraints, why not change the name to RESTless! Actually, the CEO of > service-now.com recently introduced this Malapropism > <http://www.cio.com/article/448770/Service_now.com_Starts_Up_SOA>(!): > > /Its CEO's vision for Service-now.com for the startup's > underlying architecture was that the software must be "simple, > approachable, configurable, and easy to integrate" and had to be > as "restless and stateless as possible."/ > > > (FYI, he corrects himself in the comments to the article.) > > I think RESTless would be a great name for this effort! How much less > than REST the style becomes will be up to those contributing to the > effort. Since the name itself conveys that the "style" is a relaxation > of some of REST's constraints, Roy et al should not be too offended > since he encourages thoughtful experimentation with the relaxation of > REST's constraints. > > -- Nick > >
On Thu, Oct 1, 2009 at 11:20 AM, Mike Kelly <mike@...> wrote: > If the inefficiency of redirected bookmarks was of potential concern to a client, then a solution to that would be to react to a 301 Moved Permanently response by updating the referring bookmark's URI with the value of the response's Location header. Agreed. > It is my understanding that generating URIs from templates should be avoided, and that leveraging HATEOAS from pre-established entry points (i.e. "bookmarks") is the RESTful alternative. Any change to this entry point (i.e. it responds with a 301) could be handled as described above. You are making the entry point URI vs. transitional URI distinction I raised back in Feb: It seems that perhaps there is an implicit REST constraint that is beginning to become more explicit. Roughly, REST distinguishes two types of URIs: 1. "entry point" type URIs, which may be bookmarked indefinitely. These are Cool URIs. 2. "transitional" type URIs, which may not be bookmarked indefinitely. These are unCool URIs. I call them "transitional" given that their role is typically to enable transition to the next state. I'm not wedded to the names (you could call them internal/external), but I do think this distinction between types of URIs is an important aspect of REST that, so far, has not been clearly outlined. It certainly seems to be in play in the permathread debates regarding whether "URIs should be RESTful or not" (and whether that designation is even a meaningful one). I also think the distinction is a bit at odds with the common understanding of URIs on the Web that ANY URI should be a bookmarkable URI, ie that ALL URIs should strive to be Cool. Feel free to make such a distinction, but acknowledge that such a distinction goes against the Web ethos that ALL URIs should be cool (ie should not change) not just entry point URIs. In other words, REST seems to imply that only entry point URIs need be cool, while Web Architecture implies that ALL URIs should be cool. 
Someone should justify the (sexual) tension between these principles. -- Nick
I think I forgot to include the list in the last email by mistake: Nick Gall wrote: > On Thu, Oct 1, 2009 at 8:54 AM, Mike Kelly <mike@...> wrote: > >> The creation of bookmark/URI dependency is unavoidable and their creation is always up to the client. Servers have the necessary mechanisms with HTTP by which to indicate that a resource has changed location - so the choice to accommodate this kind of 'URI dependent' client behavior for a given resource is down to the implementation >> > > If a server implements such a mechanism (a requirement of being cool), > then a client can legitimately depend on a bookmark or a URI template > and rely upon the server to map from the old URI space to the new one > (eg redirect). The only downside to the client in relying on a > bookmark or URI template is some loss in performance due to the > redirection. The redirection mechanism could provide a URI (via the > HTTP link header) to the new template or at least a document for > developers explaining the change and what they must do to update the > client code. > > -- Nick > > > If the inefficiency of redirected bookmarks was of potential concern to a client, then a solution to that would be to react to a 301 Moved Permanently response by updating the referring bookmark's URI with the value of the response's Location header. http://tools.ietf.org/html/rfc2616#section-10.3.2 "10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise." It is my understanding that generating URIs from templates should be avoided, and that leveraging HATEOAS from pre-established entry points (i.e. "bookmarks") is the RESTful alternative. Any change to this entry point (i.e. 
it responds with a 301) could be handled as described above. - Mike
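The bookmark re-linking behaviour Mike describes (react to a 301 Moved Permanently by updating the stored URI with the response's Location header, per RFC 2616 section 10.3.2) can be sketched as a small client. This is an illustrative sketch, not any real library's API; the class name, the `fetch` callable, and its `(status, headers, body)` return shape are all assumptions made up for the example.

```python
# Sketch of a client-side bookmark store with "link editing": when a
# dereference yields a 301, the stored URI is rewritten to the new
# Location, so later requests skip the redirect entirely.
# All names here are illustrative, not from any real library.

class Bookmarks:
    def __init__(self, fetch):
        self._fetch = fetch     # callable: uri -> (status, headers, body)
        self._uris = {}         # bookmark name -> current URI

    def add(self, name, uri):
        self._uris[name] = uri

    def get(self, name):
        """Dereference a bookmark, re-linking permanently moved resources."""
        uri = self._uris[name]
        status, headers, body = self._fetch(uri)
        while status == 301:
            uri = headers["Location"]
            self._uris[name] = uri   # edit the stored link, as RFC 2616 suggests
            status, headers, body = self._fetch(uri)
        return status, body
```

The only cost of relying on the old URI is one extra round trip the first time; every subsequent `get` goes straight to the new location.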
Nick Gall wrote: > > It is my understanding that generating URIs from templates should be > avoided, and that leveraging HATEOAS from pre-established entry points > (i.e. "bookmarks") is the RESTful alternative. Any change to this > entry point (i.e. it responds with a 301) could be handled as > described above. > > You are making the entry point URI vs. transitional URI distinction I > raised back in Feb: > > It seems that perhaps there is an implicit REST constraint that is > beginning to become more explicit. Roughly, REST distinguishes two > types of URIs: > > 1. "entry point" type URIs, which may be bookmarked > indefinitely. These are Cool URIs. > 2. "transitional" type URIs, which may not be bookmarked > indefinitely. These are unCool URIs. I call them > "transitional" given that their role is typically to enable > transition to the next state. > > I'm not wedded to the names (you could call them > internal/external), but I do think this distinction between types > of URIs is an important aspect of REST that, so far, has not been > clearly outlined. It certainly seems to be in play in the > permathread debates regarding whether "URIs should be RESTful or > not" (and whether that designation is even a meaningful one). I > also think the distinction is a bit at odds with the common > understanding of URIs on the Web that ANY URI should be a > bookmarkable URI, ie that ALL URIs should strive to be Cool. > > > Feel free to make such a distinction, but acknowledge that such a > distinction goes against the Web ethos that ALL URIs should be cool > (ie should not change) not just entry point URIs. > > In other words, REST seems to imply that only entry point URIs need be > cool, while Web Architecture implies that ALL URIs should be cool. > Someone should justify the (sexual) tension between these principles. I don't, personally, make any distinction. 
Entry point URIs are exactly the same as any other URI; where the identified resource can be moved and subsequently indicated to clients with a 301 response. Is a URI that begins responding with a 301 to a new location considered 'uncool' as of that point? I think it's very cool to let clients know that! :) - Mike
Perhaps there is no need to distinguish "entry point" and "transitional" URIs. If you treat all URIs as transitional, who cares when the transition occurs (2ms or 200 years), provided the server plays nice and uses the appropriate means to communicate this to the client. If this is true then any concerns about client bookmarking may not be worth the bother. On Thu, Oct 1, 2009 at 11:38 AM, Nick Gall <nick.gall@...> wrote: > On Thu, Oct 1, 2009 at 11:20 AM, Mike Kelly <mike@...> wrote: > > If the inefficiency of redirected bookmarks was of potential concern to a > client, then a solution to that would be to react to a 301 Moved Permanently > response by updating the referring bookmark's URI with the value of the > response's Location header. > > Agreed. > > > It is my understanding that generating URIs from templates should be > avoided, and that leveraging HATEOAS from pre-established entry points (i.e. > "bookmarks") is the RESTful alternative. Any change to this entry point > (i.e. it responds with a 301) could be handled as described above. > > You are making the entry point URI vs. transitional URI distinction I > raised back in Feb: > > It seems that perhaps there is an implicit REST constraint that is > beginning to become more explicit. Roughly, REST distinguishes two types of > URIs: > > 1. "entry point" type URIs, which may be bookmarked indefinitely. These > are Cool URIs. > 2. "transitional" type URIs, which may not be bookmarked indefinitely. > These are unCool URIs. I call them "transitional" given that their role is > typically to enable transition to the next state. > > I'm not wedded to the names (you could call them internal/external), but I > do think this distinction between types of URIs is an important aspect of > REST that, so far, has not been clearly outlined. It certainly seems to be > in play in the permathread debates regarding whether "URIs should be RESTful > or not" (and whether that designation is even a meaningful one). 
I also > think the distinction is a bit at odds with the common understanding of URIs > on the Web that ANY URI should be a bookmarkable URI, ie that ALL URIs > should strive to be Cool. > > > Feel free to make such a distinction, but acknowledge that such a > distinction goes against the Web ethos that ALL URIs should be cool (ie > should not change) not just entry point URIs. > > In other words, REST seems to imply that only entry point URIs need be > cool, while Web Architecture implies that ALL URIs should be cool. Someone > should justify the (sexual) tension between these principles. > > -- Nick > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
Nick Gall wrote: > On Thu, Oct 1, 2009 at 8:54 AM, Mike Kelly <mike@...> wrote: > >> The creation of bookmark/URI dependency is unavoidable and their creation is always up to the client. Servers have the necessary mechanisms with HTTP by which to indicate that a resource has changed location - so the choice to accommodate this kind of 'URI dependent' client behavior for a given resource is down to the implementation >> > > If a server implements such a mechanism (a requirement of being cool), > then a client can legitimately depend on a bookmark or a URI template > and rely upon the server to map from the old URI space to the new one > (eg redirect). The only downside to the client in relying on a > bookmark or URI template is some loss in performance due to the > redirection. The redirection mechanism could provide a URI (via the > HTTP link header) to the new template or at least a document for > developers explaining the change and what they must do to update the > client code. > > -- Nick > If the inefficiency of redirected bookmarks was of potential concern to a client, then a solution to that would be to react to a 301 Moved Permanently response by updating the referring bookmark's URI with the value of the response's Location header. http://tools.ietf.org/html/rfc2616#section-10.3.2 "10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise." It is my understanding that generating URIs from templates should be avoided, and that leveraging HATEOAS from pre-established entry points (i.e. "bookmarks") is the RESTful alternative. Any change to this entry point (i.e. it responds with a 301) could be handled as described above. - Mike
Hullo Benjamin, I must admit I am having some trouble understanding the distinction you make between server state and application state. In principle I get the theoretical difference, but I think the examples you give don't necessarily illustrate the point and, in one case, confuse me. Taking Bill's credit card transaction example, if the client authorizes the server to charge the credit card and a record of that charge is created, this means that the number of resources on the server side will increase every time a client authorizes a charge to its credit card. What kind of state is this, application or service? Also, looking at your pessimistic locking example, it seems to me that the requirement to "clean up" locks on the server is not a necessary one. It should suffice to have the locks expire, and to have the server ignore the presence of all expired locks. It seems to me that this could be done in a manner that would not require the server to remember the state. Do you see why I am confused? In reading your post, it seems that the pessimistic lock example could be implemented in a style that would not break the guidelines you suggested. It also seems, in the case of a credit card transaction authorization, that storing it as a resource would simply be an attempt to convert application state into service state. I am struggling with this concept, and would value any input you (or anyone else) may have that would help to clarify this. Of course if the answer is simply "restful statelessness means no server side database dummy" then I completely understand. I hope that isn't the answer however. :) On Wed, Sep 30, 2009 at 6:06 PM, Benjamin Carlyle < benjamincarlyle@...> wrote: > > > Bill, > > Here are mine: http://soundadvice.id.au/blog/2009/06/13#stateless > > :) > > Benjamin. 
> > 2009/9/22 Bill Burke <bburke@...> > >> Here's my thoughts on the compatibility of Transactions and REST. >> Maybe >> now you can see where I am coming from. >> >> http://bill.burkecentral.com/2009/09/21/credit-cards-transactions-and-rest/ >> >> > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
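The expiring-lock idea Bediako raises (let locks lapse on their own and have the server simply ignore expired ones, so no cleanup pass is ever needed) can be sketched as a lease-based lock table. This is a hypothetical sketch, not anyone's actual implementation; the class name, TTL value, and injected clock are assumptions made for the example.

```python
# Sketch of pessimistic locks as leases: a lock is a (owner, expiry)
# pair, and "cleanup" is just the acquire check ignoring any entry
# whose lease has already lapsed. Names are illustrative.
import time

class LeaseLocks:
    def __init__(self, ttl_seconds=30.0, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock
        self._locks = {}   # resource -> (owner, expiry time)

    def acquire(self, resource, owner):
        now = self._clock()
        holder = self._locks.get(resource)
        if holder is not None and holder[1] > now and holder[0] != owner:
            return False   # a live lock is held by someone else
        # Expired (or own) locks are silently overwritten -- no cleanup job.
        self._locks[resource] = (owner, now + self._ttl)
        return True
```

The server never "remembers" stale state in any meaningful sense: an expired entry is indistinguishable from no entry at the only moment it matters, the next acquire.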
On Thu, Oct 1, 2009 at 9:01 AM, Bediako George <bediakogeorge@...> wrote: > > Perhaps there is no need to distinguish "entry point" and "transitional" > URIs. If you treat all URIs as transitional, who cares when the transition > occurs (2ms or 200 years), provided the server plays nice and uses > the appropriate means to communicate this to the client. > > If this is true then any concerns about client bookmarking may not be worth the bother. What this really points to is a hidden complexity requirement on the part of the client. It's a matter of robustness. A trivial client can use URIs via hard-coding, bookmarks, re-use, building from templates, or whatever other means it likes. It can ignore HATEOAS, making assumptions about work flow along the way. Arguably, much like many think REST is "pretty urls", or just RPC over HTTP, many also think writing a client in this way is perfectly normal and accepted. A better client will be able to at least follow any redirects that a robust server may send it, regardless of where it got its original URI. The "best" client will be able to take an entry point URL, leverage HATEOAS to perform its tasks, and even ideally record for posterity any 301 it gets and never hit the old URI again (including the entry point URL). But all of these make the client more and more complex. The simplest client can be a shell script doing some curl hacking following a strict, hardcoded recipe with little logic whatsoever. Of course an attraction of REST is that very simple clients can leverage the resources. They may not be robust, they may be brittle to change, but if the system is generally stable, for many cases, the client pretty much works. Similarly on the server side, a robust server is one that's sending well-designed links and references, operates HTTP politely, "knows" its old URLs so it can redirect to new URLs, etc. A simple server sends little more than 200s, 404s, and 500s. A simple server relies on URI construction, with no links. 
Long term, if the foundation is correct, a simple server can be reworked up into a more complete REST server by tweaking its payloads and including the proper links. Similarly, a simple client can grow in sophistication and become a "better" client. But you need a good server to have a good client. Trivial clients work well with robust servers and crummy servers. A robust client won't work as well with a trivial server, as the payloads lack the information it requires. In the end, the lifespan of the URIs is not important. An underlying assumption is simply that URIs WILL change, and there are mechanisms in place to deal with changing URIs. So the real question isn't URI lifespan, it's leveraging and responding well to the protocols already in place. Regards, Will Hartung (willh@...)
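The "best client" Will describes, one that starts from a single entry-point URI and reaches everything else through hypermedia, can be sketched as a follow-your-nose traversal. The link format here (a dict of link relation to URI) is an illustrative stand-in for a real hypermedia media type, and `fetch` is an assumed helper that dereferences a URI into a parsed representation.

```python
# Minimal "follow your nose" sketch: the client never constructs a
# URI; every URI after the entry point comes out of a representation.

def follow(fetch, entry_uri, rels):
    """Traverse a chain of link relations starting at entry_uri."""
    doc = fetch(entry_uri)
    for rel in rels:
        uri = doc["links"][rel]   # obtained from hypermedia, not built
        doc = fetch(uri)
    return doc
```

Because the client only depends on the relation names ("orders", "latest", ...), the server is free to reorganize its URI space at any time without breaking this client.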
Hello. Dilip posted a news item at InfoQ <http://www.infoq.com/news/2009/09/restlet-extension-micorosoft> about a RESTful bridge between Java and .NET ADO. It brought to my memory the Restlet project, something I came across when looking for REST-in-Java solutions. Mostly like a servlet, it provides semantics for several REST concepts. Still, I can't conceptually understand what the Restlet stands for. The documentation does not aid either, since the first pages I read do not explain much of it technically; they just tell you in big words what it does: support REST using Java. Let me explain: a servlet is a little process that is mapped to one or several URLs, and it usually creates, dynamically, a page to show. Now, what does the restlet represent? Is it a resource? Does it fit with the REST constraints? And more related to the post: the so-called bridge is just a class generator to access remote data using .NET ADO services, meaning I can access that data using the RESTful API created using the Restlet framework. I guess the .NET interface is good for a Java developer who creates the inner guts of a resource, but REST has no business there. Actually, I think it may not be good to think that using Restlet itself, we can "convert" the .NET ADO API to a RESTful API. What do you think? William Martinez.
--- In rest-discuss@yahoogroups.com, Benjamin Carlyle <benjamincarlyle@...> wrote: > I know how you feel. The architectures I build at the moment are all > real-time, so while REST is a template I have weakened statelessness > and essentially substituted cache for pub/sub. This is actually > changing over time and the architecture is just starting to come into > closer alignment with REST as developers begin to understand that even > in a real-time system poll+cache turns out to solve many problems > better. > Benjamin, I suggest you take a look at CCXML (http://www.w3.org/TR/ccxml/). It's a hypermedia format for interfacing with a call control system -- it essentially places a RESTful layer on top of a real-time, asynchronous event driven system. The model used might be something you might be able to borrow from even if the domain is different from what you are working on. The thing that might stand out is that the call control system is the client in the RESTful client-server relationship, which is backwards from what most folks might expect. This makes the call control system a "browser" that multiple services can be targeted to. A key reason it fits better as the client is that calls are a key component of the session state -- they essentially are media sessions. Even if you don't take a look, a key takeaway that's helped me is this: -- If you are working with a machine-to-machine system and find that you often need to break the statelessness constraint, then try reversing the roles of client and server and see if that helps. And if you do take a look and think you might want to define a similar state-machine-based hypermedia format, you should take a look at SCXML: http://www.w3.org/TR/scxml/ . It might be a good starting point. Regards, Andrew
On Oct 1, 2009, at 8:38 AM, Nick Gall wrote: > You are making the entry point URI vs. transitional URI distinction > I raised back in Feb: > > It seems that perhaps there is an implicit REST constraint that is > beginning to become more explicit. Roughly, REST distinguishes two > types of URIs: > "entry point" type URIs, which may be bookmarked indefinitely. > These are Cool URIs. > "transitional" type URIs, which may not be bookmarked indefinitely. > These are unCool URIs. I call them "transitional" given that their > role is typically to enable transition to the next state. No, REST has no such distinctions. They might occur in practice, but that is neither here nor there. REST does not distinguish them. > I'm not wedded to the names (you could call them internal/ > external), but I do think this distinction between types of URIs is > an important aspect of REST that, so far, has not been clearly > outlined. It certainly seems to be in play in the permathread > debates regarding whether "URIs should be RESTful or not" (and > whether that designation is even a meaningful one). I also think > the distinction is a bit at odds with the common understanding of > URIs on the Web that ANY URI should be a bookmarkable URI, ie that > ALL URIs should strive to be Cool. > > Feel free to make such a distinction, but acknowledge that such a > distinction goes against the Web ethos that ALL URIs should be cool > (ie should not change) not just entry point URIs. Umm, all of my URIs are cool *and* entry point URIs. TimBL's point was that cool URIs do not change, not that all URIs are cool. > In other words, REST seems to imply that only entry point URIs need > be cool, while Web Architecture implies that ALL URIs should be > cool. Someone should justify the (sexual) tension between these > principles. I still don't see where you are deriving these ideas. ....Roy
On Thu, Oct 1, 2009 at 11:53 AM, Mike Kelly <mike@...> wrote: > I don't, personally, make any distinction. Entry point URIs are exactly > the same as any other URI; where the identified resource can be moved > and subsequently indicated to clients with a 301 response. Good. Then we agree. > Is a URI that begins responding with a 301 to a new location considered > 'uncool' as of that point? I think it's very cool to let clients know > that! :) Agreed. But note such 301 behavior, reliably and systematically implemented, would enable a client to safely rely upon URI templates as long as the client did the right thing when use of such a template generated a 301. -- Nick
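Nick's point above, that reliable 301 behaviour lets a client safely use URI templates as long as it "does the right thing" when expansion hits a redirect, can be sketched as follows. The simple `{name}` substitution is an illustrative stand-in (not full RFC 6570 template processing), and `fetch` with its `(status, headers, body)` shape is an assumed helper.

```python
# Sketch: expand a URI template, but treat any 301 as the server
# telling us the URI space has moved, and adopt the new URI.

def resolve(fetch, template, params):
    uri = template
    for key, value in params.items():
        uri = uri.replace("{" + key + "}", value)
    status, headers, body = fetch(uri)
    if status == 301:
        uri = headers["Location"]   # do the right thing: follow the move
        status, headers, body = fetch(uri)
    return uri, status, body
```

A fuller client would also retire the stale template once a 301 is seen, e.g. by fetching a new template advertised by the server.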
On Thu, Oct 1, 2009 at 10:15 PM, Roy T. Fielding <fielding@...> wrote: > On Oct 1, 2009, at 8:38 AM, Nick Gall wrote: >> Feel free to make such a distinction, but acknowledge that such a >> distinction goes against the Web ethos that ALL URIs should be cool (ie >> should not change) not just entry point URIs. > > ...TimBL's point > was that cool URIs do not change, not that all URIs are cool. I have to disagree Roy. Clearly not all URIs are cool, but TimBL believes all URIs should be cool. I quoted from TimBL's Cool URIs design note <http://www.w3.org/Provider/Style/URI> above and I'll quote it again here: *It is the the duty of a [URI space designer] to allocate URIs which you will be able to stand by in 2 years, in 20 years, in 200 years. This needs thought, and organization, and commitment.* Nowhere in this note does he ever suggest that only SOME URIs need be cool. His design note clearly suggests (at least to me) that web designers and implementers should strive to make ALL URIs as cool as possible. Here's the closing sentence of the note: *The message here is, however, that many, many things can change and your URIs can and should stay the same. They only can if you think about how you design them.* Where do you get the idea that not all URIs need be or should be cool? (If I am understanding you correctly...) -- Nick
On Oct 1, 2009, at 4:19 AM, Nick Gall wrote: > If (i) means "organizing its URI space in a stable manner", ie so > that bookmarking URIs works over long periods of time, then the > practice of bookmarking is a violation of REST -- assuming that > REST means minimizing dependency on stable URIs and instead > navigating to potentially transient URIs via stable media types and > HATEOAS. Why would navigating imply transient URIs? There are times when the best RESTful design will conflict with UI guidelines (split-page processing) or browser limitations. The goal here is not to run away screaming whenever something doesn't match the ideal -- it is to recognize there is a problem and find a way to compensate for it (e.g., by changing the mode of interaction, using better clients, introducing code to compress multiform steps into a single network interaction, or just buying bigger machines). Minimizing dependencies does not mean that one cannot have a rich and well-defined URI space. Such dependencies can be programmatically adjusted as whole trees of resources just as easily as any single resource. However, that doesn't mean that REST implies one particular design of URI space is somehow "more RESTful" than some other design, which was the original point being argued. Aside from the legacy cache issues, any given organized structure is just as good as any other organized structure, and both are better than a disorganized structure. That's what we mean by "it just doesn't matter to REST how you design your URIs." IIRC, the complaint was about APIs that have a standard URI structure. The problem with that is not the structure, but rather the way that the structure is communicated to clients (by antiquated agreement rather than by following your nose). The follow-your-nose style has the built-in capacity for indirection and redirection at the time of the current interaction, whereas standard layouts encourage the client to bake-in assumptions that will inevitably break. 
That does not mean it isn't RESTful to have permanent URIs. REST is still dependent on at least one starting URI, and thus it is always going to be better for the URIs to be permanent. It just isn't necessary to structure the application around such permanence. Bookmarks are resource identifiers from a prior interaction. They are bread-crumbs. Sometimes they get eaten. REST doesn't enter into the discussion until one of the URIs is actively traversed and the application enters one of its initial states. After that point the user agent is led by the nose through the application. If the application is designed correctly, then the only times that the user agent will pause long enough to make a bookmark will be at one of the application steady states, which should correspond to one of the cool URIs. In other words, a RESTful architecture will expose the cool URIs (and only the cool URIs) to the user. REST cannot remove all the points of coupling. It can reduce those points of coupling to the set of URIs that an organization is willing to maintain over time for bookmarks (cool URIs). How many of those you have is entirely up to the organization. It is certainly possible for all of the URIs to be maintained. ....Roy
On Oct 1, 2009, at 8:43 PM, Nick Gall wrote: > On Thu, Oct 1, 2009 at 10:15 PM, Roy T. Fielding > <fielding@...> wrote: > > On Oct 1, 2009, at 8:38 AM, Nick Gall wrote: > >> Feel free to make such a distinction, but acknowledge that such a > >> distinction goes against the Web ethos that ALL URIs should be > cool (ie > >> should not change) not just entry point URIs. > > > > ...TimBL's point > > was that cool URIs do not change, not that all URIs are cool. > > I have to disagree Roy. Clearly not all URIs are cool, but TimBL > believes all URIs should be cool. I quoted from TimBL's Cool URIs > design note above and I'll quote it again here: > >> It is the duty of a [URI space designer] to allocate URIs >> which you will be able to stand by in 2 years, in 20 years, in 200 >> years. This needs thought, and organization, and commitment. >> > Nowhere in this note does he ever suggest that only SOME URIs need > be cool. His design note clearly suggests (at least to me) that > web designers and implementers should strive to make ALL URIs as > cool as possible. Here's the closing sentence of the note: > >> The message here is, however, that many, many things can change >> and your URIs can and should stay the same. They only can if you >> think about how you design them. >> > Where do you get the idea that not all URIs need be or should be > cool? (If I am understanding you correctly...) Umm, maybe the several hundred conversations I've had on the topic with TimBL in the room. Cool URIs are permanent, so if you want to be cool then make permanence a design criterion. That's all there is to it. Nobody is going to argue against too much URI permanence. There is certainly nothing about that in conflict with REST, so if you perceive a conflict then I suggest you look at your reasoning and kill the paper tiger. ....Roy
Hi William, Thanks for looking into Restlet (note it is written in lower case). Lots of good questions! I'm not sure if this list is the best place to discuss a specific framework (see our specific mailing lists: http://www.restlet.org/community/lists) but I'll reply here anyway: The org.restlet.Restlet class can be thought of as the equivalent of the javax.servlet.http.HttpServlet class indeed. It offers a much more complete and abstracted view of the underlying HTTP model than the Servlet world does, though (HTTP headers are mostly hidden, for example). The framework goes beyond Servlet by offering many subclasses such as Filter, Router, Redirector, Authenticator, etc. See this diagram from the tutorial: http://www.restlet.org/documentation/2.0/tutorial#conclusion Restlet is also both a client-side and server-side framework, with connectors for other (pseudo-)protocols than just HTTP and HTTPS, like SMTP, POP3, FILE, etc. See the complete features list: http://www.restlet.org/about/features When the Servlet API was designed, REST concepts weren't formalized and understood as they are today, so the mapping isn't perfect. In Restlet, each major REST concept has an equivalent class, such as the Connector, Component, and Representation classes or the Uniform Java interface. You might be interested in reading our introduction paper, which explains the genesis of the Restlet API and its relationship to the Servlet API: http://www.restlet.org/about/introduction Now, the equivalents of a REST resource in Restlet (version 2.0) are the ServerResource and ClientResource classes, extending a base UniformResource. They are the closest materialization of a specific REST resource you can find. Each instance of a ServerResource, for example, is associated with a specific target URI reference. This paper gives an example of resource development: http://www.restlet.org/documentation/2.0/firstResource Regarding our extension for ADO.NET Data Services, it has two parts. 
An optional generator mechanism (Generator class) and a runtime layer (Query and Session classes). This API uses ClientResource underneath to issue RESTful HTTP calls to a remote ADO.NET Data Services instance. In our interoperability scenario, Restlet is the client side and ADO.NET DS is the server side exposing the RESTful API (based on HTTP and Atom). I hope it clarifies things a bit! :) Best regards, Jerome Louvel -- Restlet ~ Founder and Lead developer ~ http://www.restlet.org Noelios Technologies ~ Co-founder ~ http://www.noelios.com William Martinez Pomares wrote: > > > Hello. > > Dilip posted a news item at InfoQ <http://www.infoq.com/news/2009/09/restlet-extension-micorosoft> about a > RESTful bridge between Java and .NET ADO. > It brought to my memory the Restlet project, something I came across > when looking for REST in Java solutions. > Mostly like a servlet, it provides semantics for several REST concepts. > Still, I can't conceptually understand what the Restlet stands for. > Documentation does not aid either, since the first pages I read do not > explain much of it technically, they just tell you in big words what it > does: support REST using Java. > > Let me explain: a servlet is a little process that is mapped to one or > several URLs, and it usually creates, dynamically, a page to show. Now, > what does the restlet represent? Is it a resource? Does it fit with the > REST constraints? > > And more related to the post: The so-called bridge is just a class > generator to access remote data using .NET ADO services, meaning I can > access that data using the RESTful API created using the Restlet > framework. I guess the .NET interface is good for a Java developer that > creates the inner guts of a resource, but REST has no business there. > Actually, I think it may not be good to think that using Restlet itself, > we can "convert" the .NET ADO API to a RESTful API. > > What do you think? > > William Martinez. > >
Thank you all for the excellent discussion and the suggestions about the confirmation URL vs. RESTful web services. I wrote a short summary of our discussion, including my implementation decisions: http://weblogs.java.net/blog/felipegaucho/archive/2009/10/02/pedantic-guide-restful-registration-use-case I hope you like it, Felipe Gaúcho -- Looking for a client application for this service: http://fgaucho.dyndns.org:8080/arena-http/wadl
On Thu, Oct 1, 2009 at 8:09 AM, Tim Williams <williamstw@...> wrote: > On Wed, Sep 30, 2009 at 4:06 PM, Roy T. Fielding <fielding@...> wrote: >> On Sep 30, 2009, at 12:56 PM, Tim Williams wrote: >> >>> Roy says[0], "A REST API should spend almost all of its descriptive >>> effort in defining the media type(s) used for representing resources >>> and driving application state, or in defining extended relation names >>> and/or hypertext-enabled mark-up for existing standard media types." >>> >>> I have situations where formats such as Atom are fairly well-suited to >>> represent my resources. If I re-use the format and keep the content >>> type "atom+xml", then client's wouldn't know my specific usage of atom >>> (e.g. semantics behind some of my "rel" attributes). >> >> Link relations are universal, independent of the format in >> which they are found, though perhaps only used with a target >> that is in a specific format (by practice, not by standard). >> The fact that some media type specifications also introduce >> new link relations does not make the relations specific to >> those media types. > > The more I look into link relations, the messier it gets. I'm > wondering if there's a mailing list/group/etc where discussions on > link relations take place? For example, I'm looking for an > appropriate link relation for: > > o) a project descriptor (DOAP) - it seems that rel=meta is in common > use but I haven't found a spec where it's defined yet. In partial answer to my own question, META seems defined here: http://www.w3.org/TR/relations.html --tim
Tim Williams wrote:
> In partial answer to my own question, META seems defined here:
>
> http://www.w3.org/TR/relations.html
>
So where (or if) do Links + URI Templates come in? Is it ok to define a
link relationship that is a template following RFC-xxx and to specify in
the link definition what template parameters are expected when following
the link template?
for example:
<link rel="CustomersByZip"
template="http://example.com/customers?zip={zipcode}" type="xml"/>
In the "CustomersByZip" definition, it says that "zipcode" is a parameter
you have to provide in the URL.
The URI remains mostly opaque, but the user is still providing input.
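[Editor's sketch, not from the original post: one way a client might consume Bill's proposed link template. The rel and template values are taken from his example above; the {var} substitution is a hand-rolled stand-in for a real URI Template implementation, and the "RFC-xxx" he refers to is left unresolved.]

```python
# Sketch: a client finds a link template in a representation and
# expands it by substituting a percent-encoded value for each {name}.
# Illustrative only; a real client should use a URI Template library.
import re
from urllib.parse import quote

def expand(template, **params):
    """Replace each {name} in the template with a percent-encoded value."""
    def substitute(match):
        name = match.group(1)
        if name not in params:
            raise KeyError("missing template parameter: " + name)
        return quote(str(params[name]), safe="")
    return re.sub(r"\{(\w+)\}", substitute, template)

# The link as served, per the example above:
link = {"rel": "CustomersByZip",
        "template": "http://example.com/customers?zip={zipcode}"}

# The "CustomersByZip" definition tells the client that "zipcode" is
# the one parameter it must provide:
uri = expand(link["template"], zipcode="12345")
# uri == "http://example.com/customers?zip=12345"
```

The URI stays opaque to the client except for the declared parameter, which is exactly the trade-off Bill describes.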
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
On Fri, Oct 2, 2009 at 9:12 AM, Bill Burke <bburke@...> wrote:
>
>
> So where (or if) do Links + URI Templates come in? Is it ok to define a
> link relationship that is a template following RFC-xxx and to specify in the
> link definition what template parameters are expected when following the
> link template?
>
> for example:
>
> <link rel="CustomersByZip"
> template="http://example.com/customers?zip={zipcode}" type="xml"/>
>
>
> In the "CustomersByZip" definition, it says that "zipcode" is a parameter you
> have to provide in the URL.
>
> The URI remains mostly opaque, but the user is still providing input.
Well, I'm just now fully grasping this
"link-relations-are-universal-and-independent-of-media-type" thing,
but a precedent has been set, I think. The OpenSearch spec[0]
introduced the rel=results link relation, which is essentially that.
I'm not yet smart enough to know if that's a Good Thing or not.
The more I learn about this, I'm gathering that the reality of the
current state of link-relations is a good bit behind Roy's universal
ideal. They're such an essential part of HATEOAS, I'm surprised by
the overall lack of discussion around them.
--tim
[0] - http://www.opensearch.org/Specifications/OpenSearch/1.1#The_.22Url.22_element
Using link doesn't seem all that natural to me. In HTML that question is
answered pretty simply:
<form action="http://example.com/customers"
method="GET" name="CustomersByZip">
<input type="text" name="zip" />
</form>
I personally find the syntax more expressive for the purpose of "actions".
The server expresses that this is an action vs. a document relationship via
two aspects
- <form action> clearly expresses that distinction better than <link>
- "name" seems to be a better fit than "rel"/relationship
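[Editor's sketch, not from the original post: Solomon's point that a form can drive a machine client, illustrated with Python's standard library. The form markup is his example above; the parsing approach is my own illustration.]

```python
# Sketch: a "machine" client consuming the HTML form above.
# It parses the form, fills in the one declared field, and builds the
# GET request URI -- no human rendering involved.
from html.parser import HTMLParser
from urllib.parse import urlencode

class FormParser(HTMLParser):
    """Collects <form> actions/methods and the names of their <input>s."""
    def __init__(self):
        super().__init__()
        self.forms = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self._current = {"action": a.get("action"),
                             "method": a.get("method", "GET").upper(),
                             "fields": []}
            self.forms[a.get("name")] = self._current
        elif tag == "input" and self._current is not None:
            self._current["fields"].append(a.get("name"))

markup = '''<form action="http://example.com/customers"
method="GET" name="CustomersByZip">
<input type="text" name="zip" />
</form>'''

parser = FormParser()
parser.feed(markup)
form = parser.forms["CustomersByZip"]
# The server has declared its expectation (one "zip" field) in-band:
uri = form["action"] + "?" + urlencode({"zip": "12345"})
```

The ACTION, METHOD, NAME, and expected fields are all read off the representation at run time, which is the in-band self-description Solomon is arguing for.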
-Solomon
On Fri, Oct 2, 2009 at 9:12 AM, Bill Burke <bburke@...> wrote:
>
>
> So where (or if) do Links + URI Templates come in? Is it ok to define a
> link relationship that is a template following RFC-xxx and to specify in
> the link definition what template parameters are expected when following
> the link template?
Tim,
I fully agree with you here. I keep taking a look at HTML (which Roy was
and is heavily involved with) for inspiration as a key component in moving
the universal aspects of HATEOAS forward. While I don't think that HTML is
perfect for data oriented APIs, it seems to have quite a few better
ingredients than atom in relation to hypertext description.
I hope that this kind of discussion does continue.
-Solomon
On Fri, Oct 2, 2009 at 9:27 AM, Tim Williams <williamstw@...> wrote:
>
> The more I learn about this, I'm gathering that the reality of the
> current state of link-relations is a good bit behind Roy's universal
> ideal. They're such an essential part of HATEOAS, I'm surprised by
> the overall lack of discussion around them.
>
> --tim
On Fri, Oct 2, 2009 at 9:48 AM, Bill Burke <bburke@...> wrote: > > > Solomon Duskis wrote: >> >> Using link doesn't seem all that natural to me. In HTML that question is >> answered pretty simply: >> >> <form action="http://example.com/customers" method="GET" name="CustomersByZip"> >> <input type="text" name="zip" /> >> </form> >> > > I've said this before when you posted this idea...But... > > <form> is rendering metadata meant to help a *Human Being* make a decision. > Machine based clients are already going to know how to fill out the "form" > ahead of time so the rendering information isn't needed when transmitting > representations. Only the link (or link template) is interesting to a > machine based client. I'm not sure; I think the form solution is valid when the server has numerous, but constrained, permutations that are allowed. Someone responded to one of my first questions to this list with forms as a good solution to a search problem. For example, a server might support the following variations of "next states" from a page of results: - Pages (1-N) - allow jumping to any one. - Sort criteria - a finite list of sortable fields. - Facet - a finite list of filtering facets based on a result set. - etc. This is easily handled with a form (with constrained option and boolean semantics built in), not so much with either URIs or URI Templates. Of course, when the variable data is just an unconstrained string, I suppose either is good. Overall, I don't think forms are limited to "rendering metadata". --tim
On Fri, Oct 2, 2009 at 12:21 AM, Roy T. Fielding <fielding@...> wrote: >> Where do you get the idea that not all URIs need be or should be cool? (If I am understanding you correctly...) > > Umm, maybe the several hundred conversations I've had on > the topic with TimBL in the room. Cool URIs are permanent, > so if you want to be cool then make permanence a design > criterion. That's all there is to it. Agreed. > Nobody is going to > argue against too much URI permanence. There is certainly > nothing about that in conflict with REST, so if you perceive > a conflict then I suggest you look at your reasoning and > kill the paper tiger. I'm glad to hear you confirm that there is no real conflict between URI permanence and REST. I'm also glad to hear you confirm that there is no real conflict between designs that depend on URI permanence and REST, e.g. out-of-band URI templates. (Which is how I read your other reply <http://tech.groups.yahoo.com/group/rest-discuss/message/13606>.) While others may use the word "conflict", for the record, I don't believe I used the word "conflict" in this thread -- I used the word "tension". And I quoted an email of yours <http://tech.groups.yahoo.com/group/rest-discuss/message/12101> from back in February that seemed to indicate that you did not completely disagree with the "tension" characterization: *If there is a tension between the desire to bookmark and the fact that REST encourages folks to break up an application into a state machine of reusable resource states, then I would consider it to be more like sexual tension. Just because you have it doesn't mean it is bad, and one way to improve things is to make the more important resource links look sexier than the less important ones.* I suppose the fundamental tension here (and perhaps in sexual tension as well -- who knows) is the tension between the desire for permanence and stability vs. the desire for adaptability and change. -- Nick
Sure, HTML forms are usually entered by a human, but there's nothing
inherently part of the <form> elements that I selected that requires human
intervention. I do concede that HTML elements include "checkbox" and
"radiobutton", but those don't have to be used in a <form> element meant for
machine usage. It wouldn't be all that difficult to come up with a set of
<form> elements that would be better suited for machine interactions.
The main points in favor of HTML forms:
- The server wants a specific ACTION (rather than a document
relationship)
- The server expects a specific HTTP METHOD (rather than using out of
band information to guess the method)
- The server gives a NAME for the action (rather than a relationship type
for the action)
- The server gives a DETAILED BREAKDOWN of what it expects, potentially
including constraints
The main detractions of <link> + "rel":
- ATOM's <link> generally indicates a relationship between documents.
rest-discuss has tried to overcome quite a few issues related to ACTIONS.
- rel stands for relationship, right? "rel" would have to define how a
universal ACTION like CustomersByZip relates to the current document, not
what the action is
I'm not going to give up on this issue.
-Solomon
On Fri, Oct 2, 2009 at 9:48 AM, Bill Burke <bburke@...> wrote:
>
>
> I've said this before when you posted this idea...But...
>
> <form> is rendering metadata meant to help a *Human Being* make a decision.
> Machine based clients are already going to know how to fill out the "form"
> ahead of time so the rendering information isn't needed when transmitting
> representations. Only the link (or link template) is interesting to a
> machine based client.
>
> I don't know what the convention is, but links can and do provide URLs for
> their description. That description URL, is, IMO the appropriate place for
> "form" metadata.
>
> Well, at least, that is my theory on how things might or should work....
>
>
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
So are you suggesting that <links> should be used because of "out of band" information found during the development phase? -Solomon On Fri, Oct 2, 2009 at 10:11 AM, Bill Burke <bburke@...> wrote: > I'm not sure you understood my point. My point was not to say URI > templates are better than forms. > > My point was that transmitting a "form" (quotes) is usually not useful to a > machine-based client as the "discovery" phase happened when the programmer > coded the client. > > Personally, I don't like the <link> or Link header format. IMO it should > be something like: > > <link name="..." description="http:/..." href="http://..." type=""/> > > With description being an explicit URL. The client could ask for a > specific rendering media type. In the current format (correct me if I'm > wrong) it seems like the "rel" attribute is overloaded. > > Solomon Duskis wrote: > >> Sure, HTML forms are usually entered by a human, but there's nothing >> inherently part of the <form> elements that I selected that requires human >> intervention. I do concede that HTML elements include "checkbox" and >> "radiobutton", but those don't have to be used in a <form> element meant for >> machine usage. It wouldn't be all that difficult to come up with a set of >> <form> elements that would be better suited for machine interactions >> The main point in favor of HTML forms: >> >> * The server wants a specific ACTION (rather than a document >> relationship) >> * The server expects a specific HTTP METHOD (rather than using out >> of band information to guess the method) >> * The server gives a NAME for the action (rather than a relationship >> type for the action) >> * The server gives a DETAILED BREAKDOWN of what it expects, >> potentially including constraints >> >> >> The main detractions of <link> + "rel": >> >> * ATOM's <link> generally indicates a relationship between >> documents. rest-discuss has tried to overcome quite a few issues >> related to ACTIONS. 
>> * rel stands for relationship, right? "rel" would have to define >> how a universal ACTION like CustomersByZip relates to the current >> document, not what the action is >> >> I'm not going to give up on this issue. >> >> -Solomon >> > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
On Fri, Oct 2, 2009 at 10:11 AM, Bill Burke <bburke@...> wrote: > I'm not sure you understood my point. My point was not to say URI templates > are better than forms. > > My point was that transmitting a "form" (quotes) is usually not useful to a > machine-based client as the "discovery" phase happened when the programmer > coded the client. > > Personally, I don't like the <link> or Link header format. IMO it should be > something like: > > <link name="..." description="http:/..." href="http://..." type=""/> > > With description being an explicit URL. The client could ask for a specific > rendering media type. In the current format (correct me if I'm wrong) it > seems like the "rel" attribute is overloaded. What do you mean by "overloaded"? It seems that's it used consistently for one purpose (to indicate the relationship between the "current" URI and another URI) since inception. I think it's confusing only because they've been introduced through media types instead of some separate, independent, mechanism though. It seems important enough to me that link relations should be first-class "things" just like the media formats themselves. --tim
Kristian Nordal wrote: > Hi, > > On Oct 2, 2009, at 12:25 AM, Bediako George wrote: > >> >> >> Hullo Benjamin, >> >> I must admit I am having some trouble understanding the distinction >> you make between server state and application state. In principle I >> get the theoretical difference, but I think the examples you give >> don't necessarily illustrate the point, and in one case confuse me. > > I'm also struggling with the difference between application state and > server state (which I assume is the same as "resource state"). Can > someone point me to a good definition of "application state"? > > Will some kinds of state never stop being "application state", no matter > how or where they're stored? If I were to move, for instance, typical session > state into its own resources, and treat those resources as any regular > resource in my application - will those resources for some definitions > of state still be application state (and a violation of the stateless > constraint)? Or does the fact that I've re-modelled it as resources make > it resource state? > Yeah, somebody will have to explain to me why (or if) the Reservation example I gave breaks the stateless constraint of REST. Where I think it doesn't break the constraint is that instead of storing a specific "view" of a resource for a specific client (like the Richardson/Ruby O'Reilly book example on transactions), the state change is modeled as a resource in and of itself. A Reservation still has a lot of meaning to clients other than the Travel Agent. Also, whether or not the Reservation has been fulfilled is a valid state of the resource. Just because I chose to model that state with a specific media type (a generic transactional one) shouldn't matter IMO, as it's an implementation detail. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I'm not sure you understood my point. My point was not to say URI templates are better than forms. My point was that transmitting a "form" (quotes) is usually not useful to a machine-based client as the "discovery" phase happened when the programmer coded the client. Personally, I don't like the <link> or Link header format. IMO it should be something like: <link name="..." description="http:/..." href="http://..." type=""/> With description being an explicit URL. The client could ask for a specific rendering media type. In the current format (correct me if I'm wrong) it seems like the "rel" attribute is overloaded. Solomon Duskis wrote: > Sure, HTML forms are usually entered by a human, but there's nothing > inherently part of the <form> elements that I selected that requires > human intervention. I do concede that HTML elements include "checkbox" > and "radiobutton", but those don't have to be used in a <form> element > meant for machine usage. It wouldn't be all that difficult to come up > with a set of <form> elements that would be better suited for machine > interactions > > The main point in favor of HTML forms: > > * The server wants a specific ACTION (rather than a document > relationship) > * The server expects a specific HTTP METHOD (rather than using out > of band information to guess the method) > * The server gives a NAME for the action (rather than a relationship > type for the action) > * The server gives a DETAILED BREAKDOWN of what it expects, > potentially including constraints > > > The main detractions of <link> + "rel": > > * ATOM's <link> generally indicates a relationship between > documents. rest-discuss has tried to overcome quite a few issues > related to ACTIONS. > * rel stands for relationship, right? "rel" would have to define > how a universal ACTION like CustomersByZip relates to the current > document, not what the action is > > I'm not going to give up on this issue. 
> > -Solomon > > On Fri, Oct 2, 2009 at 9:48 AM, Bill Burke <bburke@... > <mailto:bburke@...>> wrote: > > > > Solomon Duskis wrote: > > Using link doesn't seem all that natural to me. In HTML that > question is answered pretty simply: > > <form action="http://example.com/customers > <http://example.com/customers?zip=>" method="GET" > name="CustomersByZip"> > > <input type="text" name="zip" /> > </form> > > > I've said this before when you posted this idea...But... > > <form> is rendering metadata meant to help a *Human Being* make a > decision. Machine based clients are already going to know how to > fill out the "form" ahead of time so the rendering information isn't > needed when transmitting representations. Only the link (or link > template) is interesting to a machine based client. > > I don't know what the convention is, but links can and do provide > URLs for their description. That description URL, is, IMO the > appropriate place for "form" metadata. > > Well, at least, that is my theory on how things might or should work.... > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
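Bill's argument can be sketched in code. In the following fragment the link format, attribute names, and URIs are his proposal plus invented examples, not any standard; the point is that a machine client baked the link *name* in at development time and only needs the href at run time, deferring to the description URL if it ever wants form metadata:

```python
import xml.etree.ElementTree as ET

# A representation carrying a link in the format Bill proposes
# (attribute names are his suggestion; URIs are invented).
doc = """
<customer>
  <name>ACME</name>
  <link name="CustomersByZip"
        description="http://example.com/docs/customers-by-zip"
        href="http://example.com/customers"
        type="application/xml"/>
</customer>
"""

def find_link(xml_text, name):
    """The machine client already knows the link name from the
    development-phase "discovery"; only the href matters at run time."""
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.get("name") == name:
            return link.attrib
    return None

link = find_link(doc, "CustomersByZip")
print(link["href"])         # all the client needs for the next request
print(link["description"])  # dereferenced only if form metadata is wanted
```

Solomon's counter-position is that the form metadata should travel inline instead of behind a description URL.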
I don't think that we're far off, but we can do better. Specifically, better in the sense that the media type should be more "self-descriptive" with respect to the run-time documentation of the server's communication expectations. The REST constraints aim to create dynamic and serendipitous behavior. IMHO, slightly more inline communication information will be more effective in achieving those results. -Solomon On Fri, Oct 2, 2009 at 10:37 AM, Bill Burke <bburke@...> wrote: > > > Solomon Duskis wrote: > >> So are you suggesting that <links> should be used because of "out of band" >> information found during the development phase? >> >> > Yes, exactly. If a client needs form metadata, it will ask for it. From > what I've read on rest-discuss, the definition of the link relationship and > the media type together is supposed to tell the user how to interact with > the href. Am I right? > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
Hi, On Oct 2, 2009, at 12:25 AM, Bediako George wrote: > > > Hullo Benjamin, > > I must admit I am having some trouble understanding the distinction > you make between server state and application state. In principle I > get the theoretical difference, but I think the examples you give > don't necessarily illustrate the point, and in one case confuses me. I'm also struggling with the difference between application state and server state (which I assume is the same as "resource state"). Can someone point me to a good definition of "application state"? Will some kinds of state never stop being "application state", no matter how or where it's stored? If I were to move for instance typical session state into its own resources, and treat those resources as any regular resource in my application - will those resources for some definitions of state still be application state (and a violation of the stateless constraint)? Or does the fact that I've re-modelled it as resources make it resource state? > Taking Bill's credit card transaction example, if the client > authorizes the server to charge the credit card and a record of that > charge is created, this means that the number of resources on the > server side will increase every time a client authorizes a charge to > its credit card. What kind of state is this, application or service? > > Also looking at your pessimistic locking example, it seems to me > that the requirement to "clean up" locks on the server is not a > necessary requirement. It should suffice to have the locks expire, > and to have the server ignore the presence of all expired locks. It > seems to me that this could be done in a manner that would not > require the server to remember the state. > > Do you see why I am confused? In reading your post, it seems that > the pessimistic lock example could be implemented in a style that > would not break the guidelines you suggested. 
It also seems that in > the case of a credit card transaction authorization, according to > your example, storing it as a resource would simply be an > attempt to convert application state into service state. > > I am struggling with this concept, and would value any input you (or > anyone else) may have that would help to clarify this. > > Of course if the answer is simply "restful statelessness means no > server side database dummy" then I completely understand. I hope > that isn't the answer however. :) > > On Wed, Sep 30, 2009 at 6:06 PM, Benjamin Carlyle <benjamincarlyle@... > > wrote: > > Bill, > > Here are mine: http://soundadvice.id.au/blog/2009/06/13#stateless > > :) > > Benjamin. > > > 2009/9/22 Bill Burke <bburke@...> > > Here's my thoughts on the compatibility of Transactions and REST. > Maybe > now you can see where I am coming from. > http://bill.burkecentral.com/2009/09/21/credit-cards-transactions-and-rest/ > -- Cheers, Kristian
Tim Williams wrote: > On Fri, Oct 2, 2009 at 10:11 AM, Bill Burke <bburke@...> wrote: >> I'm not sure you understood my point. My point was not to say URI templates >> are better than forms. >> >> My point was that transmitting a "form" (quotes) is usually not useful to a >> machine-based client as the "discovery" phase happened when the programmer >> coded the client. >> >> Personally, I don't like the <link> or Link header format. IMO it should be >> something like: >> >> <link name="..." description="http:/..." href="http://..." type=""/> >> >> With description being an explicit URL. The client could ask for a specific >> rendering media type. In the current format (correct me if I'm wrong) it >> seems like the "rel" attribute is overloaded. > > What do you mean by "overloaded"? It seems that it's used > consistently for one purpose (to indicate the relationship between the > "current" URI and another URI) since inception. I think it's > confusing only because they've been introduced through media types > instead of some separate, independent, mechanism though. It seems > important enough to me that link relations should be first-class > "things" just like the media formats themselves. > No, I mean I've seen others do this: <link rel="foo http://blah.com/..."/> A mix of name and URL in one string for the "rel" attribute. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Solomon Duskis wrote: > Using link doesn't seem all that natural to me. In HTML that question > is answered pretty simply: > > <form action="http://example.com/customers" method="GET" name="CustomersByZip"> > <input type="text" name="zip" /> > </form> > I've said this before when you posted this idea...But... <form> is rendering metadata meant to help a *Human Being* make a decision. Machine based clients are already going to know how to fill out the "form" ahead of time so the rendering information isn't needed when transmitting representations. Only the link (or link template) is interesting to a machine based client. I don't know what the convention is, but links can and do provide URLs for their description. That description URL is, IMO, the appropriate place for "form" metadata. Well, at least, that is my theory on how things might or should work.... -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Solomon Duskis wrote: > So are you suggesting that <links> should be used because of "out of > band" information found during the development phase? > Yes, exactly. If a client needs form metadata, it will ask for it. From what I've read on rest-discuss, the definition of the link relationship and the media type together is supposed to tell the user how to interact with the href. Am I right? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal <kristian.nordal@...> wrote: > I'm also struggling with the difference between application state and > server state (which I assume is the same as "resource state"). Can > someone point me to a good definition of "application state"? It's literally the *state* of the *application*. If you're looking at your bank balance, that's a different state than if you were preparing to submit a bill payment, and once you've submitted the payment, you're in yet another state in the application state machine. Mark.
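Mark's bank example amounts to a state machine that the client walks through. A toy sketch (the state names, rel names, and transitions here are invented to illustrate the definition, not any real banking API):

```python
# Application state = the client's current position in a state machine,
# advanced by "following a link" (identified here by a rel name).

TRANSITIONS = {
    "viewing-balance":   {"pay-bill": "preparing-payment"},
    "preparing-payment": {"submit": "payment-submitted",
                          "cancel": "viewing-balance"},
    "payment-submitted": {},  # a terminal state in this toy protocol
}

def follow(state, rel):
    """Follow the transition named `rel`, yielding the next state."""
    try:
        return TRANSITIONS[state][rel]
    except KeyError:
        raise ValueError("no %r transition from %r" % (rel, state))

state = "viewing-balance"
state = follow(state, "pay-bill")   # now preparing to submit a payment
state = follow(state, "submit")     # now in yet another state
print(state)  # payment-submitted
```

The "stateless" constraint says the *server* does not track where each client is in this machine; each request carries enough context on its own.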
On Oct 2, 2009, at 4:05 PM, Nick Gall wrote: > I suppose the fundamental tension here (and perhaps in sexual > tension as well -- who knows) is the tension between the desire for > permanence and stability vs. the desire for adaptability and change. > IMO, that's not the point. There are steady states and transient states. The application should be designed such that clients/UAs don't need to store URIs for those transient states. Not every URI can be permanent. Some URIs may, by design, remain "uncool". Subbu
On Fri, Oct 2, 2009 at 3:01 PM, Subbu Allamaraju <subbu@...> wrote: > There are steady states and transient states. The > application should be designed such that clients/UAs don't need to store > URIs for those transient state. Not every URI can be permanent. Some URIs > may, by design, remain "uncool". So maybe we can use URI templates for the cool URIs (my photos, my blog posts, the books that Amazon offers, the songs that BBC radio plays) and HATEOAS (rel links in representations) for the uncool ones. -- Nick -- Nick Gall Phone: +1.781.608.5871 AOL IM: Nicholas Gall Yahoo IM: nick_gall_1117 MSN IM: (same as email) Google Talk: (same as email) Email: nick.gall AT-SIGN gmail DOT com Weblog: http://ironick.typepad.com/ironick/
On Oct 2, 2009, at 9:17 PM, Nick Gall wrote: >> There are steady states and transient states. The >> application should be designed such that clients/UAs don't need to >> store >> URIs for those transient state. Not every URI can be permanent. >> Some URIs >> may, by design, remain "uncool". > > So maybe we can use URI templates for the cool URIs (my photos, my > blog posts, the books that Amazon offers, the songs that BBC radio > plays) and HATEOAS (rel links in representations) for the uncool ones. I wouldn't draw a line like that, especially for the uncool ones. Looking at this thread so far, it seems to me that both TBL's position (that every URI is permanent), and Roy's HATEOAS have been over-analyzed. The reality is a mix. Even the cool ones become uncool, and we can't continue to ignore that and preach coolness as a virtue. What is important is to build applications that take these into consideration and stay resilient. Subbu
On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: > Yeah, somebody will have to explain to me why (or if) the Reservation > example I gave breaks the stateless constraint of REST. Where I think Well - under the disguise of a "transaction", the server is maintaining per-client state. Instead of answering your question directly, let me ask you whether you have examined the scalability characteristics of your proposed design. It may be worthwhile to start from basics, apply each constraint one by one, and see whether your approach benefits. The kinds of resources needed, media types, link rels, and link headers vs. link elements in some XML format are implementation details. So far, this post does not make a case for why a transactional application should be built the way you propose. Here is a minor point. There is no such thing as a "sub-resource". That term may be part of the mental model of some developers, but it has no consequence to the protocol. Subbu
On Oct 2, 2009, at 4:02 PM, Solomon Duskis wrote: > • rel stands for relationship, right? "rel" would have to define > how a universal ACTION like CustomersByZip relates to the current > document, not what the action is > Not so. As [1] tries to define, "a link relation type identifies the *semantics* of a link". A well-specified link relation will have to specify everything about the link. For application specific relations (as opposed to general purpose ones like "next" and "prev"), specifying semantics is even more important. In the absence of such semantics, links become meaningless. Subbu [1] http://tools.ietf.org/html/draft-nottingham-http-link-header
On Oct 2, 2009, at 12:17 PM, Nick Gall wrote: > On Fri, Oct 2, 2009 at 3:01 PM, Subbu Allamaraju <subbu@...> > wrote: >> There are steady states and transient states. The >> application should be designed such that clients/UAs don't need to >> store >> URIs for those transient state. Not every URI can be permanent. >> Some URIs >> may, by design, remain "uncool". > > So maybe we can use URI templates for the cool URIs (my photos, my > blog posts, the books that Amazon offers, the songs that BBC radio > plays) and HATEOAS (rel links in representations) for the uncool ones. Nick, you can use link templates in hypertext if you have a standard for specifying them (e.g., Link-Template headers or media type definitions). Computed links predate the Web. No, I don't use that idiotic acronym for the hypertext constraint. ....Roy
--- In rest-discuss@yahoogroups.com, Mark Baker <distobj@...> wrote: > > On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal > <kristian.nordal@...> wrote: > > I'm also struggling with the difference between application state and > > server state (which I assume is the same as "resource state"). Can > > someone point me to a good definition of "application state"? > > It's literally the *state* of the *application*. If you're looking at > your bank balance, that's a different state than if you were preparing > to submit a bill payment, and once you've submitted the payment, > you're in yet another state in the application state machine. > > Mark. > Just to add to Mark's definition, and put it in the context of "application" and "application protocol": if we think of an application as being computer behavior that achieves a particular goal, we can describe an application protocol as the specification of the legitimate interactions necessary to realize that behavior, and application state as a snapshot of the instance of execution of an application protocol. ian
Thanks for the reply Subbu, Would it not be the case that most data created by clients would for the most part be "per-client" in nature? For instance if, for the first time ever, I buy a book on Amazon, there are many resources that will be created because of that transaction. In your opinion, will the order, or the credit card authorization, or even my mailing address be considered "per-client" state? On Fri, Oct 2, 2009 at 3:46 PM, Subbu Allamaraju <subbu@...> wrote: > > On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: > > Yeah, somebody will have to explain to me why (or if) the Reservation >> example I gave breaks the stateless constraint of REST. Where I think >> > > Well - under the disguise of a "transaction", the server is maintaining > per-client state. In stead of answering your question directly, let me ask > you whether have you examined the scalability characteristics of your > proposed design. > > It may be worthwhile to start from basics, apply each constraint one by > one, and whether your approach benefits. The discussion around the kind of > resources needed, media types, link rels, link headers vs link elements in > some XML format are implementation details. So far, this post does not make > a case of why an transactional application should be built the way you > propose. > > Here is a minor point. There is no such thing as a "sub-resource". That > term may be part of the mental model of some developers, but it has no > consequence to the protocol. > > Subbu > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
2009/10/2 Subbu Allamaraju <subbu@...> > > On Oct 2, 2009, at 4:02 PM, Solomon Duskis wrote: > > > • rel stands for relationship, right? "rel" would have to define > > how a universal ACTION like CustomersByZip relates to the current > > document, not what the action is > > Not so. As [1] tries to define, "a link relation type identifies the > *semantics* of a link". A well-specified link relation will have to > specify everything about the link. For application specific relations > (as opposed to general purpose ones like "next" and "prev"), > specifying semantics is even more important. In the absence of such > semantics, links become meaningless. > For "application specific relations" it's best to use URI relations (e.g. rel="http://pubsubhubbub.org/") as you then "own" the relation and can lock it down to certain protocols, content types, etc. Registered relations (the Web Linking draft creates a new common IANA registry) should only be used where they are "broadly useful", such as for "first", "last", "related", "alternate". Accordingly registering "hub" for PubSubHubbub is looking like being rejected because it doesn't help and could actually hurt interoperability (what's a client to do if it encounters "hub" but both rssCloud and PubSubHubbub are using it?). As content types aren't really useful for describing protocols, it really doesn't make much sense to try to use them with the relation type as a "compound key" of sorts for identification purposes. Sam
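The registered-vs-URI distinction Sam describes shows up directly in Link header values. Below is a simplified sketch (not an RFC-grade parser: it assumes no commas or semicolons inside quoted strings, and the example header values are invented):

```python
# Distinguish registered relation tokens ("next") from application-
# specific URI relations, using the Link header syntax from
# draft-nottingham-http-link-header.

def parse_link_header(value):
    """Return a list of (target, rel) pairs from a Link header value."""
    links = []
    for part in value.split(","):
        segments = part.split(";")
        target = segments[0].strip().lstrip("<").rstrip(">")
        rel = None
        for param in segments[1:]:
            name, _, val = param.strip().partition("=")
            if name.strip().lower() == "rel":
                rel = val.strip().strip('"')
        links.append((target, rel))
    return links

header = ('</feed?page=2>; rel="next", '
          '</hub>; rel="http://pubsubhubbub.org/"')
for target, rel in parse_link_header(header):
    # A URI relation is "owned" by whoever minted it; a short token
    # must come from the common registry and be broadly useful.
    kind = "uri-relation" if rel.startswith("http") else "registered"
    print(target, rel, kind)
```

A client that doesn't recognize a URI relation can simply ignore it, which is part of why URI relations avoid the "hub" collision problem Sam mentions.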
On Fri, Oct 2, 2009 at 3:38 PM, Subbu Allamaraju <subbu@...> wrote: > > On Oct 2, 2009, at 9:17 PM, Nick Gall wrote: > >>> There are steady states and transient states. The >>> application should be designed such that clients/UAs don't need to store >>> URIs for those transient state. Not every URI can be permanent. Some URIs >>> may, by design, remain "uncool". >> >> So maybe we can use URI templates for the cool URIs (my photos, my >> blog posts, the books that Amazon offers, the songs that BBC radio >> plays) and HATEOAS (rel links in representations) for the uncool ones. > > I wouldn't draw a line like that, especially for the uncool ones. Why not? > Looking at this thread so far, it seems to me that both TBL's position (that > every URI is permanent), and Roy's HATEOAS have been over-analyzed. The > reality is a mix. Merely saying "it's a mix" isn't helpful. Without some analysis and subsequent advice on issues such as... * how to design cool URIs * how the client and server must work together to achieve coolness * which URLs should be cool and which not * what mechanisms can one use to generate cool URIs vs. mechanisms for uncool URIs ...how can any fledgling REST developer design RESTful systems? > Even the cool ones become uncool, and we can't continue to ignore that and > preach coolness as a virtue. We can and should continue to preach coolness as a virtue, while acknowledging the reality of how difficult it is to be cool (as I think TimBL's design note does) and what to do when coolness fails. > What is important is to build applications that > take these into consideration and stay resilient. Agreed. I'm just trying to aid in the analysis of what constitutes such resilience. -- Nick
Of course. But I don't think "transaction resources" that Bill is describing are similar to credit card authorizations or purchase orders. The example that Bill outlined involves transient per client state. Marking such state as permanent resources is certainly possible, but towards what end? To prove that such things can be done RESTfully? Subbu On Oct 2, 2009, at 10:58 PM, Bediako George wrote: > Thanks for the reply Subbu, > > Would it not be the case that most data created by clients would for > the most part be "per-client" in nature? > > For instance if, for the first time ever, I buy a book on Amazon, > there are many resources that will be created because of that > transaction. In your opinion, will the order, or the credit card > authorization, or even my mailing address be considered "per-client" > state? > > On Fri, Oct 2, 2009 at 3:46 PM, Subbu Allamaraju <subbu@...> > wrote: > > On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: > > Yeah, somebody will have to explain to me why (or if) the Reservation > example I gave breaks the stateless constraint of REST. Where I think > > Well - under the disguise of a "transaction", the server is > maintaining per-client state. In stead of answering your question > directly, let me ask you whether have you examined the scalability > characteristics of your proposed design. > > It may be worthwhile to start from basics, apply each constraint one > by one, and whether your approach benefits. The discussion around > the kind of resources needed, media types, link rels, link headers > vs link elements in some XML format are implementation details. So > far, this post does not make a case of why an transactional > application should be built the way you propose. > > Here is a minor point. There is no such thing as a "sub-resource". > That term may be part of the mental model of some developers, but it > has no consequence to the protocol. 
> > Subbu > > > > -- > Bediako George > Partner - Lucid Technics, LLC > Think Clearly, Think Lucid > www.lucidtechnics.com > (p) 202.683.7486 (f) 703.563.6279
Yes, as Sec 4.2 describes. On Oct 2, 2009, at 11:06 PM, Sam Johnston wrote: > 2009/10/2 Subbu Allamaraju <subbu@...> > > On Oct 2, 2009, at 4:02 PM, Solomon Duskis wrote: > > > • rel stands for relationship, right? "rel" would have to > define > > how a universal ACTION like CustomersByZip relates to the current > > document, not what the action is > > Not so. As [1] tries to define, "a link relation type identifies the > *semantics* of a link". A well-specified link relation will have to > specify everything about the link. For application specific relations > (as opposed to general purpose ones like "next" and "prev"), > specifying semantics is even more important. In the absence of such > semantics, links become meaningless. > > For "application specific relations" it's best to use URI relations > (e.g. rel="http://pubsubhubbub.org/") as you then "own" the relation > and can lock it down to certain protocols, content types, etc. > > Registered relations (the Web Linking draft creates a new common > IANA registry) should only be used where they are "broadly useful", > such as for "first", "last", "related", "alternate". Accordingly > registering "hub" for PubSubHubbub is looking like being rejected > because it doesn't help and could actually hurt interoperability > (what's a client to do if it encounters "hub" but both rssCloud and > PubSubHubbub are using it?). > > As content types aren't really useful for describing protocols it > really doesn't make much sense to try to use it with the relation > type as a "compound key" of sorts for identification purposes. > > Sam >
On Oct 2, 2009, at 10:55 PM, Ian wrote: > > > --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@...> wrote: >> >> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal >> <kristian.nordal@...> wrote: >>> I'm also struggling with the difference between application state >>> and >>> server state (which I assume is the same as "resource state"). Can >>> someone point me to a good definition of "application state"? >> >> It's literally the *state* of the *application*. If you're looking >> at >> your bank balance, that's a different state than if you were >> preparing >> to submit a bill payment, and once you've submitted the payment, >> you're in yet another state in the application state machine. >> >> Mark. >> > > Just to add to Mark's definition, and put it in the context of > "application" and "application protocol": if we think of an > application as being computer behavior that achieves a particular > goal, we can describe an application protocol as the specification > of the legitimate interactions necessary to realize that behavior, > and application state as a snapshot of the instance of execution of > an application protocol. Thanks for the definitions. I'm still a bit confused though, so I'm going to try to use an example: Let's say we have a client/UA that is filling out an order (order + line items). In a traditional web application, the order would be in the http session, and we would add/remove line items to that order, and finally place the order. In that case I clearly see that we are talking about application state that is placed on the server. The server keeps track of it, and it's literally the state of the client/application. But if we were to store and address the order like any other resource, would that change the nature of the state? 
It would simply be another way of storing the same state, but nevertheless it would be "resources" with the same properties induced by the stateless constraint (visibility, reliability, and scalability) - given that they were stored in a way that makes that possible. To me, this looks like exactly the same kind of state (application state), simply stored/modeled differently. But in that case I don't see how or if it violates the stateless constraint. Would you say that the order in this example is always a "snapshot of the instance of execution of an application protocol", and that it will always be application state - no matter how it's modeled? And by placing it on the server it would be in violation of the REST principles, even though the stateless constraint is dealt with? -- Thanks, Kristian
On Oct 2, 2009, at 10:39 PM, Nick Gall wrote: > On Fri, Oct 2, 2009 at 3:38 PM, Subbu Allamaraju <subbu@...> > wrote: >> >> On Oct 2, 2009, at 9:17 PM, Nick Gall wrote: >> >>>> There are steady states and transient states. The >>>> application should be designed such that clients/UAs don't need >>>> to store >>>> URIs for those transient state. Not every URI can be permanent. >>>> Some URIs >>>> may, by design, remain "uncool". >>> >>> So maybe we can use URI templates for the cool URIs (my photos, my >>> blog posts, the books that Amazon offers, the songs that BBC radio >>> plays) and HATEOAS (rel links in representations) for the uncool >>> ones. >> >> I wouldn't draw a line like that, especially for the uncool ones. > > Why not? Because servers can still communicate all kinds of URIs to clients using some hypertext means. >> Looking at this thread so far, it seems to me that both TBL's >> position (that >> every URI is permanent), and Roy's HATEOAS have been over-analyzed. >> The >> reality is a mix. > > Merely saying "its a mix" isn't helpful. Without some analysis and > subsequent advice on issues such as... I agree that on its own it is not helpful, but such discussions can't be done without some context. >> Even the cool ones become uncool, and we can't continue to ignore >> that and >> preach coolness as a virtue. > > We can and should continue to preach coolness as a virtue, while > acknowledging the reality of how difficult it is to be cool (as I > think TimBL's design note does) and what to do when coolness fails. It is not about the difficulty of keeping URIs permanent/long-lived. Some URIs are by design ephemeral. Here is an example. A server may give a client a link to make some updates. For security reasons, that link may be valid for the next two minutes and gone afterwards. Here, the server did not fail to keep it cool. That link is not meant to be permanent. That's all. Subbu
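One common way a server can mint the kind of deliberately short-lived link Subbu describes, without remembering anything between requests, is an HMAC-signed expiry. This is only a sketch of the general technique (the parameter names, secret, and URIs are invented, not from the thread):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # invented; kept private by the server

def mint_update_link(resource, ttl=120, now=None):
    """Return a URI that is valid for `ttl` seconds (two minutes by
    default, matching Subbu's example)."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = ("%s:%d" % (resource, expires)).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "%s?expires=%d&sig=%s" % (resource, expires, sig)

def check_link(uri, now=None):
    """The server validates the link statelessly: the URI itself
    carries everything needed, so nothing is stored per client."""
    resource, _, query = uri.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires = int(params["expires"])
    msg = ("%s:%d" % (resource, expires)).encode()
    good = hmac.compare_digest(
        params["sig"], hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
    fresh = (now if now is not None else time.time()) < expires
    return good and fresh

uri = mint_update_link("/orders/42", ttl=120, now=1000)
print(check_link(uri, now=1060))   # True: still inside the window
print(check_link(uri, now=2000))   # False: the link has expired
```

Such a URI is "uncool" by design: its disappearance is a feature, not a failure of permanence.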
Hi Kristian The state of an order - whether it has zero line items, or five, is resource state, not application state. The state of the order as held in the http session in your example is resource state, not application state. A simple - perhaps overly simple - ordering protocol might be something like: new order created -> adding line items -> order completed -> payment received -> order dispatched. In the observable interactions between client and server, this protocol is never visible "as such": it can only be viewed through the lens of resource state. Over the course of a series of interactions, the "application" (the game being played out between the client and the server) will be in one or other of these states - as viewed from a "God's eye" point of view. Once the application state has progressed to "order completed", for example, it's no longer possible to manipulate resources so as to add new line items; it is, however, possible to manipulate resources such that the application state transitions to "payment received" (the client would do this by submitting a representation of a payment, perhaps). The client and the server cooperate to execute this protocol, but they do so by transferring representations of resource state, not representations of application state. Application state is never represented "as such"; rather, it's inferred by the client based on current representations of resource state. If the application is in the "order completed" state, the representation of the order received by the client may very well include a link that has been annotated with the link relation value "payment". 
What's important here is that the server is really only interested in maintaining resource state, which includes maintaining the integrity of the lifecycles of the resources under its control, and the invariants that hold between resources (if any). The server can't be sure the client will ever take that step of submitting a payment, so why bother holding onto application state? Application state is something that can be reconstructed "after the fact", by a client, or omniscient observer, based on the disposition of the current set of resource representations. So the order representation is always a representation of resource state. Application state, that "snapshot of the instance of execution of a protocol", can only be inferred or reconstructed from resource state. Hope this is of some help. Apologies if I've confused more than clarified; double apologies if I'm just talking plain nonsense. ian --- In rest-discuss@yahoogroups.com, Kristian Nordal <kristian.nordal@...> wrote: > > > On Oct 2, 2009, at 10:55 PM, Ian wrote: > > > > > > > --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@> wrote: > >> > >> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal > >> <kristian.nordal@> wrote: > >>> I'm also struggling with the difference between application state > >>> and > >>> server state (which I assume is the same as "resource state"). Can > >>> someone point me to a good definition of "application state"? > >> > >> It's literally the *state* of the *application*. If you're looking > >> at > >> your bank balance, that's a different state than if you were > >> preparing > >> to submit a bill payment, and once you've submitted the payment, > >> you're in yet another state in the application state machine. > >> > >> Mark. 
> >> > > > > Just to add to Mark's definition, and put it in the context of > > "application" and "application protocol": if we think of an > > application as being computer behavior that achieves a particular > > goal, we can describe an application protocol as the specification > > of the legitimate interactions necessary to realize that behavior, > > and application state as a snapshot of the instance of execution of > > an application protocol. > > Thanks for the definitions. I'm still a bit confused though, so I'm > going to try to use an example: > > Let's say we have an client/ua that is filling out an order (order + > line items). In a traditional web application, the order would be in > the http session, and we would add/remove line items to that order, > and finally place the order. In that case I clearly see that we are > talking about application state that is placed on the server. The > server keeps track of it, and it's literally the state of the client/ > application. > > But if we were to store and address the order like any other resource, > would that change the nature of the state? It would simply be another > way of storing the same state, but nevertheless it would be > "resources" with the same properties induced by the stateless > constraint (visibility, reliability, and salability) - given that they > were stored in the a way that make that possible. To me, this looks > like exactly the same kind of state (application state), simply stored/ > modeled differently. But in that case I don't see how or if it > violates the stateless constraint. > > Would you say that the order in this example is always a "snapshot of > the instance execution of an application protocol", and that it will > always be application state - no matter how it's modeled? And by > placing it on the server it would be in violation of the REST > principles, even though the stateless constraint is dealt with? > > -- > Thanks, > Kristian >
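Ian's point that application state is inferred or reconstructed from resource state can be sketched as follows (rel names, representation structure, and URIs are invented for illustration, not a proposed format):

```python
# The client never receives application state "as such": it looks at
# the links present in the current resource representation and infers
# where it is in the ordering protocol.

def infer_application_state(order):
    """Reconstruct application state from resource state alone."""
    rels = {link["rel"] for link in order.get("links", [])}
    if "add-line-item" in rels:
        return "adding line items"
    if "payment" in rels:
        return "order completed"   # the "invitation" to submit a payment
    return "payment received"

open_order = {"items": 2,
              "links": [{"rel": "add-line-item", "href": "/orders/7/items"}]}
completed = {"items": 2,
             "links": [{"rel": "payment", "href": "/orders/7/payment"}]}
paid = {"items": 2, "links": []}

print(infer_application_state(open_order))  # adding line items
print(infer_application_state(completed))   # order completed
print(infer_application_state(paid))        # payment received
```

The server only maintains the order resource and the integrity of its lifecycle; the protocol position is something the client (or an omniscient observer) reads off the representation after the fact.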
Hi Ian, That is an excellent description of state from the server's point of view. However, isn't all this opaque for the client? Subbu On Oct 3, 2009, at 6:16 PM, Ian wrote: > Hi Christian > > The state of an order - whether it has zero line items, or five, is > resource state, not application state. The state of the order as > held in the http session in your example is resource state, not > application state. > > A simple - perhaps overly simple - ordering protocol might be > something like: new order created -> adding line items -> order > completed -> payment received -> order dispatched. > > In the observable interactions between client and server, this > protocol is never visible "as such": it can only be viewed through > the lens of resource state. > > Over the course of a series of interactions, the "application" (the > game being played out between the client and the server) will be in > one or other of these states - as viewed from a "God's eye" point of > view. Once the application state has progressed to "order > completed", for example, it's no longer possible to add manipulate > resources so as to add new line items; it is, however, possible to > manipulate resources such that the application state transitions to > "payment received" (the client would do this by submitting a > representation of a payment, perhaps). > > The client and the server cooperate to execute this protocol, but > they do so by transferring representations of resource state, not > representations of application state. Application state is never > represented "as such"; rather, it's inferred by the client based on > on current representations of resource state. If the application is > in the "order completed" state, the representation of the order > received by the client may very well include a link that has been > annotated with the link relation value "payment". 
This isn't a > straightforward representation of application state, however: it's > an "invitation" to the client to transfer a representation of a > payment to this linked resource. As a side-effect of transferring > this representation, the "application" may transition to "payment > received". > > What's important here is that the server is really only interested > in maintaining resource state, which includes maintaining the > integrity of the lifecycles of the resources under its control, and > the invariants that hold between resources (if any). The server > can't be sure the client will ever take that step of submitting a > payment, so why bother holding onto application state? Application > state is something that can be reconstructed "after the fact", by a > client, or omniscient observer, based on the disposition of the > current set of resource representations. > > So the order representation is always a representation of resource > state. Application state, that "snapshot of the instance of > execution of a protocol", can only be inferred or reconstructed from > resource state. > > Hope this is of some help. Apologies if I've confused more than > clarified; double apologies if I'm just talking plain nonsense. > > ian > > --- In rest-discuss@yahoogroups.com, Kristian Nordal > <kristian.nordal@...> wrote: >> >> >> On Oct 2, 2009, at 10:55 PM, Ian wrote: >> >>> >>> >>> --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@> wrote: >>>> >>>> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal >>>> <kristian.nordal@> wrote: >>>>> I'm also struggling with the difference between application state >>>>> and >>>>> server state (which I assume is the same as "resource state"). Can >>>>> someone point me to a good definition of "application state"? >>>> >>>> It's literally the *state* of the *application*. 
If you're looking >>>> at >>>> your bank balance, that's a different state than if you were >>>> preparing >>>> to submit a bill payment, and once you've submitted the payment, >>>> you're in yet another state in the application state machine. >>>> >>>> Mark. >>>> >>> >>> Just to add to Mark's definition, and put it in the context of >>> "application" and "application protocol": if we think of an >>> application as being computer behavior that achieves a particular >>> goal, we can describe an application protocol as the specification >>> of the legitimate interactions necessary to realize that behavior, >>> and application state as a snapshot of the instance of execution of >>> an application protocol. >> >> Thanks for the definitions. I'm still a bit confused though, so I'm >> going to try to use an example: >> >> Let's say we have an client/ua that is filling out an order (order + >> line items). In a traditional web application, the order would be in >> the http session, and we would add/remove line items to that order, >> and finally place the order. In that case I clearly see that we are >> talking about application state that is placed on the server. The >> server keeps track of it, and it's literally the state of the client/ >> application. >> >> But if we were to store and address the order like any other >> resource, >> would that change the nature of the state? It would simply be another >> way of storing the same state, but nevertheless it would be >> "resources" with the same properties induced by the stateless >> constraint (visibility, reliability, and salability) - given that >> they >> were stored in the a way that make that possible. To me, this looks >> like exactly the same kind of state (application state), simply >> stored/ >> modeled differently. But in that case I don't see how or if it >> violates the stateless constraint. 
>> >> Would you say that the order in this example is always a "snapshot of >> the instance execution of an application protocol", and that it will >> always be application state - no matter how it's modeled? And by >> placing it on the server it would be in violation of the REST >> principles, even though the stateless constraint is dealt with? >> >> -- >> Thanks, >> Kristian >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
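Ian's example protocol (new order created -> adding line items -> order completed -> payment received -> order dispatched) and the server's job of "maintaining the integrity of the lifecycles of the resources under its control" can be sketched as a guard over resource manipulations. The class and method names are assumptions made up for this sketch, not a real API:

```python
# Sketch of a server enforcing resource lifecycle invariants for the
# ordering protocol Ian describes. All names are illustrative.

TRANSITIONS = {
    "new order created": {"adding line items"},
    "adding line items": {"adding line items", "order completed"},
    "order completed": {"payment received"},
    "payment received": {"order dispatched"},
}

class Order:
    def __init__(self):
        self.state = "new order created"
        self.line_items = []

    def _move(self, target):
        # The invariant: only protocol-legal transitions are allowed.
        if target not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state!r} to {target!r}")
        self.state = target

    def add_line_item(self, item):
        # Once the order is completed, this guard rejects new items.
        self._move("adding line items")
        self.line_items.append(item)

    def complete(self):
        self._move("order completed")

    def pay(self):
        self._move("payment received")
```

The server never tracks which client is "in" which protocol step; it only refuses manipulations that would break a resource's lifecycle.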
On Sat, Oct 3, 2009 at 1:17 PM, Subbu Allamaraju <subbu@...> wrote: > That is an excellent description of state from the server's point of > view. However, isn't all this opaque for the client? I think if it is a machine client, it would need to keep track of the application state, follow the links, know its goal, etc. (As a human client piloting a browser does.)
Thanks for your apology :) but now I'm even more confused. What you say, basically, is that from an operational point of view, "application state" does not exist, or, if it exists, cannot be known by us mere mortals... I think REST lacks lots of *formal* definitions, and one of those is a formal definition of "application". Things like "the game being played out between the client and the server", "computer behavior that achieves a particular goal" or "everything you can do with a computer" are anything but "formal" (in the sense of being described without ambiguities, like RFC 2119, for example). Without such "formal" definitions it is very difficult, if not impossible, to expand on these concepts, because there is no authoritative resource that gives us a common ground for discussion. So if we don't have a formal definition of "application", it's even harder to have a definition of "application state" that is easily grasped and explained. I really think it would be very useful to have a place, like a wiki or something, where the community could start such work, along with best practices, rules of thumb and practical things like that, like we started to discuss some posts ago in the body@rest thread, which seems to have died under the weight of the rest-* discussion. Now, regarding "application state", I'm satisfied to simply think of it as "the state that the resource just sent back to me, the client", which implies that "resource state" is the ephemeral state of a resource just before it is sent to the client. It can be anything after that, which is OK, because the only resource state that interests me is the one that will be transferred to me, the client, in response to my request. Now it's my turn to apologize, not for being confusing but probably for over-simplifying and/or being just plain wrong.
Hi Subbu Yes, I think most of this is opaque to clients. Perhaps I implied otherwise when I suggested clients might "infer" application state from received representations: I don't in fact think that's necessary or desirable. It's simpler than that. Clients are interested in achieving particular goals, and they evaluate received representations of resource state in light of those goals; that is, they choose to operate hypermedia controls - links or forms - based on their understanding of how the control's semantic context (i.e. the link relation value) relates to their current goal. In all this, the client need not know it's participating in a particular protocol, or be aware of the overall state of the distributed application. Is that in line with what you meant by this being opaque to the client? ian --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote: > > Hi Ian, > > That is an excellent description of state from the server's point of > view. However, isn't all this opaque for the client? > > Subbu > > On Oct 3, 2009, at 6:16 PM, Ian wrote: > > > Hi Christian > > > > The state of an order - whether it has zero line items, or five, is > > resource state, not application state. The state of the order as > > held in the http session in your example is resource state, not > > application state. > > > > A simple - perhaps overly simple - ordering protocol might be > > something like: new order created -> adding line items -> order > > completed -> payment received -> order dispatched. > > > > In the observable interactions between client and server, this > > protocol is never visible "as such": it can only be viewed through > > the lens of resource state. > > > > Over the course of a series of interactions, the "application" (the > > game being played out between the client and the server) will be in > > one or other of these states - as viewed from a "God's eye" point of > > view. 
Once the application state has progressed to "order > > completed", for example, it's no longer possible to add manipulate > > resources so as to add new line items; it is, however, possible to > > manipulate resources such that the application state transitions to > > "payment received" (the client would do this by submitting a > > representation of a payment, perhaps). > > > > The client and the server cooperate to execute this protocol, but > > they do so by transferring representations of resource state, not > > representations of application state. Application state is never > > represented "as such"; rather, it's inferred by the client based on > > on current representations of resource state. If the application is > > in the "order completed" state, the representation of the order > > received by the client may very well include a link that has been > > annotated with the link relation value "payment". This isn't a > > straightforward representation of application state, however: it's > > an "invitation" to the client to transfer a representation of a > > payment to this linked resource. As a side-effect of transferring > > this representation, the "application" may transition to "payment > > received". > > > > What's important here is that the server is really only interested > > in maintaining resource state, which includes maintaining the > > integrity of the lifecycles of the resources under its control, and > > the invariants that hold between resources (if any). The server > > can't be sure the client will ever take that step of submitting a > > payment, so why bother holding onto application state? Application > > state is something that can be reconstructed "after the fact", by a > > client, or omniscient observer, based on the disposition of the > > current set of resource representations. > > > > So the order representation is always a representation of resource > > state. 
Application state, that "snapshot of the instance of > > execution of a protocol", can only be inferred or reconstructed from > > resource state. > > > > Hope this is of some help. Apologies if I've confused more than > > clarified; double apologies if I'm just talking plain nonsense. > > > > ian > >
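Ian's description of a client operating hypermedia controls by matching a link relation against its current goal, without any awareness of the overall protocol, might look like this in code. The representation shape and the rel values are assumptions invented for the sketch:

```python
# Sketch: a client picks a hypermedia control by link relation,
# driven only by its current goal. Rel values are invented.

def choose_control(representation: dict, goal: str):
    """Return the link whose relation matches the client's goal, if any."""
    for link in representation.get("links", []):
        if link.get("rel") == goal:
            return link
    return None  # goal not achievable from this state

# A hypothetical order representation in the "order completed" state,
# annotated with a "payment" link as in Ian's example.
order_repr = {
    "state": {"total": 42.0},
    "links": [
        {"rel": "self", "href": "/orders/99"},
        {"rel": "payment", "href": "/orders/99/payment"},
    ],
}
```

The client here never names the protocol state; it only asks "does this representation offer a control matching my goal?"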
Mark Baker wrote: > > Kristian Nordal wrote: > > > I'm also struggling with the difference between application state > > and server state (which I assume is the same as "resource state"). > > Can someone point me to a good definition of "application state"? > > It's literally the *state* of the *application*. If you're looking at > your bank balance, that's a different state than if you were preparing > to submit a bill payment, and once you've submitted the payment, > you're in yet another state in the application state machine. > Sure, if you mean "steady state". The application is what the user is trying to do. I intend to open my browser, select my news bookmark, browse the headlines, and close the browser. That application, from a REST perspective, is "following a link". The application is complete when the transition to the next steady-state has completed, regardless of how long it takes me to browse the headlines before closing my browser. Assume my browser has no cache, connects to an accelerator component at my ISP, there's nothing between the accelerator and the origin server component, and that I've read REST section 5.3. When I select the bookmark in my client component (browser), I see a Web page begin to incrementally render. Let's take a snapshot of that moment in time, for further analysis. We see the client connector has two connections open to the ISP accelerator's server connector. One connection is streaming the HTML representation to the client. The client has parsed the <head> section, and the second connection is streaming a linked CSS file to the client. The accelerator component has also parsed the HTML representation it is serving (after receiving a 304 response from the origin server, apparently someone else beat me to the news this morning). 
But it doesn't have the weather map cached, so we see the accelerator's client component has connected to the origin server to prefetch the inline image it isn't supposed to cache (even though it's going to for a moment). Following the trail across the wire to the server connector on the origin server, we see it's waiting for the server to generate the image of the current weather map, which it will then send to the accelerator, which will in turn send it on to me when my browser gets around to requesting it. During this transition between steady-states, the application state consists of the state of all three components involved at the frozen instant in time that I analyzed. So if we analyze steady-states only, the scope of the application is entirely within the client component. In my example snapshot, the scope of the application encompasses three separate components: client, accelerator, origin server. Notice that the origin server knows nothing about the client, only the accelerator, yet its state is (at that moment) part of the application state perceived by the user attempting to "follow a link". Hope this helps, Eric
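Eric's frozen-instant snapshot, in which application state spans client, accelerator, and origin server during a transition but collapses to the client alone at steady state, could be modeled roughly like this. The component names and fields are invented for illustration:

```python
# Sketch of Eric's snapshot: during a transition, application state is
# the union of every component still doing work on behalf of the user's
# "follow a link" application. All fields below are invented.

def application_state(components: dict) -> dict:
    """Components still busy with the transition are in the application's scope."""
    return {name: s for name, s in components.items() if s["busy"]}

mid_transition = {
    "client":      {"busy": True,  "detail": "rendering HTML, fetching CSS"},
    "accelerator": {"busy": True,  "detail": "prefetching the weather map"},
    "origin":      {"busy": True,  "detail": "generating the weather map"},
}

steady_state = {
    "client":      {"busy": True,  "detail": "holds the rendered representation"},
    "accelerator": {"busy": False, "detail": "idle"},
    "origin":      {"busy": False, "detail": "idle"},
}
```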
António Mota wrote: > > I think REST lacks lots of *formal* definitions and one of those is > the formal definition of "application". > "Application" is clearly defined in REST, sec. 5.3.3: "Since REST is specifically targeted at distributed information systems, it views an application as a cohesive structure of information and control alternatives through which a user can perform a desired task. For example, looking-up a word in an on-line dictionary is one application, as is touring through a virtual museum, or reviewing a set of class notes to study for an exam." The simplest REST application is "following a link". After the steady-state is reached, the client has all the information it needs to allow the user (human or machine) to choose the next steady-state, in pursuit of their overall goal, whether that goal is reading an article or paying for the items in a shopping cart. -Eric
> > The accelerator component has also parsed the HTML representation it > is serving (after receiving a 304 response from the origin server, > apparently someone else beat me to the news this morning). But it > doesn't have the weather map cached, so we see the accelerator's > client component has connected to the origin server to prefetch the > inline image it isn't supposed to cache (even though it's going to > for a moment). > Sorry folks, my bad. Should read, "the accelerator's client connector" because the accelerator is the component. -Eric
Crap. My bad again, folks! The accelerator component doesn't have a server connector. It has a _cache_ connector there. -Eric
António Mota wrote: > > I think REST lacks lots of *formal* definitions... > I have to disagree. If the formal definitions weren't so precise, I wouldn't have seen the need to embarrass myself... twice... in this thread by correcting imprecise wording that I didn't catch until I'd read what I posted a first, then second, time because something still didn't feel right. Any formal definitions you need are laid out in the first four chapters of Roy's thesis. Where terms are not defined, they are footnoted. While the thesis assumes the reader understands the Principle of Generality, it does provide a footnote which gives a reference to the accepted formal definition. Roy takes some terms whose definitions are vague, and gives a precise definition of their use within his thesis. What the dissertation isn't intended to be is some sort of step-by-step guide describing what an application is, how to use and/or define media types properly, when and why to implement content negotiation, or any such DIY material. Those looking for specific enlightenment within REST will come away confused. It's an architectural style, not a blueprint. -Eric
Suppose a client retrieves an employee record GET /employees/552 and then changes the record's surname with POST /employees/552/surname [new name] and then receives 303 See Other Location: /employees/552 Is the client in a steady state now, or only after a subsequent GET to /employees/552 to update the changed record representation? Or does that question not make any sense in the absence of additional semantics beyond the HTTP specs? Jan On Oct 3, 2009, at 6:16 PM, Ian wrote: > Hi Christian > > The state of an order - whether it has zero line items, or five, is > resource state, not application state. The state of the order as > held in the http session in your example is resource state, not > application state. > > A simple - perhaps overly simple - ordering protocol might be > something like: new order created -> adding line items -> order > completed -> payment received -> order dispatched. > > In the observable interactions between client and server, this > protocol is never visible "as such": it can only be viewed through > the lens of resource state. > > Over the course of a series of interactions, the "application" (the > game being played out between the client and the server) will be in > one or other of these states - as viewed from a "God's eye" point of > view. Once the application state has progressed to "order > completed", for example, it's no longer possible to manipulate > resources so as to add new line items; it is, however, possible to > manipulate resources such that the application state transitions to > "payment received" (the client would do this by submitting a > representation of a payment, perhaps). > > The client and the server cooperate to execute this protocol, but > they do so by transferring representations of resource state, not > representations of application state. Application state is never > represented "as such"; rather, it's inferred by the client based > on current representations of resource state. 
If the application is > in the "order completed" state, the representation of the order > received by the client may very well include a link that has been > annotated with the link relation value "payment". This isn't a > straightforward representation of application state, however: it's > an "invitation" to the client to transfer a representation of a > payment to this linked resource. As a side-effect of transferring > this representation, the "application" may transition to "payment > received". > > What's important here is that the server is really only interested > in maintaining resource state, which includes maintaining the > integrity of the lifecycles of the resources under its control, and > the invariants that hold between resources (if any). The server > can't be sure the client will ever take that step of submitting a > payment, so why bother holding onto application state? Application > state is something that can be reconstructed "after the fact", by a > client, or omniscient observer, based on the disposition of the > current set of resource representations. > > So the order representation is always a representation of resource > state. Application state, that "snapshot of the instance of > execution of a protocol", can only be inferred or reconstructed from > resource state. > > Hope this is of some help. Apologies if I've confused more than > clarified; double apologies if I'm just talking plain nonsense. > > ian > > --- In rest-discuss@yahoogroups.com, Kristian Nordal > <kristian.nordal@...> wrote: >> >> >> On Oct 2, 2009, at 10:55 PM, Ian wrote: >> >>> >>> >>> --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@> wrote: >>>> >>>> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal >>>> <kristian.nordal@> wrote: >>>>> I'm also struggling with the difference between application state >>>>> and >>>>> server state (which I assume is the same as "resource state"). Can >>>>> someone point me to a good definition of "application state"? 
>>>> >>>> It's literally the *state* of the *application*. If you're looking >>>> at >>>> your bank balance, that's a different state than if you were >>>> preparing >>>> to submit a bill payment, and once you've submitted the payment, >>>> you're in yet another state in the application state machine. >>>> >>>> Mark. >>>> >>> >>> Just to add to Mark's definition, and put it in the context of >>> "application" and "application protocol": if we think of an >>> application as being computer behavior that achieves a particular >>> goal, we can describe an application protocol as the specification >>> of the legitimate interactions necessary to realize that behavior, >>> and application state as a snapshot of the instance of execution of >>> an application protocol. >> >> Thanks for the definitions. I'm still a bit confused though, so I'm >> going to try to use an example: >> >> Let's say we have an client/ua that is filling out an order (order + >> line items). In a traditional web application, the order would be in >> the http session, and we would add/remove line items to that order, >> and finally place the order. In that case I clearly see that we are >> talking about application state that is placed on the server. The >> server keeps track of it, and it's literally the state of the client/ >> application. >> >> But if we were to store and address the order like any other >> resource, >> would that change the nature of the state? It would simply be another >> way of storing the same state, but nevertheless it would be >> "resources" with the same properties induced by the stateless >> constraint (visibility, reliability, and salability) - given that >> they >> were stored in the a way that make that possible. To me, this looks >> like exactly the same kind of state (application state), simply >> stored/ >> modeled differently. But in that case I don't see how or if it >> violates the stateless constraint. 
>> >> Would you say that the order in this example is always a "snapshot of >> the instance execution of an application protocol", and that it will >> always be application state - no matter how it's modeled? And by >> placing it on the server it would be in violation of the REST >> principles, even though the stateless constraint is dealt with? >> >> -- >> Thanks, >> Kristian >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
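Jan's exchange can at least be made concrete with a toy simulation: a POST that answers 303 See Other with a Location header, and a client that follows it with a GET. Whether "steady state" is reached before or after that GET is exactly the open question; the code below only shows that the client's copy of the record is stale until the follow-up GET happens. All resources and handler names here are invented:

```python
# Toy simulation of Jan's exchange: POST .../surname -> 303 See Other,
# then GET on the Location. Everything here is invented for illustration.

server = {"/employees/552": {"surname": "Smith"}}

def post_surname(path, new_name):
    employee = path.rsplit("/", 1)[0]          # "/employees/552"
    server[employee]["surname"] = new_name
    return 303, {"Location": employee}          # 303 See Other

def get(path):
    return 200, dict(server[path])              # fresh representation

# Client retrieves, updates, then follows the redirect.
_, client_copy = get("/employees/552")
status, headers = post_surname("/employees/552/surname", "Jones")
stale = client_copy["surname"]                  # still the old value here
_, client_copy = get(headers["Location"])       # only now is the copy current
```

The 303 response itself carries no representation of the changed record, so the client's view cannot be current until it dereferences the Location.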
A separate reply from earlier... Can someone point out exactly what is transient in the example? The reservation? No. It doesn't go away, as it's a record of the purchase. The transactional state of the reservation? No, because really it is just a representation of the "fulfilled" or "unfulfilled" state. You could say that the transaction resource itself is transient, as it was only used by the client to fulfill a greater task: both an airline and a hotel reservation. But what it turns into is a record of the entire transaction with the Travel Agent. For example, what if a law enforcement agency were investigating a crime? They would follow the links: ticket -> reservation reservation -> transaction transaction -> transaction-participants transaction-participants -> hotel reservation hotel-reservation -> room room -> arrest. Subbu Allamaraju wrote: > > > Of course. But I don't think "transaction resources" that Bill is > describing are similar to credit card authorizations or purchase > orders. The example that Bill outlined involves transient per client > state. Marking such state as permanent resources is certainly > possible, but towards what end? To prove that such things can be done > RESTfully? > > Subbu > > On Oct 2, 2009, at 10:58 PM, Bediako George wrote: > > > Thanks for the reply Subbu, > > > > Would it not be the case that most data created by clients would for > > the most part be "per-client" in nature? > > > > For instance if, for the first time ever, I buy a book on Amazon, > > there are many resources that will be created because of that > > transaction. In your opinion, will the order, or the credit card > > authorization, or even my mailing address be considered "per-client" > > state? > > > > On Fri, Oct 2, 2009 at 3:46 PM, Subbu Allamaraju <subbu@... 
> <mailto:subbu%40subbu.org>> > > wrote: > > > > On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: > > > > Yeah, somebody will have to explain to me why (or if) the Reservation > > example I gave breaks the stateless constraint of REST. Where I think > > > > Well - under the disguise of a "transaction", the server is > > maintaining per-client state. Instead of answering your question > > directly, let me ask you whether you have examined the scalability > > characteristics of your proposed design. > > > > It may be worthwhile to start from basics, apply each constraint one > > by one, and see whether your approach benefits. The discussion around > > the kind of resources needed, media types, link rels, link headers > > vs link elements in some XML format are implementation details. So > > far, this post does not make a case for why a transactional > > application should be built the way you propose. > > > > Here is a minor point. There is no such thing as a "sub-resource". > > That term may be part of the mental model of some developers, but it > > has no consequence to the protocol. > > > > Subbu > > > > > > > > -- > > Bediako George > > Partner - Lucid Technics, LLC > > Think Clearly, Think Lucid > > www.lucidtechnics.com > > (p) 202.683.7486 (f) 703.563.6279 > > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
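Bill's audit-trail argument is easy to picture as a plain link walk. A toy sketch, under stated assumptions: the link table and resource names below are invented stand-ins for the real URIs and link relations an investigator's client would dereference.

```python
# Hypothetical sketch of following Bill's link chain from a ticket to
# the final record. A real client would GET each URI and read a typed
# link from the representation; here a dict stands in for that.

LINKS = {
    "ticket": "reservation",
    "reservation": "transaction",
    "transaction": "transaction-participants",
    "transaction-participants": "hotel-reservation",
    "hotel-reservation": "room",
}

def follow(start):
    """Walk the link chain until a resource with no outbound link."""
    trail = [start]
    while trail[-1] in LINKS:
        trail.append(LINKS[trail[-1]])
    return trail

trail = follow("ticket")
```

The point of the sketch is that the transaction resource survives as a permanent, linkable record rather than disappearing once the coordination is done.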
António Mota wrote: > > Thanks for your apology :) but I'm really more confused. What you > say, basically, is that from an operational point of view, an > "application state" does not exist, or if it exists cannot be known by > us, mere mortals... >

In a RESTful system, once a user has completed their series of application interactions, a history of the steady-states is contained in the browser history. Since the ancillary requests which make up each steady-state can be served by servers other than the one that served the representation, or by caches, we mere mortals can't deduce the "application" from the perspective of the server. POST, PUT and DELETE operations will show up in the log, but not give us a complete picture.

REST describes a "data view" of the series of operations which make up the transition from one steady-state to the next. To quote from the thesis, "[T]he application details are hidden from the server by the generic connector interface..." and furthermore, "Each application defines goals for the underlying system, against which the system's performance can be measured."

A RESTful system may have a godzillion possible "applications". None will likely ever encompass the entire scope of resources on the origin server. But it is certainly possible to identify common applications that any user will need to execute to achieve the goals of the site. For an e-commerce site, one such application is the shopping cart. We can't reliably measure its performance from the server, because we can't tell from the server logs what constitutes an "application". But we can simulate RESTful shopping-cart interaction using a shell script and libcurl. We can run a series of shopping-cart interactions, of various sizes, and even run them simultaneously from widely dispersed shell accounts. 
The purpose is to measure user- or machine-perceived performance (depending on media type, a machine can start transferring ancillary resource representations before the requested representation has finished loading -- or not) where it counts, which is on the client. So REST makes it possible to not only build a better shopping cart, but enables you to benchmark it as well, and identify which state transitions are taking too long. The results will allow you to identify the cause of all those dropped shopping carts you see in the server logs -- maybe you weren't aware that for some users, checkout was taking several minutes longer than necessary.

Consider these curl-scripted application simulations as unit tests for REST systems. Every time you identify a common interaction pattern for a RESTful service, you have something you can call an application, script from start to finish, and measure the performance across all the ensuing state transitions -- as well as ensure that Content-Length is being sent rather than Transfer-Encoding: chunked, among other tests you can run against the generated output from the curl scripts.

-Eric
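Eric's "script an application, time its transitions" idea can be sketched in miniature. Everything here is illustrative: the `fetch` stub stands in for a real libcurl/HTTP call, and the step names, URIs, and the one-second threshold are invented for the example.

```python
import time

def fetch(uri):
    """Stand-in for an HTTP GET; a real script would use libcurl/curl."""
    time.sleep(0.001)          # simulate a little network latency
    return "<html>...</html>"  # representation of the next steady-state

def run_application(steps):
    """Execute each state transition and record client-perceived latency."""
    timings = {}
    for name, uri in steps:
        start = time.monotonic()
        fetch(uri)
        timings[name] = time.monotonic() - start
    return timings

# one scripted "application": a shopping-cart run from entry to checkout
cart_run = [
    ("home",     "http://shop.example/"),
    ("add-item", "http://shop.example/cart?add=42"),
    ("checkout", "http://shop.example/checkout"),
]

timings = run_application(cart_run)
slow = [name for name, t in timings.items() if t > 1.0]  # flag slow transitions
```

Running many such scripted applications concurrently, from dispersed machines, approximates the benchmark Eric describes without needing any visibility into the server.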
Jan Algermissen wrote: > > Suppose client retrieves an employee record > > GET /employees/552 > > and then changes the record's surname with > > POST /employees/552/surname > > [new name] > > and then receives > > 303 See Other > Location: /employees/552 > > Is the client in a steady state now or only after a subsequent GET > to /employees/552 to update the changed record representation? >

Only after. A redirect is not a steady-state containing a hypermedia representation presenting the user with a selection of further state transitions to choose from.

-Eric

> > Or does that question not make any sense in the absence of > additional semantics beyond the HTTP specs? > > Jan > > > > > > On Oct 3, 2009, at 6:16 PM, Ian wrote: > > > Hi Christian > > > > The state of an order - whether it has zero line items, or five, > > is resource state, not application state. The state of the order > > as held in the http session in your example is resource state, not > > application state. > > > > A simple - perhaps overly simple - ordering protocol might be > > something like: new order created -> adding line items -> order > > completed -> payment received -> order dispatched. > > > > In the observable interactions between client and server, this > > protocol is never visible "as such": it can only be viewed through > > the lens of resource state. > > > > Over the course of a series of interactions, the > > "application" (the game being played out between the client and the > > server) will be in one or other of these states - as viewed from a > > "God's eye" point of view. Once the application state has > > progressed to "order completed", for example, it's no longer > > possible to manipulate resources so as to add new line items; > > it is, however, possible to manipulate resources such that the > > application state transitions to "payment received" (the client > > would do this by submitting a representation of a payment, perhaps). 
> > > > The client and the server cooperate to execute this protocol, but > > they do so by transferring representations of resource state, not > > representations of application state. Application state is never > > represented "as such"; rather, it's inferred by the client based > > on current representations of resource state. If the application > > is in the "order completed" state, the representation of the order > > received by the client may very well include a link that has been > > annotated with the link relation value "payment". This isn't a > > straightforward representation of application state, however: it's > > an "invitation" to the client to transfer a representation of a > > payment to this linked resource. As a side-effect of transferring > > this representation, the "application" may transition to "payment > > received". > > > > What's important here is that the server is really only interested > > in maintaining resource state, which includes maintaining the > > integrity of the lifecycles of the resources under its control, > > and the invariants that hold between resources (if any). The > > server can't be sure the client will ever take that step of > > submitting a payment, so why bother holding onto application state? > > Application state is something that can be reconstructed "after the > > fact", by a client, or omniscient observer, based on the > > disposition of the current set of resource representations. > > > > So the order representation is always a representation of resource > > state. Application state, that "snapshot of the instance of > > execution of a protocol", can only be inferred or reconstructed > > from resource state. > > > > Hope this is of some help. Apologies if I've confused more than > > clarified; double apologies if I'm just talking plain nonsense. 
> > > > ian > > > > --- In rest-discuss@yahoogroups.com, Kristian Nordal > > <kristian.nordal@...> wrote: > >> > >> > >> On Oct 2, 2009, at 10:55 PM, Ian wrote: > >> > >>> > >>> > >>> --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@> wrote: > >>>> > >>>> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal > >>>> <kristian.nordal@> wrote: > >>>>> I'm also struggling with the difference between application > >>>>> state and > >>>>> server state (which I assume is the same as "resource state"). > >>>>> Can someone point me to a good definition of "application > >>>>> state"? > >>>> > >>>> It's literally the *state* of the *application*. If you're > >>>> looking at > >>>> your bank balance, that's a different state than if you were > >>>> preparing > >>>> to submit a bill payment, and once you've submitted the payment, > >>>> you're in yet another state in the application state machine. > >>>> > >>>> Mark. > >>>> > >>> > >>> Just to add to Mark's definition, and put it in the context of > >>> "application" and "application protocol": if we think of an > >>> application as being computer behavior that achieves a particular > >>> goal, we can describe an application protocol as the specification > >>> of the legitimate interactions necessary to realize that behavior, > >>> and application state as a snapshot of the instance of execution > >>> of an application protocol. > >> > >> Thanks for the definitions. I'm still a bit confused though, so I'm > >> going to try to use an example: > >> > >> Let's say we have a client/ua that is filling out an order (order > >> + line items). In a traditional web application, the order would > >> be in the http session, and we would add/remove line items to that > >> order, and finally place the order. In that case I clearly see > >> that we are talking about application state that is placed on the > >> server. The server keeps track of it, and it's literally the state > >> of the client/application. 
> >> > >> But if we were to store and address the order like any other > >> resource, > >> would that change the nature of the state? It would simply be > >> another way of storing the same state, but nevertheless it would be > >> "resources" with the same properties induced by the stateless > >> constraint (visibility, reliability, and scalability) - given that > >> they > >> were stored in a way that makes that possible. To me, this looks > >> like exactly the same kind of state (application state), simply > >> stored/modeled differently. But in that case I don't see how or if it > >> violates the stateless constraint. > >> > >> Would you say that the order in this example is always a "snapshot > >> of the instance execution of an application protocol", and that it > >> will always be application state - no matter how it's modeled? And > >> by placing it on the server it would be in violation of the REST > >> principles, even though the stateless constraint is dealt with? > >> > >> -- > >> Thanks, > >> Kristian > >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > --------------------------------------
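Jan's POST-then-303 exchange, and Eric's point that the client only reaches a steady state after the follow-up GET, can be sketched with a toy in-process "server". The resource table and handler functions below are assumptions for illustration, not real HTTP machinery.

```python
# Toy server state: one employee record, keyed by URI.
RESOURCES = {"/employees/552": "<employee><surname>old name</surname></employee>"}

def post(uri, body):
    """Toy POST handler: update the surname, answer with a redirect."""
    RESOURCES["/employees/552"] = f"<employee><surname>{body}</surname></employee>"
    return 303, {"Location": "/employees/552"}, None

def get(uri):
    """Toy GET handler: return the current representation."""
    return 200, {}, RESOURCES[uri]

status, headers, _ = post("/employees/552/surname", "new name")
steady = None
if status == 303:
    # Not yet a steady state: the redirect carries no hypermedia
    # representation, so the client follows Location to fetch the
    # updated record before presenting further transitions.
    status, _, steady = get(headers["Location"])
```

Only after that second exchange does the client hold a representation from which it can choose its next transition, which is Eric's "only after".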
Well, that doesn't mean that a "form-like-thing" couldn't work for machine-driven clients. I know I keep saying this on this list, but: take a look at CCXML! It manages to implement form-like requests but is not driven by human input. It implements a state machine construct that is driven by events from an underlying platform. On each transition it can run javascript and send messages back down to the platform. It can also put together a GET or a POST using javascript variables to set the equivalent of form inputs. This sort of model could work for more than just call control (what CCXML is designed for). Hypermedia tells the client how to map inputs to an HTTP request -- that basic concept is the same in HTML and CCXML.

Andrew

--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > > > Solomon Duskis wrote: > > Using link doesn't seem all that natural to me. In HTML that question > > is answered pretty simply: > > > > <form action="http://example.com/customers" method="GET" name="CustomersByZip"> > > <input type="text" name="zip" /> > > </form> > > > > I've said this before when you posted this idea... But... > > <form> is rendering metadata meant to help a *Human Being* make a > decision. Machine-based clients are already going to know how to fill > out the "form" ahead of time, so the rendering information isn't needed > when transmitting representations. Only the link (or link template) is > interesting to a machine-based client. > > I don't know what the convention is, but links can and do provide URLs > for their description. That description URL is, IMO, the appropriate > place for "form" metadata. > > Well, at least, that is my theory on how things might or should work.... > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
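The CCXML-style idea of filling a "form-like" control from machine state (rather than human input) might look like this in miniature. The control structure below is an invented stand-in for a real hypermedia control, not any actual media type.

```python
from urllib.parse import urlencode

# A hypothetical hypermedia control, modeled on Solomon's HTML form:
# the server advertises the action URI, method, and input names; the
# client fills the inputs from its own variables, the way CCXML fills
# request parameters from javascript variables.
control = {
    "name":   "CustomersByZip",
    "method": "GET",
    "action": "http://example.com/customers",
    "inputs": ["zip"],
}

def fill(control, variables):
    """Build a request from a hypermedia control plus client state."""
    params = {name: variables[name] for name in control["inputs"]}
    return control["method"], control["action"] + "?" + urlencode(params)

# the "zip" value comes from machine state, not a rendered text box
method, uri = fill(control, {"zip": "90210"})
```

The rendering hints (`<input type="text">`) drop away; what the machine client needs is only the mapping from its variables to the request, which is the part both HTML and CCXML express.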
wahbedahbe wrote: > > > Well, that doesn't mean that a "form-like-thing" couldn't work for > machine driven clients. > I wasn't saying it was good or bad or needed or not needed. I was just saying that "self description" metadata isn't going to be used by a large set of clients. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Subbu Allamaraju wrote: > > On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: > >> Yeah, somebody will have to explain to me why (or if) the Reservation >> example I gave breaks the stateless constraint of REST. Where I think > > Well - under the disguise of a "transaction", the server is maintaining > per-client state.

There is no per-client state. A Reservation is interesting to a Travel Agent, a Customer, and to an Airline. A credit or debit is interesting to a Credit Card Account and to the Merchant (and to Visa and Mastercard). Again, "fulfilled" for a reservation and "posted/settled" for a credit or debit are valid non-session-based states. The fact that these states have a different representation (a tx-document) shouldn't matter.

> Instead of answering your question directly, let me > ask you whether you have examined the scalability characteristics of > your proposed design. >

Integration scenarios many times require coordination between many actors. It should be irrelevant if the client delegates this coordination to a different service. All a transaction manager does is guarantee that something happens, which is hard to implement on a per-application basis. This is why transaction managers exist.

> It may be worthwhile to start from basics, apply each constraint one by > one, and see whether your approach benefits. The discussion around the kind > of resources needed, media types, link rels, link headers vs link > elements in some XML format are implementation details. So far, this > post does not make a case for why a transactional application should be > built the way you propose. >

Well, this approach is interesting because the actors being coordinated can negotiate with the transaction manager on the exact protocol. For example, the reservation resource is posted with a "transaction" link. The reservation service can GET that link with an Accept header of the preferred transaction formats it desires to interact with. 
If the reservation service does not know how to interact with the transaction representation, it can barf at reservation creation. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
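Bill's negotiation step, where the reservation service GETs the "transaction" link with an Accept header listing the formats it understands and refuses the reservation if none match, could be sketched like this. The media type names and the in-process "GET" are assumptions for illustration.

```python
# Formats the toy transaction manager can represent itself in.
SUPPORTED = {"application/tx+xml", "application/tx+json"}

def get_transaction(uri, accept):
    """Toy GET: serve the first acceptable transaction format, else 406."""
    for media_type in accept:  # Accept list in the client's preference order
        if media_type in SUPPORTED:
            return 200, media_type
    return 406, None  # the reservation service can "barf" at creation here

# reservation service understands tx+json; negotiation succeeds
status, chosen = get_transaction("/tx/1", ["application/tx+json", "text/plain"])

# a service with no common format gets refused
refused, none_chosen = get_transaction("/tx/1", ["text/plain"])
```

The interesting property is that the coordination protocol is discovered through the link and conneg at runtime, not baked into the participants ahead of time.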
I never said anything in relation to Mr. Fielding's thesis, nor was I implying that said thesis lacked formality. Even less did I ever look at said thesis as a step-by-step DIY guide. I do understand that said thesis is a dissertation presented as part of a philosophical academic degree. As such I don't think that such a dissertation, which I didn't refer to in the first place, should have the purpose or necessity of being formal, as are the documents from the W3C or IETF or OASIS and others.

When I referred to REST in my post I was referring to the REST community, of which the members of this list are a subset, and as such I was referring to the necessity of having a community-driven resource, such as a wiki, where the community could agree on those "formal" definitions (even if those definitions are just a copy&paste of excerpts from Fielding's dissertation), which could be used as a reference by everybody, especially by newcomers.

However, regrettably, judging by several conversations on this list, it seems that the REST community likes to cultivate some sort of obscurantism at the level of concepts, and likes to avoid all the practical questions of developing software based on the REST style. For sure Mr. Fielding's dissertation is not a step-by-step DIY guide, but I think that everybody would profit from a site containing such "practical things" as formal definitions, rules of thumb, best practices...

_______________________________________________

2009/10/3 Eric J. Bowman <eric@bisonsystems.net>: > António Mota wrote: > >> >> I think REST lacks lots of *formal* definitions... >> > > I have to disagree. If the formal definitions weren't so precise, I > wouldn't have seen the need to embarrass myself... twice... in this > thread by correcting imprecise wording that I didn't catch until I'd > read what I posted a first, then a second, time because something still > didn't feel right. > > Any formal definitions you need are laid out in the first four chapters > of Roy's thesis. 
Where terms are not defined, they are footnoted. > While the thesis assumes the reader understands the Principle of > Generality, it does provide a footnote which gives a reference to the > accepted formal definition. Roy takes some terms whose definitions are > vague, and gives a precise definition of their use within his thesis. > > What the dissertation isn't intended to be, is some sort of step-by-step > guide describing what an application is, how to use and/or define > media types properly, when and why to implement content negotiation, or > any such DIY material. Those looking for specific enlightenment within > REST will come away confused. It's an architectural style, not a > blueprint. > > -Eric >
Hello. Maybe I will repeat some things, but I will try a different approach to explain this.

I want to play blackjack. I have some friends and we all go to the online blackjack service. Each one of us enters the application, which allows us to play BJ. The main screen requests our IDs. A minute later, all but one of us are ready to play; the one who isn't ready is the one who forgot his ID number. Let's see what happens there. We are, say, ten clients, each one using the application; nine are ready to play while one is waiting for ID input. As you can see, there is one application, but each client has a different application state now. And each client knows its state clearly.

So we left our friend looking for his number and we started to request cards. We knew there was a dealer at the other end of the line, but we were not able to know if we were all playing with the same dealer! Still, there was just one mass of cards. Yes, each card requested changed the mass. Still more, each request may be served by a different dealer! Here, the mass of cards is a resource, and the dealers are the servers. Now, can you clearly see the difference between the states? Each request will change the resource state, but that is not the app state. Each client has its own state of the game, and that is the app state. The actual server state may be dealer working or dealer on a break. So server state is actually another very different thing.

So, what is the REST constraint here? Well, if you program the dealer to know the hand of each client, you are violating REST, since then the server is actually controlling the state of the app for each user. The idea is that the client controls its app state, not the server. So, even when the client wins the hand, it will have to show it to the dealer, saying "I won". And the dealer will give me the lollipop gift. And after that the dealer will wait again for another request. 
Each request will contain all the information needed to be totally completed in one interaction. The sum of all interactions is your final goal, the application. Also note the resource can be changed in the process, but that is not an app change.

Why is all that setup good? Well, you can have thousands of clients without impacting the servers, since they do not keep track of the hands. You can add or remove dealers with no impact on the clients nor on the application states. You keep the resource implementation hidden. And you then have a nicely distributed game, scalable and simple.

Hope this helps!

William Martinez.

--- In rest-discuss@yahoogroups.com, Mark Baker <distobj@...> wrote: > > On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal > <kristian.nordal@...> wrote: > > I'm also struggling with the difference between application state and > > server state (which I assume is the same as "resource state"). Can > > someone point me to a good definition of "application state"? > > It's literally the *state* of the *application*. If you're looking at > your bank balance, that's a different state than if you were preparing > to submit a bill payment, and once you've submitted the payment, > you're in yet another state in the application state machine. > > Mark. >
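William's blackjack analogy can be rendered as a toy stateless dealer: every request carries the client's whole hand, so any dealer can serve it and none keeps per-client state. The names and rules below are simplified inventions for illustration.

```python
import random

def deal(request):
    """Stateless dealer: everything needed is inside the request itself."""
    hand = list(request["hand"])        # client-supplied application state
    hand.append(random.randint(1, 10))  # draw from the shared shoe (resource state)
    return {"hand": hand, "won": sum(hand) == 21}

# The client threads its own application state through each request,
# so successive requests could hit entirely different dealers.
response = deal({"hand": []})
response = deal({"hand": response["hand"]})
```

Program the dealer to remember each client's hand instead, and you have the REST violation William describes: the server, not the client, now holds the application state.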
Eric, don't take it the wrong way, but I read your post and sincerely thought "what does this have to do with anything?" Take this for example:

> In a RESTful system, once a user has completed their series of > application interactions, a history of the steady-states is contained > in the browser history.

What browser (and what user)? We have an (almost) RESTful infrastructure that we use to put our different software modules in communication with each other, sometimes using HTTP, other times JMS or IMAP... And the rest of your post is similar to this quote, and I don't understand what it has to do with "application" and "application state" in the realm of a RESTful-based system (not an HTTP-based system). There are things that we can extrapolate from HTTP to a more general level in order to fit other protocols, like we did with some HTTP headers that we use generically, but not concepts like "browser history".

So maybe I'm contradicting myself regarding the conceptual/practical dichotomy I referred to in another post, but concepts like "application" and "application state" have to be formally defined at the most abstract level possible, so they can be applied on the ground. (And I'm not saying they are not defined, for sure they are; I'm referring to the formal enunciation of the definition, but that is another issue.)
2009/10/4 Bill Burke <bburke@...>

> Please point out exactly what is transient in the example? The > reservation? No. It doesn't go away as it's a record of the purchase. > The transactional-state of the reservation? No, because really it is > just a representation of the "fulfilled" or "unfulfilled" state.

"fulfilled" or "unfulfilled" are states of the reservation, not of an eventual "transaction resource", which should not be a resource because it's not an "entity", a "subject"; you're only using it as a crutch for the reservation resource.

> You could say that the transaction resource itself is transient as it was > only used by the client to fulfill a greater task: both an airline and > hotel reservation. But, what it turns into is a record of the entire > transaction with the Travel Agent. For example, what if a law > enforcement agency were investigating a crime? They would follow the links: > > ticket -> reservation > reservation -> transaction > transaction -> transaction-participants > transaction-participants -> hotel reservation > hotel-reservation -> room > room -> arrest. >

What's wrong with

ticket -> reservation
reservation -> hotel reservation
hotel-reservation -> room
room -> arrest.

or, more accurately,

ticket -> reservation
reservation -> [flight reservation, hotel reservation]
hotel-reservation -> room
room -> arrest.
Thanks Ian. That answers my question. Just to add to what you said, it is imperative for the server to keep its concepts of state opaque to the client.

Subbu

On Oct 3, 2009, at 10:32 PM, Ian wrote: > Hi Subbu > > Yes, I think most of this is opaque to clients. Perhaps I implied > otherwise when I suggested clients might "infer" application state > from received representations: I don't in fact think that's > necessary or desirable. It's simpler than that. Clients are > interested in achieving particular goals, and they evaluate received > representations of resource state in light of those goals; that is, > they choose to operate hypermedia controls - links or forms - based > on their understanding of how the control's semantic context (i.e. > link relation value) relates to their current goal. In all this, the > client need not necessarily know it's participating in a particular > protocol, or be aware of the overall state of the distributed > application. > > Is that in line with what you meant by this being opaque to the > client? > > ian > > --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> > wrote: >> >> Hi Ian, >> >> That is an excellent description of state from the server's point of >> view. However, isn't all this opaque for the client? >> >> Subbu >> >> On Oct 3, 2009, at 6:16 PM, Ian wrote: >> >>> Hi Christian >>> >>> The state of an order - whether it has zero line items, or five, is >>> resource state, not application state. The state of the order as >>> held in the http session in your example is resource state, not >>> application state. >>> >>> A simple - perhaps overly simple - ordering protocol might be >>> something like: new order created -> adding line items -> order >>> completed -> payment received -> order dispatched. >>> >>> In the observable interactions between client and server, this >>> protocol is never visible "as such": it can only be viewed through >>> the lens of resource state. 
>>> >>> Over the course of a series of interactions, the "application" (the >>> game being played out between the client and the server) will be in >>> one or other of these states - as viewed from a "God's eye" point of >>> view. Once the application state has progressed to "order >>> completed", for example, it's no longer possible to manipulate >>> resources so as to add new line items; it is, however, possible to >>> manipulate resources such that the application state transitions to >>> "payment received" (the client would do this by submitting a >>> representation of a payment, perhaps). >>> >>> The client and the server cooperate to execute this protocol, but >>> they do so by transferring representations of resource state, not >>> representations of application state. Application state is never >>> represented "as such"; rather, it's inferred by the client based on >>> current representations of resource state. If the application is >>> in the "order completed" state, the representation of the order >>> received by the client may very well include a link that has been >>> annotated with the link relation value "payment". This isn't a >>> straightforward representation of application state, however: it's >>> an "invitation" to the client to transfer a representation of a >>> payment to this linked resource. As a side-effect of transferring >>> this representation, the "application" may transition to "payment >>> received". >>> >>> What's important here is that the server is really only interested >>> in maintaining resource state, which includes maintaining the >>> integrity of the lifecycles of the resources under its control, and >>> the invariants that hold between resources (if any). The server >>> can't be sure the client will ever take that step of submitting a >>> payment, so why bother holding onto application state?
Application >>> state is something that can be reconstructed "after the fact", by a >>> client, or omniscient observer, based on the disposition of the >>> current set of resource representations. >>> >>> So the order representation is always a representation of resource >>> state. Application state, that "snapshot of the instance of >>> execution of a protocol", can only be inferred or reconstructed from >>> resource state. >>> >>> Hope this is of some help. Apologies if I've confused more than >>> clarified; double apologies if I'm just talking plain nonsense. >>> >>> ian >>> > > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
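Ian's description of a client operating hypermedia controls by matching link relations against its current goal might be sketched like this. The representation layout and relation names below are invented for illustration.

```python
# A representation of resource state, as received by the client. The
# "payment" link is Ian's "invitation": present only because the order
# has reached a state where paying is a legitimate next transition.
representation = {
    "order_status": "completed",
    "links": [
        {"rel": "self",    "href": "/orders/7"},
        {"rel": "payment", "href": "/orders/7/payment"},
    ],
}

def choose(representation, goal_rel):
    """Pick the control whose link relation matches the client's goal."""
    for link in representation["links"]:
        if link["rel"] == goal_rel:
            return link["href"]
    return None  # goal not reachable from this state

next_uri = choose(representation, "payment")   # client goal: pay
blocked = choose(representation, "add-item")   # no longer offered
```

The client never sees "application state" as such; it only scans the current representation for a control matching its goal, which is exactly the opacity Subbu and Ian agree on.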
My understanding is that, in 2PC, the "transaction context" is transient and represents the state of the transaction. In order to manage this context, the coordinator associates it with writes done by each client. In a sense, this context is a sum of client state. Further, my understanding is that the most efficient way to manage this transaction context is by keeping the client-server protocol "connection oriented". So, when you implement 2PC over a connectionless protocol, how is that context managed, other than by treating it as resource state?

Subbu

On Oct 4, 2009, at 1:01 AM, Bill Burke wrote: > > > Subbu Allamaraju wrote: >> On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: >>> Yeah, somebody will have to explain to me why (or if) the >>> Reservation >>> example I gave breaks the stateless constraint of REST. Where I >>> think >> Well - under the disguise of a "transaction", the server is >> maintaining per-client state. > > There is no per-client state. A Reservation is interesting to a > Travel Agent, a Customer, and to an Airline. A credit or debit is > interesting to a Credit Card Account and to the Merchant (and to > Visa and Mastercard). Again, "fulfilled" for a reservation and > "posted/settled" for a credit or debit are valid non-session-based > states. The fact that these states have a different representation > (a tx-document) shouldn't matter. > > > >> Instead of answering your question directly, let me ask you >> whether you have examined the scalability characteristics of your >> proposed design. > > Integration scenarios many times require coordination between many > actors. It should be irrelevant if the client delegates this > coordination to a different service. All a transaction manager does > is guarantee that something happens, which is hard to implement > on a per-application basis. This is why transaction managers > exist. 
> > >> It may be worthwhile to start from basics, apply each constraint >> one by one, and see whether your approach benefits. The discussion >> around the kind of resources needed, media types, link rels, link >> headers vs link elements in some XML format are implementation >> details. So far, this post does not make a case for why a >> transactional application should be built the way you propose. > > Well, this approach is interesting because the actors being coordinated > can negotiate with the transaction manager on the exact protocol. > For example, the reservation resource is posted with a "transaction" > link. The reservation service can GET that link with an Accept > header of the preferred transaction formats it desires to interact > with. If the reservation service does not know how to interact with > the transaction representation, it can barf at reservation creation. > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com
--- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > wahbedahbe wrote: > > > > Well, that doesn't mean that a "form-like-thing" couldn't work for > > machine driven clients. > > > > I wasn't saying it was good or bad or needed or not needed. I was just > saying that "self description" metadata isn't going to be used by a > large set of clients. >

I would argue that if a client doesn't need this "metadata" then it's not a RESTful system. This would imply that the client was too closely bound to the URI structure of the server and/or server-specific data types. Some sort of metadata about the link (if it's not a straight GET/PUT/DELETE of the verbatim link) is necessary to map information from the client's domain to the server's interface.

Specifically, on your point earlier in the thread:

>My point was that transmitting a "form" (quotes) is usually not useful >to a machine-based client as the "discovery" phase happened when the >programmer coded the client.

This goes against the principles of REST. There is no service discovery phase that occurs during coding. The client is coded against the uniform interface, e.g. URI + HTTP + Hypermedia Format (+ Link Relations). A web browser works with any HTTP server that serves back HTML. A CCXML client (again, which is machine driven) works with any HTTP server that serves back CCXML -- the server could be implementing a conferencing service, or a service to locate the callee on multiple phones, or whatever. The CCXML client doesn't care, it just executes the CCXML, just as a web browser just executes the HTML.

There are no "server-specific form types" in the uniform interface, or it wouldn't be uniform. The hypermedia format is not specific to a service. If anything, it's specific to a type of client. HTML is specific to (primarily visual) data presentation & interaction clients. VoiceXML is specific to aural data presentation & interaction clients. CCXML is specific to call control clients. 
Any of these hypermedia clients could interact with any number of services that return data in the client's hypermedia format. A single service, via conneg, can interact with various types of client. The client and server are not coupled to each other at all. IMO, the most common mistake is to design a format specifically for your service -- that's not RESTful as it always results in coupling. The trick is designing a hypermedia format for the client domain you wish to target (if no such format already exists -- if one exists, you should just use it). A client that is coded to work with that format is not bound to any one service. Regards, Andrew
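Andrew's point - a client coded against a hypermedia format rather than against any one service - can be sketched like this. The JSON link convention and the rel/URI values below are invented for illustration; the point is that one generic function works against representations from unrelated services:

```python
# A minimal sketch of a format-generic client. It knows the (invented)
# link convention {"links": [{"rel": ..., "href": ...}]}, but nothing
# about any particular service.

def follow_rel(representation, rel):
    """Return the href for a given link relation, or None if absent."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

# Two unrelated services emitting the same format, one client function:
conference = {"links": [{"rel": "join", "href": "/conf/42/join"}]}
order = {"links": [{"rel": "payment", "href": "/orders/7/payment"}]}

follow_rel(conference, "join")   # works for the conferencing service
follow_rel(order, "payment")     # and equally for the ordering service
```

Nothing in `follow_rel` couples the client to either server's URI structure; only the format and the link relation vocabulary are shared.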
Thank you for this description. This has confirmed my initial understanding of the difference between application and resource state. Regards, Bediako On Sat, Oct 3, 2009 at 9:16 AM, Ian <iansrobinson@...> wrote: > > > Hi Christian > > The state of an order - whether it has zero line items, or five, is > resource state, not application state. The state of the order as held in the > http session in your example is resource state, not application state. > > A simple - perhaps overly simple - ordering protocol might be something > like: new order created -> adding line items -> order completed -> payment > received -> order dispatched. > > In the observable interactions between client and server, this protocol is > never visible "as such": it can only be viewed through the lens of resource > state. > > Over the course of a series of interactions, the "application" (the game > being played out between the client and the server) will be in one or other > of these states - as viewed from a "God's eye" point of view. Once the > application state has progressed to "order completed", for example, it's no > longer possible to manipulate resources so as to add new line items; it > is, however, possible to manipulate resources such that the application > state transitions to "payment received" (the client would do this by > submitting a representation of a payment, perhaps). > > The client and the server cooperate to execute this protocol, but they do > so by transferring representations of resource state, not representations of > application state. Application state is never represented "as such"; rather, > it's inferred by the client based on current representations of resource > state. If the application is in the "order completed" state, the > representation of the order received by the client may very well include a > link that has been annotated with the link relation value "payment". 
This > isn't a straightforward representation of application state, however: it's > an "invitation" to the client to transfer a representation of a payment to > this linked resource. As a side-effect of transferring this representation, > the "application" may transition to "payment received". > > What's important here is that the server is really only interested in > maintaining resource state, which includes maintaining the integrity of the > lifecycles of the resources under its control, and the invariants that hold > between resources (if any). The server can't be sure the client will ever > take that step of submitting a payment, so why bother holding onto > application state? Application state is something that can be reconstructed > "after the fact", by a client, or omniscient observer, based on the > disposition of the current set of resource representations. > > So the order representation is always a representation of resource state. > Application state, that "snapshot of the instance of execution of a > protocol", can only be inferred or reconstructed from resource state. > > Hope this is of some help. Apologies if I've confused more than clarified; > double apologies if I'm just talking plain nonsense. > > ian > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > Kristian Nordal <kristian.nordal@...> wrote: > > > > > > On Oct 2, 2009, at 10:55 PM, Ian wrote: > > > > > > > > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > Mark Baker <distobj@> wrote: > > >> > > >> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal > > >> <kristian.nordal@> wrote: > > >>> I'm also struggling with the difference between application state > > >>> and > > >>> server state (which I assume is the same as "resource state"). Can > > >>> someone point me to a good definition of "application state"? > > >> > > >> It's literally the *state* of the *application*. 
If you're looking > > >> at > > >> your bank balance, that's a different state than if you were > > >> preparing > > >> to submit a bill payment, and once you've submitted the payment, > > >> you're in yet another state in the application state machine. > > >> > > >> Mark. > > >> > > > > > > Just to add to Mark's definition, and put it in the context of > > > "application" and "application protocol": if we think of an > > > application as being computer behavior that achieves a particular > > > goal, we can describe an application protocol as the specification > > > of the legitimate interactions necessary to realize that behavior, > > > and application state as a snapshot of the instance of execution of > > > an application protocol. > > > > Thanks for the definitions. I'm still a bit confused though, so I'm > > going to try to use an example: > > > > Let's say we have a client/ua that is filling out an order (order + > > line items). In a traditional web application, the order would be in > > the http session, and we would add/remove line items to that order, > > and finally place the order. In that case I clearly see that we are > > talking about application state that is placed on the server. The > > server keeps track of it, and it's literally the state of the client/ > > application. > > > > But if we were to store and address the order like any other resource, > > would that change the nature of the state? It would simply be another > > way of storing the same state, but nevertheless it would be > > "resources" with the same properties induced by the stateless > > constraint (visibility, reliability, and scalability) - given that they > > were stored in a way that makes that possible. To me, this looks > > like exactly the same kind of state (application state), simply stored/ > > modeled differently. But in that case I don't see how or if it > > violates the stateless constraint. 
> > > > Would you say that the order in this example is always a "snapshot of > > the instance of execution of an application protocol", and that it will > > always be application state - no matter how it's modeled? And by > > placing it on the server would it be in violation of the REST > > principles, even though the stateless constraint is dealt with? > > > > -- > > Thanks, > > Kristian > > > > > -- Bediako George Partner - Lucid Technics, LLC Think Clearly, Think Lucid www.lucidtechnics.com (p) 202.683.7486 (f) 703.563.6279
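Ian's ordering example can be sketched as a server that derives its hypermedia controls from resource state alone. The states and the "payment" link relation follow Ian's protocol; the URIs, the control table, and the code itself are illustrative, not any real implementation:

```python
# Sketch: the server stores only resource state. The controls it emits
# are computed from that state on each request; "application state" is
# never stored as such, only inferable from the representation.

CONTROLS = {
    "created":           ["add-line-item", "complete"],
    "adding-line-items": ["add-line-item", "complete"],
    "completed":         ["payment"],
    "payment-received":  [],   # awaiting dispatch; no client action offered
    "dispatched":        [],
}

def represent(order):
    """Build a representation of an order from its resource state alone."""
    state = order["state"]
    return {
        "state": state,
        "links": [{"rel": rel, "href": f"/orders/{order['id']}/{rel}"}
                  for rel in CONTROLS[state]],
    }

rep = represent({"id": 7, "state": "completed"})
# Once the order is completed, the only control offered is "payment" -
# the "invitation" Ian describes, not a stored application state.
```

The client seeing a "payment" link can infer the protocol has reached "order completed", but the server never recorded that fact anywhere.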
Wonder if, referring to an analogous state machine, one could summarise as follows. The client's view of application state is always the set of available transitions from an implicit, inferrable state. By conveying the appropriate transitions, the server has made it unnecessary to track client state at its end, even though both are working off of some ephemeral yet very relevant application state. Thus at the end of each transition, the client only knows the available transitions - which the server served. Once a transition is complete, the state itself, while inferrable, is irrelevant to what the client needs to do next and to how the server will satisfy the client's future exercised preferences. On Sun, Oct 4, 2009 at 9:09 AM, Subbu Allamaraju <subbu@...> wrote: > > > Thanks Ian. That answers my question. > > Just to add to what you said, it is imperative for the server to keep its > concepts of state opaque from the client. > > Subbu > > > On Oct 3, 2009, at 10:32 PM, Ian wrote: > > > Hi Subbu > > > > Yes, I think most of this is opaque to clients. Perhaps I implied > > otherwise when I suggested clients might "infer" application state > > from received representations: I don't in fact think that's > > necessary or desirable. It's simpler than that. Clients are > > interested in achieving particular goals, and they evaluate received > > representations of resource state in light of those goals; that is, > > they choose to operate hypermedia controls - links or forms - based > > on their understanding of how the control's semantic context (i.e. > > link relation value) relates to their current goal. In all this, the > > client need not necessarily know it's participating in a particular > > protocol, or be aware of the overall state of the distributed > > application. > > > > Is that in line with what you meant by this being opaque to the > > client? 
> > > > ian > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, Subbu Allamaraju <subbu@...> > > wrote: > >> > >> Hi Ian, > >> > >> That is an excellent description of state from the server's point of > >> view. However, isn't all this opaque for the client? > >> > >> Subbu > >> > >> On Oct 3, 2009, at 6:16 PM, Ian wrote: > >> > >>> Hi Christian > >>> > >>> [...] > >>> > >>> ian > >>> > -- > -------------------------------------------------------- > blog: http://blog.dhananjaynene.com > twitter: http://twitter.com/dnene http://twitter.com/_pythonic
Hello guys, Ian, Subbu, Dhananjay. One important thing we must not forget is that, being in a distributed context, the "server", as the performer of some services against resources, may change between each client interaction. THAT is why the app state is held in the client, and no server has to keep any. Still, based on the example, we are clear that the state "on the server side" is actually the resource state, not a state "stored on the server". Even more: the resource's state graph may indicate restrictions between states, actions and trigger events. The idea of the client inferring the next step given the actual state of the resource can also be ported to the server! That is, if a client requests an illegal action for the current state (adding a line to a closed order), the server may first check the resource state and send an error back to the client. But it is clear that the server is not keeping the client state internally; it is just responding to the request in that particular moment, thus allowing us to scale nicely. Cheers. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Dhananjay Nene <dhananjay.nene@...> wrote: > > Wonder if referring to an analogous state machine, one could summarise as > follows. The client's view of an application state is always a set of > available transitions from an implicit inferrable state, and the server > having conveyed the appropriate transitions has made unnecessary the need to > track the client state at its end even though both are working off of some > ephemeral yet very relevant application state. Thus at the end of each > transiton, the client only knows the available transitions - which the > server served. The state itself while inferrable is irrelevant in the > context of what the client needs to do next and how the server will satisfy > the client's future exercised preferences, - once a transition is complete. 
> > On Sun, Oct 4, 2009 at 9:09 AM, Subbu Allamaraju <subbu@...> wrote: > > > > > > > Thanks Ian. That answers my question. > > > > Just to add what you said, it is imperative for the server keep its > > concepts of state opaque from the client. > > > > Subbu > > > > > > On Oct 3, 2009, at 10:32 PM, Ian wrote: > > > > > Hi Subbu > > > > > > [...] > > > > > > ian > > > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > > Subbu Allamaraju <subbu@> > > > wrote: > > >> > > >> [...] > > >> > > > -- > -------------------------------------------------------- > blog: http://blog.dhananjaynene.com > twitter: http://twitter.com/dnene http://twitter.com/_pythonic >
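William's observation - that the server checks the resource's current state on each request rather than tracking client state - might look roughly like this. The state names reuse Ian's ordering protocol; the legal-action table and the HTTP status mapping are illustrative assumptions:

```python
# Sketch: a stateless check per request. The server consults only the
# resource's current state and the protocol rules; it keeps no record
# of what any particular client has done, so any server instance can
# answer (which is what lets it scale nicely).

LEGAL = {
    "created":          {"add-line-item", "complete"},
    "completed":        {"payment"},
    "payment-received": {"dispatch"},
}

def handle(order_state, action):
    """Return an HTTP-ish status: 200 if the action is legal now, 409 otherwise."""
    if action in LEGAL.get(order_state, set()):
        return 200
    return 409  # Conflict: out of place for the resource's current state

handle("completed", "payment")        # legal at this point in the protocol
handle("completed", "add-line-item")  # the order is closed: rejected
```

Note that the 409 is derived fresh on every request; nothing about the rejected client is remembered afterwards.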
Sorry! I forgot to mention Dhananjay's point: The resource is "owned" by the server space (not by a particular server), and that resource, modeled as a state machine (which would be metadata), may change its rules at any time. Thus, it makes perfect sense what Dhananjay mentions about a set of particular "legal" transitions at a given point in time (or state). That set of transitions is the one offered by the server to the client (that list of URLs and actions). Still, that list is inferred by the server at hand from the resource's actual state and metadata, and sent back to the client. It is never maintained as state in one particular server. In that way, if the rules change, the next server that receives a request for legal actions will build the list using the new rules, and the client will have an automatic, on-the-spot application update. That is why knowing the operations out of band is not good for app maintainability. And of course, the server may need those rules too, to avoid crazy, out-of-place clients trying to post illegal actions. Cheers again! William Martinez Pomares --- In rest-discuss@yahoogroups.com, Dhananjay Nene <dhananjay.nene@...> wrote: > > Wonder if referring to an analogous state machine, one could summarise as > follows. The client's view of an application state is always a set of > available transitions from an implicit inferrable state, and the server > having conveyed the appropriate transitions has made unnecessary the need to > track the client state at its end even though both are working off of some > ephemeral yet very relevant application state. Thus at the end of each > transiton, the client only knows the available transitions - which the > server served. The state itself while inferrable is irrelevant in the > context of what the client needs to do next and how the server will satisfy > the client's future exercised preferences, - once a transition is complete. 
> > On Sun, Oct 4, 2009 at 9:09 AM, Subbu Allamaraju <subbu@...> wrote: > > > > > > > Thanks Ian. That answers my question. > > > > Just to add what you said, it is imperative for the server keep its > > concepts of state opaque from the client. > > > > Subbu > > > > > > On Oct 3, 2009, at 10:32 PM, Ian wrote: > > > > > Hi Subbu > > > > > > [...] > > > > > > ian > > > > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > > Subbu Allamaraju <subbu@> > > > wrote: > > >> > > >> [...] > > >> > > > -- > -------------------------------------------------------- > blog: http://blog.dhananjaynene.com > twitter: http://twitter.com/dnene http://twitter.com/_pythonic >
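William's second point - that the transition rules are metadata shared by the server space, so any server instance rebuilds the offered links from the current rules rather than holding them as state - can be sketched as follows. All names here are invented for illustration:

```python
# Sketch: the legal-transition table is data ("metadata"), not state held
# by one server. Change the rules, and the next representation built by
# any server reflects them; clients get an automatic, on-the-spot
# application update on their next GET.

rules = {"completed": ["payment"]}

def links_for(state):
    """Any server instance derives the offered links from rules + state."""
    return list(rules.get(state, []))

before = links_for("completed")             # only "payment" is offered
rules["completed"] = ["payment", "cancel"]  # a business rule changes
after = links_for("completed")              # clients now see "cancel" too
```

Had the operations been known out of band instead, every client would need redeploying when the rules changed, which is William's maintainability point.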
Not necessarily: "Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s)." So, it's up to the client to decide if it should follow the link or not. All applications have at least one or a few well-known URIs, so a client should be capable of deciding "let's follow this redirect link" or "let's assume there's nothing more to do here and go back to the entry point of this app" - for instance, a menu page. Now, I noticed the surge on this thread of the term "steady-state", and since I only remembered that term vaguely, I went to "the" dissertation looking for it. I found it three times, all in the same section, in two consecutive paragraphs that talk about performance and optimization. I'm not going to quote the over-quoted sentence "premature optimization is the root of all evil" (damn, I just did), which is usually quoted out of context, but I think it's applicable here. It seems to me that from the point of view of "application development" - which is my area of interest - there is no distinction at all between "application state" and "application steady-state". That distinction is only interesting from the point of view of network architecture - obviously coupled with the applications that run on it. So that distinction should only be made at a late stage of application development, when all the functionalities of said application are in place and testable. There is no point in distinguishing them during application design and development. So I ask, why the surge of this term now? Did we shift from talking about application development to network architecture, are we mixing both, or what is the purpose of explicitly changing from "application state" to "application steady-state"? Or did I misunderstand the meaning of "steady-state"? Eric J. 
Bowman wrote: > > > Jan Algermissen wrote: > > > > > Suppose client retrieves an employee record > > > > GET /employees/552 > > > > and then changes the record's surname with > > > > POST /employees/552/surname > > > > [new name] > > > > and then receives > > > > 303 See Other > > Location: /employees/552 > > > > Is the client in a steady state now, or only after a subsequent GET > > to /employees/552 to update the changed record representation? > > > > Only after. A redirect is not a steady-state containing a hypermedia > representation presenting the user with a selection of further state > transitions to choose from. > > -Eric > > > > > Or does that question not make any sense in the absence of > > additional semantics beyond the HTTP specs? > > > > Jan > > > > > > On Oct 3, 2009, at 6:16 PM, Ian wrote: > > > > > Hi Christian > > > > > > [...] > > > > > > ian > > > > > > -------------------------------------- > > Jan Algermissen > > > > Mail: algermissen@... 
<mailto:algermissen%40acm.org> > > Blog: http://algermissen.blogspot.com/ > <http://algermissen.blogspot.com/> > > Home: http://www.jalgermissen.com <http://www.jalgermissen.com> > > -------------------------------------- > > > > > > > > > >
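Eric's answer to Jan can be made concrete in a few lines. The sketch below is illustrative only: the `Response` type, the in-memory `store`, and `fake_get` are invented stand-ins simulating the POST-then-303-then-GET exchange Jan described, not any real HTTP library.

```python
# Hypothetical sketch: a 303 See Other is not a steady state; the client
# reaches one only after following Location with a GET and receiving a
# hypermedia representation.

from dataclasses import dataclass, field

@dataclass
class Response:
    status: int
    headers: dict = field(default_factory=dict)
    body: str = ""

def is_steady_state(response):
    # A steady state is a representation offering further transitions,
    # not a redirect instructing the client to make another request.
    return 200 <= response.status < 300 and bool(response.body)

def follow_until_steady(get, first):
    # Keep dereferencing Location until a representation arrives.
    response = first
    while response.status == 303:
        response = get(response.headers["Location"])
    return response

# Simulated server: POST /employees/552/surname was answered with a 303;
# GET /employees/552 returns the updated record.
store = {"/employees/552": "<employee><surname>Smith</surname></employee>"}

def fake_get(uri):
    return Response(200, body=store[uri])

after_post = Response(303, headers={"Location": "/employees/552"})
final = follow_until_steady(fake_get, after_post)
```

In this toy model the client is in a steady state only at `final`, matching Eric's "only after" answer.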
Eric J. Bowman wrote: > António Mota wrote: > > >> I think REST lacks lots of *formal* definitions and one of those is >> the formal definition of "application". >> >> > > "Application" is clearly defined in REST, sec. 5.3.3: > > "Since REST is specifically targeted at distributed information systems, > it views an application as a cohesive structure of information and > control alternatives through which a user can perform a desired task. > For example, looking-up a word in an on-line dictionary is one > application, as is touring through a virtual museum, or reviewing a set > of class notes to study for an exam." > > The simplest REST application is "following a link". After the steady-state is reached, the client has all the information it needs to allow > the user (human or machine) to choose the next steady-state, in pursuit > of their overall goal, whether that goal is reading an article or > paying for the items in a shopping cart. > > -Eric > That is "the" definition, but when I talk about a "formal" definition I mean not having to "dive" into a philosophical dissertation to find it, but being able to go to Google and search for something like "rest definitions" or "rest dictionary" (although "rest" is a very difficult word to search for in this context). Or even better, knowing there is a "cool uri" that is the entry point of a nice REST application whose purpose is to check for "formal definitions of rest", just use it. BTW, "following a link" may or may not be an application depending on what it does in practice, whether it serves some purpose of the user or not. Otherwise, "following a link" is only a REST application if the purpose of that application is to follow a link. Bottom line, what I was trying to say is that the REST community lacks a "formal" place in which to compile definitions and, in the case of missing or ambiguous ones, to define them in a "formal" way, one that could serve as the "formal" authoritative site for REST definitions. 
That should be, of course, community-driven and at least not have the opposition of Roy Fielding, as most of the definitions will be copy&paste of content of his thesis. Maybe we should revive the body@rest thread?
Hi Ian, Thanks for this great description, it helped a lot. -- Thanks, Kristian On Oct 3, 2009, at 6:16 PM, Ian wrote: > Hi Christian > > The state of an order - whether it has zero line items, or five, is > resource state, not application state. The state of the order as > held in the http session in your example is resource state, not > application state. > > A simple - perhaps overly simple - ordering protocol might be > something like: new order created -> adding line items -> order > completed -> payment received -> order dispatched. > > In the observable interactions between client and server, this > protocol is never visible "as such": it can only be viewed through > the lens of resource state. > > Over the course of a series of interactions, the "application" (the > game being played out between the client and the server) will be in > one or other of these states - as viewed from a "God's eye" point of > view. Once the application state has progressed to "order > completed", for example, it's no longer possible to add manipulate > resources so as to add new line items; it is, however, possible to > manipulate resources such that the application state transitions to > "payment received" (the client would do this by submitting a > representation of a payment, perhaps). > > The client and the server cooperate to execute this protocol, but > they do so by transferring representations of resource state, not > representations of application state. Application state is never > represented "as such"; rather, it's inferred by the client based on > on current representations of resource state. If the application is > in the "order completed" state, the representation of the order > received by the client may very well include a link that has been > annotated with the link relation value "payment". 
This isn't a > straightforward representation of application state, however: it's > an "invitation" to the client to transfer a representation of a > payment to this linked resource. As a side-effect of transferring > this representation, the "application" may transition to "payment > received". > > What's important here is that the server is really only interested > in maintaining resource state, which includes maintaining the > integrity of the lifecycles of the resources under its control, and > the invariants that hold between resources (if any). The server > can't be sure the client will ever take that step of submitting a > payment, so why bother holding onto application state? Application > state is something that can be reconstructed "after the fact", by a > client, or omniscient observer, based on the disposition of the > current set of resource representations. > > So the order representation is always a representation of resource > state. Application state, that "snapshot of the instance of > execution of a protocol", can only be inferred or reconstructed from > resource state. > > Hope this is of some help. Apologies if I've confused more than > clarified; double apologies if I'm just talking plain nonsense. > > ian > > --- In rest-discuss@yahoogroups.com, Kristian Nordal > <kristian.nordal@...> wrote: >> >> >> On Oct 2, 2009, at 10:55 PM, Ian wrote: >> >>> >>> >>> --- In rest-discuss@yahoogroups.com, Mark Baker <distobj@> wrote: >>>> >>>> On Fri, Oct 2, 2009 at 4:56 AM, Kristian Nordal >>>> <kristian.nordal@> wrote: >>>>> I'm also struggling with the difference between application state >>>>> and >>>>> server state (which I assume is the same as "resource state"). Can >>>>> someone point me to a good definition of "application state"? >>>> >>>> It's literally the *state* of the *application*. 
If you're looking >>>> at >>>> your bank balance, that's a different state than if you were >>>> preparing >>>> to submit a bill payment, and once you've submitted the payment, >>>> you're in yet another state in the application state machine. >>>> >>>> Mark. >>>> >>> >>> Just to add to Mark's definition, and put it in the context of >>> "application" and "application protocol": if we think of an >>> application as being computer behavior that achieves a particular >>> goal, we can describe an application protocol as the specification >>> of the legitimate interactions necessary to realize that behavior, >>> and application state as a snapshot of the instance of execution of >>> an application protocol. >> >> Thanks for the definitions. I'm still a bit confused though, so I'm >> going to try to use an example: >> >> Let's say we have an client/ua that is filling out an order (order + >> line items). In a traditional web application, the order would be in >> the http session, and we would add/remove line items to that order, >> and finally place the order. In that case I clearly see that we are >> talking about application state that is placed on the server. The >> server keeps track of it, and it's literally the state of the client/ >> application. >> >> But if we were to store and address the order like any other >> resource, >> would that change the nature of the state? It would simply be another >> way of storing the same state, but nevertheless it would be >> "resources" with the same properties induced by the stateless >> constraint (visibility, reliability, and salability) - given that >> they >> were stored in the a way that make that possible. To me, this looks >> like exactly the same kind of state (application state), simply >> stored/ >> modeled differently. But in that case I don't see how or if it >> violates the stateless constraint. 
>> >> Would you say that the order in this example is always a "snapshot of >> the instance execution of an application protocol", and that it will >> always be application state - no matter how it's modeled? And by >> placing it on the server it would be in violation of the REST >> principles, even though the stateless constraint is dealt with? >> >> -- >> Thanks, >> Kristian >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
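Ian's point that application state is inferred from resource state, never transferred "as such", can be sketched as a tiny client-side function. The link-relation names and the inference table below are assumptions made up for illustration, not part of any real protocol.

```python
# Hypothetical sketch: the client reads the link relations in the current
# order representation and deduces where in the ordering protocol it stands.

def infer_application_state(order):
    rels = {link["rel"] for link in order.get("links", [])}
    if "add-line-item" in rels:
        return "adding line items"
    if "payment" in rels:
        # The "payment" link is an invitation to transfer a payment
        # representation, so the order must be completed but unpaid.
        return "order completed"
    if "dispatch-status" in rels:
        return "payment received"
    return "unknown"

open_order = {"links": [{"rel": "add-line-item", "href": "/orders/7/items"}]}
completed_order = {"links": [{"rel": "payment", "href": "/orders/7/payment"}]}
```

Nothing in either representation names the protocol state; the client reconstructs it from the links alone.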
Hello Bill. I would not tell you you are breaking REST constraints, but I would like to check on a few points. 1. The idea of the stateless server is an easy one. As a distributed system, you may have one or more servers that should be able to respond to any client. The request from that client may be the first one of an application sequence, or the 10th; you have to serve it with no memory of the other 9. In this way, you can add or remove servers with no problem for clients. 2. Now, resources are in the cloud. That means we cannot assume a resource is IN one particular server. Any server we contact will have to use the same resource. This is a very difficult part of the implementation, since you may need to implement duplication of information and failovers. But that is totally unknown to the client. 3. Now, on to two-phase commit in particular. For this, I assume there are two different data sources, and you can commit to each one separately, but when a transaction involves individual transactions at each source, then you use the famous two-phase commit. Each source is a participant, right? 4. Ok, on to your proposal. One server dedicated to the management of transactions, given we need to send to it, manually, all the transaction steps and actions, may suffer some scalability problems. On the other hand you have the client that needs to do all that processing to commit the transaction. My feeling is that exposing the data entities as resources, and leaving to the client all the commit processing, is exposing too much application detail. It may not break REST, but it adds unnecessary complexity. Now, two-phase commit assumes we have two sources, and you depict them as the airline resource and the hotel room resource. It is then implied that both are like databases, or even separate database engines. And, your client will have to drive the transaction management to change data in both and then to commit. 
That is implicitly forcing the concept of a resource, but it still sounds like REST. So far, so good. Now, my question would be: do I need to do all that to actually reserve a package using REST? Well, to imagine how I would do it, I'd actually follow an online reservation workflow and see what happens: a. I enter and search for a flight. The system returns a list of flights and I select one. At this time a draft reservation is created with my flight in it. (Think a PUT of the empty reservation followed by a POST of the flight). b. Then the system offers to add a hotel reservation, and from the provided list I select one too. That is added to my draft reservation (another POST). c. Finally, I add my credit card information and post a confirmation (another POST). This last action is served by server number 5 of the 10 currently serving. That server 5 needs to complete the POST, and if unable, it will return an error to the client. Well, that server uses the draft reservation resource information to call a transaction manager to commit all changes. If it fails, server 5 returns the error. That is totally opaque to the client, which only confirms and receives a yes or no to that request. Depending on that response, the client retries, updates the selection of flights or hotel and confirms again, or even gives up and deletes the reservation. Simple, huh? The difference in this process is that the client is freed from knowing the transaction is happening. Resources are just that, not databases or tables that need transactions, and the client doesn't have to choose between single and two-phase commits. You can scale since you can change the number of servers or transaction managers without touching the client. AND, each client interaction leaves the system in a stable state. Actually, this can be RESTful too! So, if we can hide the complexity of the transaction, why do we need to expose that complexity to the client? I may do it if that brings some benefit. 
My question will then be, which benefits I will find from one implementation over the other, or why one of them is not suitable for some particular business case. Cheers. William Martinez Pomares. --- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > > > > Kristian Nordal wrote: > > Hi, > > > > On Oct 2, 2009, at 12:25 AM, Bediako George wrote: > > > >> > >> > >> Hullo Benjamin, > >> > >> I must admit I am having some trouble understanding the distinction > >> you make between server state and application state. In principle I > >> get the theoretical difference, but I think the examples you give > >> don't necessarily illustrate the point, and in one case confuse me. > > > > I'm also struggling with the difference between application state and > > server state (which I assume is the same as "resource state"). Can > > someone point me to a good definition of "application state"? > > > > Will some kinds of state never stop being "application state", no matter > > how or where it's stored? If I were to move for instance typical session > > state into its own resources, and treat those resources as any regular > > resource in my application - will those resources for some definitions > > of state still be application state (and a violation of the stateless > > constraint)? Or does the fact that I've re-modelled it as resources make > > it resource state? > > > > Yeah, somebody will have to explain to me why (or if) the Reservation > example I gave breaks the stateless constraint of REST. Where I think > it doesn't break the constraint is that instead of storing a specific > "view" of a resource for a specific client (like the Richardson/Ruby > O'Reilly book example on transactions), the state change is modeled as a > resource in and of itself. A Reservation still has a lot of meaning to > clients other than the Travel Agent. > > Also, whether or not the Reservation has been fulfilled is a valid state > of the resource. 
Just because I chose to model that state with a > specific media type (a generic transactional one) shouldn't matter IMO > as it's an implementation detail. > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
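William's workflow (steps a-c, with the transaction hidden behind the confirmation step) could look roughly like this. Every class, method, and status code is a hypothetical stand-in; in particular, `TransactionManager.commit` merely simulates the two-phase commit the real server would perform.

```python
# Hypothetical sketch: the client only PUTs a draft, POSTs selections, and
# POSTs a confirmation; coordination with a transaction manager stays
# behind the confirm step, opaque to the client.

class TransactionManager:
    def commit(self, items):
        # Stand-in for a real two-phase commit across flight and hotel
        # systems; here it succeeds only when both bookings are present.
        return "flight" in items and "hotel" in items

class ReservationService:
    def __init__(self, tm):
        self.tm = tm
        self.drafts = {}

    def put_draft(self, rid):              # step a: PUT the empty reservation
        self.drafts[rid] = {}

    def post_item(self, rid, kind, value): # steps a/b: POST flight, POST hotel
        self.drafts[rid][kind] = value

    def post_confirmation(self, rid):      # step c: POST the confirmation
        # The client just gets a yes (200) or no (409); whether a
        # transaction manager was involved is invisible to it.
        ok = self.tm.commit(self.drafts[rid])
        return 200 if ok else 409

service = ReservationService(TransactionManager())
service.put_draft("r1")
service.post_item("r1", "flight", "BA-117")
status_incomplete = service.post_confirmation("r1")  # no hotel yet
service.post_item("r1", "hotel", "room-42")
status_complete = service.post_confirmation("r1")
```

After a failed confirmation the client can retry, change its selections, or delete the draft; each interaction leaves the system in a stable state.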
I'm confused. Your site says: > When our first specifications get released we will be submitting > them to the IETF. This will allow our specifications to live within > a trusted organization with well defined rules of engagement. The latter sentence implies that you're wanting to work within the Internet Standards Process (RFC2026), which practically means a Working Group, but the former implies that you're going to bake the specs and throw them over the wall, which means Individual Submissions. Which is it? And, have you discussed this with anyone from the IETF? Cheers, On 19/09/2009, at 12:27 AM, Bill Burke wrote: > __Message Change__ > * It is now an open source project. > * We will be publishing the final content on IETF as a set of RFCs. > * We're still focusing on middleware and middleware services. > > "REST-* is an open source project dedicated to bringing the > architecture > of the web to traditional middleware services." > > "REST has a the potential to re-define how application developers > interact with traditional middleware services. The REST-* community > aims to re-examine which of these traditional services fits within the > REST model by defining new standards, guidelines, and specifications. > Where appropriate, any end product will be published at the IETF." > > __Governance changes__ > * No more trying to be a better JCP. We'll let the IETF RFC process > govern us when we're ready to submit something. > * An open source contributor agreement similar to what Apache, Eclipse > or JBoss has to protect users and contributors. > > (FYI we already required ASL, open source processes, NO-field-of-use > restrictions, etc...) > > If you have any other suggestions, let me know: > > http://www.jboss.org/reststar/community/gov2.html > > > __RESTful Interfaces for Un-RESTful Services__ > > Many traditional middleware services do not fit into the RESTful style > of development. An example is 2PC transaction management. 
Still, > these > services can benefit from having their distributed interface defined > RESTfully. The nomenclature will be RESTful Service vs. RESTful > Interface. > > * 2PC transactions would be considered a RESTful interface under > REST-*.org. Meaning using it makes your stuff less-RESTful, but at > least the service has a restful interface. > > * Messaging, compensations, and workflow services would be considered > "RESTful Services" that fit in the model. > > __GUIDELINES SECTION__ > > This is where I want to talk about how existing patterns, RFC's and > such > fit in with the rest of what we're doing. An example here could be > Security. What authentication models are good when? When should you > use OAuth and OpenID? How could something like OAuth interact with > middleware services? > > Some of this stuff is already up on the website. (You may have to > reload > it to see it due to cache-control policies.) > > Finally, apologies for the jboss.org redirection. It is a problem > with > our infrastructure. > > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > > > ------------------------------------ > > Yahoo! Groups Links > > > -- Mark Nottingham http://www.mnot.net/
Hi William > > One important thing we must not forget is that, being in a distributed context, the "server", as the performer of some services against resources, may change between each client interaction. > THAT is why the app state is held in the client, and no server has to keep any. > Good point - thanks. A distributed application is as distributed as it needs to be; it's not restricted to the interactions between a client and a single server. > > Even more: the resource state graph may indicate restrictions between states, actions and trigger events. The idea of the client inferring the next step given the actual state of the resource can also be ported to the server! That is, if a client requests an illegal action for the current state (adding a line to a closed order), the server may first check the resource state and send an error back to the client. But it is clear that the server is not keeping the client state internally, it is just responding to the request in that particular moment, thus allowing us to scale nicely. > This reminds me of a great piece in Duncan Cragg's REST Dialogues (http://duncan-cragg.org/blog/post/ws-are-you-sure-rest-dialogues/), where he says: "You [the client] start by declaring your intention that some state be true, which puts the system in tension - a tension that can only be resolved by the application of business logic constraints over each player in parallel, until the whole system settles or resolves into a new, consistent state." That is, the server, on receipt of a request, checks the resource state and any constraints that only it may be aware of, before bringing the system in line with the request's declaration of future state. ian
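The point about the server checking the resource state graph on each request, rather than remembering anything per client, reduces to a single stateless guard function. The state names and status codes below are assumptions for illustration.

```python
# Hypothetical sketch: the server validates each request against the
# *current resource state* alone, so adding a line item to a closed order
# fails without the server keeping any per-client memory.

def add_line_item(order, item):
    # Consult the resource state graph: line items may only be added
    # while the order is still open.
    if order["state"] != "adding line items":
        return 409  # Conflict: illegal transition for the current state
    order["line_items"].append(item)
    return 200

open_order = {"state": "adding line items", "line_items": []}
closed_order = {"state": "order completed", "line_items": ["book"]}
```

Any of ten interchangeable servers could run this check, since everything it needs is in the resource itself.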
Mark Nottingham wrote: > I'm confused. Your site says: > >> When our first specifications get released we will be submitting them >> to the IETF. This will allow our specifications to live within a >> trusted organization with well defined rules of engagement. > > The latter sentence implies that you're wanting to work within the > Internet Standards Process (RFC2026), which practically means a Working > Group, but the former implies that you're going to bake the specs and > throw them over the wall, which means Individual Submissions. > > Which is it? > > And, have you discussed this with anyone from the IETF? > What's your suggestion? Should we bake independently, then move to IETF or OASIS or something? Or start a working group now before things move any further? -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
I didn't know that site. Yes, it seems a good place to put such a "formal" dictionary/definitions alongside the best practices, rules of thumb and the like... At least I recognize one of the contributors' names... :-) (actually, to be sincere, I recognize two!) Well, I guess that what's missing then is someone to do the actual writing... Darrel Miller wrote: > 2009/10/4 António Mota <amsmota@...> > >> Bottom line, what I was trying to say is that the REST community lacks a "formal" place where to compile, and in the case of non-existing or ambiguous definitions, to define them in a "formal" way, that could serve as the "formal" authoritative site for REST definitions. That should be, of course, community-driven and at least not have the opposition of Roy Fielding, as most of the definitions will be copy&paste of content of his thesis. >> >> > > Try here: > > RESTWiki [1] > > If I look at the list of contributors [2], it seems to be about as > authoritative as you are going to get. > > Darrel > > [1] http://rest.blueoxen.net/cgi-bin/wiki.pl > [2] http://rest.blueoxen.net/cgi-bin/wiki.pl?RestWikiContributors >
2009/10/4 António Mota <amsmota@...> > Bottom line, what I was trying to say is that the REST community lacks a "formal" place where to compile, and in the case of non-existing or ambiguous definitions, to define them in a "formal" way, that could serve as the "formal" authoritative site for REST definitions. That should be, of course, community-driven and at least not have the opposition of Roy Fielding, as most of the definitions will be copy&paste of content of his thesis. > Try here: RESTWiki [1] If I look at the list of contributors [2], it seems to be about as authoritative as you are going to get. Darrel [1] http://rest.blueoxen.net/cgi-bin/wiki.pl [2] http://rest.blueoxen.net/cgi-bin/wiki.pl?RestWikiContributors
Certainly true of database transactions, but not coarse-grain coordination. If you look at what a transaction context is, all it is is a tx-id, coordinator reference/location, and transaction policy information, which, IMO, can easily be turned into a representation published via a link. Subbu Allamaraju wrote: > My understanding is that, in 2PC, the "transaction context" is transient > and represents the state of the transaction. In order to manage this > context, the coordinator associates it with writes done by each client. > In a sense, this context is a sum of client state. Further, my > understanding is that, the most efficient way to manage this transaction > context is by keeping the client-server protocol "connection oriented". > So, when you implement 2PC over a connection-less protocol, how is that > context managed, other than by treating it as resource state? > > Subbu > > On Oct 4, 2009, at 1:01 AM, Bill Burke wrote: > >> >> >> Subbu Allamaraju wrote: >>> On Oct 2, 2009, at 3:40 PM, Bill Burke wrote: >>>> Yeah, somebody will have to explain to me why (or if) the Reservation >>>> example I gave breaks the stateless constraint of REST. Where I think >>> Well - under the disguise of a "transaction", the server is >>> maintaining per-client state. >> >> There is no per-client state. A Reservation is interesting to a >> Travel Agent, a Customer, and to an Airline. A credit or debit is >> interesting to a Credit Card Account and to the Merchant (and to Visa >> and Mastercard). Again, "fulfilled" for a reservation and >> "posted/settled" for a credit or debit are valid non-session-based >> states. The fact that these states have a different representation (a >> tx-document) shouldn't matter. >> >> >> >>> Instead of answering your question directly, let me ask whether >>> you have examined the scalability characteristics of your proposed >>> design. >> >> Integration scenarios many times require coordination between many >> actors. 
It should be irrelevant if the client delegates this >> coordination to a different service. All a transaction manager does >> is guarantee that something happens, which is hard to implement many >> times on a per-application basis. This is why transaction managers >> exist. >> >> >>> It may be worthwhile to start from basics, apply each constraint one >>> by one, and whether your approach benefits. The discussion around the >>> kind of resources needed, media types, link rels, link headers vs >>> link elements in some XML format are implementation details. So far, >>> this post does not make a case of why an transactional application >>> should be built the way you propose. >> >> Well, the way is interesting because the actors being coordinated can >> negotiate with the transaction manager on the exact protocol. For >> example, the reservation resource is posted with a "transaction" link. >> The reservation service can GET that link with an Accept header of the >> preferred transaction formats it desires to interact with. If the >> reservation service does not know how to interact with the transaction >> representation, it can barf at reservation creation. >> >> >> >> -- >> Bill Burke >> JBoss, a division of Red Hat >> http://bill.burkecentral.com > -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
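Bill's two claims above — that a coarse-grained transaction context reduces to a tx-id, a coordinator reference, and policy information publishable as a representation behind a link, and that a participant can negotiate the transaction format via an Accept header — can be sketched together. The media type names and the XML shape are invented for the example, and the negotiation ignores q-values for brevity.

```python
# Hypothetical sketch of a transaction context as an ordinary
# representation, plus Accept-driven negotiation of its format.

from dataclasses import dataclass

@dataclass
class TransactionContext:
    tx_id: str
    coordinator: str  # URI of the coordinator resource
    policy: str       # e.g. timeout / policy information

    def to_xml(self):
        # Serialize the context so it can be served behind a
        # "transaction" link like any other resource.
        return (
            f'<transaction id="{self.tx_id}" policy="{self.policy}">'
            f'<link rel="coordinator" href="{self.coordinator}"/>'
            "</transaction>"
        )

SUPPORTED = ("application/tx-coordination+xml", "application/tx-simple+xml")

def negotiate(accept_header, supported=SUPPORTED):
    # Return the first acceptable media type in client preference order;
    # None means the service should refuse ("barf") at creation time.
    for media_type in (t.strip() for t in accept_header.split(",")):
        if media_type in supported:
            return media_type
    return None

ctx = TransactionContext("tx-99", "/coordinators/7", "timeout=30s")
chosen = negotiate("application/tx-simple+xml, text/plain")
```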
William Martinez Pomares wrote: > > > Hello Bill. > > I would not tell you you are breaking REST constrains, but I would like > to check on some notes. > > 1. The idea of the stateless server is an easy one. As a distributed > system, you may have one or more servers that should be able to respond > to any client. The request from that client may be the first one of an > application sequence, or the 10th, you have to serve it with no memory > of the other 9. In this way, you can add or remove servers with no > problem to clients. > > 2. Now, resources are in the cloud. That means we cannot assume a > resource is IN one particular server. Any server we contact will have to > use the same resource. This is very difficult part of implementation, > since you may need to implement duplication of information and fail > overs. But that is totally unknown by client. > > 3. On, particular, two phase commit. For this, I assume there are two > different data sources, and you can commit to each one separately, but > when a transaction involves individual transactions at each source, then > you use the famous two phase commit. Each source is a participant, right? > > 4. Ok, on to your proposal. One server dedicated to the management of > transactions, given we need to send to it, manually, all the transaction > steps and actions, may suffer some scalability problems. > On the other hand you have the client that needs to do all that > processing to commit the transaction. > > My feeling is that exposing the data entities as resources, and leaving > to the client all the commit processing, is exposing too much the > application detail. May not break REST, but adds unnecessary complexity. > > Now, the two phase commit assumes we have two sources, and you depict > them as the airline resource and the hotel room resource. It is them > implied that both are like databases, even more, separated database > engines. 
And, your client will have to drive the transaction management > to change data in both and then to commit. That is implicitly forcing > the concepts of a resource, but still it sounds like REST. > > So far, so good. Now, my question would be: should I need to do all that > to actually reserve a package using REST? Well, to imagine how would I > do it, I'd actually follow an online reservation workflow and see what > happens: Side discussion, but "workflow" is an interesting idea here. You see, IMO, all a Transaction Manager is, is a specific recurring workflow pattern. > a. I enter and search for a flight. System returns a list of flights and > I select one. At this time a draft reservation is created with my flight > in it. (Think a PUT of the empty reservation followed by a POST of the > flight). > b. Then the system offers me to add a hotel reservation, and from the > provided list I select one too. That is added to my draft reservation > (another POST). > c. Finally, I add my credit card information and post a confirmation > (Another POST). > > This last action is served by server number 5 of 10 currently serving. > That server 5 needs to complete the POST, and if unable, it will return > an error to the client. Well, that server uses the draft reservation > resource information to call a transaction manager to commit all > changes. If it fails, server 5 returns the error. That is totally opaque > to the client, which only confirms and receives a yes or no to that > request. Depending on that response, the client retries, updates the > selection of flights or hotel and confirms again, or even desists and > eliminates the reservation. Simple, ha. > Correct me if I'm wrong, but I think you're confusing the actors here. The actors being Customer and Travel Agent. It is the Travel Agent that has to coordinate between different services, not the Customer. 
So yes, the Customer will be isolated from transactional semantics because it is only dealing with one actor, the Travel Agent. The Travel Agent on the other hand has to juggle a set of decoupled systems. > The difference in this process is that the client is freed from knowing the > transaction is happening. Resources are just that, no databases nor > tables that need transactions, and the client doesn't have to choose the > use of single or two-phase commits. You can scale since you can change > the number of servers or transaction managers without touching the > client. AND, each client interaction leaves the system in a stable > state. Actually, this can be RESTful too! > > So, if we can hide the complexity of the transaction, why do we need to > expose that complexity to the client? I may do it if that brings some > benefit. My question will then be, which benefits will I find from one > implementation to the other one, or why one of them is not suitable for > some particular business case. > All I know is somebody is going to have to do coordination work in a system that has more than two actors: client and server. My question is how can these coordination requirements be improved by RESTful architectural principles. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
António Mota wrote: > > > 2009/10/4 Bill Burke <bburke@... <mailto:bburke%40redhat.com>> > > > Please point out exactly what is transient in the example? The > > reservation? No. It doesn't go away as its a record of the purchase. > > The transactional-state of the reservation? No, because really it is > > just a representation of the "fulfilled" or "unfulfilled" state. > > "fulfilled" or "unfulfilled" are states of the reservation, not of an > eventual "transaction resource", which should not be a resource because > it's not an "entity", a "subject"; you're only using it as a crutch for > the reservation resource. > > > You could say that the transaction resource itself is transient as it was > > only used by the client to fulfill a greater task: both an airline and > > hotel reservation. But, what it turns into is a record of the entire > > transaction with the Travel Agent. For example, what if a law > > enforcement agency was investigating a crime. They would follow the > links: > > > > ticket -> reservation > > reservation -> transaction > > transaction -> transaction-participants > > transaction-participants -> hotel reservation > > hotel-reservation -> room > > room -> arrest. > > > > What's wrong with > > ticket -> reservation > reservation -> hotel reservation > hotel-reservation -> room > room -> arrest. > > or more accurately > > ticket -> reservation > reservation -> [flight reservation, hotel reservation] > hotel-reservation -> room > room -> arrest. > Why should there be a coupling between airline and hotel reservations? Add a taxi, car, and restaurant reservation and you see what i mean. The TM is acting as a proxy (as in taking the place of, not caching) for the Travel Agent. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
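The flatter link graph argued for above could be carried directly in the reservation representation, with no separate transaction resource to traverse (element names and URIs are invented for illustration):

```xml
<reservation href="http://agent.example.com/reservations/42">
  <flight-reservation
      href="http://airline.example.com/reservations/990"/>
  <hotel-reservation
      href="http://hotel.example.com/reservations/117">
    <room href="http://hotel.example.com/rooms/707"/>
  </hotel-reservation>
</reservation>
```

A client (or investigator) following links sees only the reservation and its parts; whether a coordinator was ever involved in creating them stays behind the representation.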
On Oct 5, 2009, at 6:11 PM, Bill Burke wrote: > Certainly true of database transactions, but not coarse-grain > coordination. > > If you look at what a transaction context is, all it is is a tx-id, > coordinator reference/location, and transaction policy information, > which IMO, can easily be turned into a representation published via > a link. I certainly understand that. But what is the point of this exercise? Subbu
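The transaction context Bill describes — a tx-id, a coordinator reference/location, and policy information — could indeed be published as a linked representation along these lines (all element names, attributes, and URIs are invented for illustration; this sketches the idea, not any existing protocol):

```xml
<transaction id="txn-8841">
  <coordinator href="http://tm.example.com/coordinators/8841"/>
  <policy protocol="two-phase-commit" timeout-seconds="300"/>
  <participants
      href="http://tm.example.com/coordinators/8841/participants"/>
</transaction>
```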
Hi Bill, > All I know is somebody is going to have to do coordination work in a > system that has more than two actors: client and server. My question > is how can these coordination requirements be improved by RESTful > architectural principles. Why not have the client do it? Clients get a whole bunch of useful (coordination) metadata in the form of the status codes in responses from all the resources on all the services they interact with. Jim
I haven't had time to read every response here, but will still respond ... On Sat, Oct 3, 2009 at 1:59 AM, Kristian Nordal <kristian.nordal@...> wrote: > Thanks for the definitions. I'm still a bit confused though, so I'm > going to try to use an example: > > Let's say we have a client/ua that is filling out an order (order + > line items). In a traditional web application, the order would be in > the http session, and we would add/remove line items to that order, > and finally place the order. In that case I clearly see that we are > talking about application state that is placed on the server. The > server keeps track of it, and it's literally the state of the client/ > application. > > But if we were to store and address the order like any other resource, > would that change the nature of the state? It would simply be another > way of storing the same state, but nevertheless it would be > "resources" with the same properties induced by the stateless > constraint (visibility, reliability, and scalability) - given that they > were stored in a way that makes that possible. To me, this looks > like exactly the same kind of state (application state), simply stored/ > modeled differently. But in that case I don't see how or if it > violates the stateless constraint. Making that state addressable doesn't change anything. The stateless constraint is being violated here because the meaning of the message sent when any user hits a "Purchase" button on a page showing an order is "Purchase these line items", yet the message contains no information about the line items, only a pointer (URI or cookie, doesn't matter) to some state held on the server. See: http://www.markbaker.ca/blog/2007/11/users-and-self-description/ Mark.
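Mark's criterion would be satisfied by a purchase request that carries the line items themselves rather than a pointer to server-held state. A sketch, with a made-up host and payload format:

```http
POST /orders HTTP/1.1
Host: shop.example.com
Content-Type: application/xml

<order action="purchase">
  <line-item sku="12345" quantity="2"/>
  <line-item sku="67890" quantity="1"/>
</order>
```

The message now means "purchase these line items" on its face: any server can act on it without consulting session state established by earlier requests.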
Well, if you really want the benefits of the IETF process regarding consensus, dispute resolution, etc., as well as the authority behind it, you need standards-track, which means a WG. However, before that happens I think there needs to be more discussion about concrete deliverables; depending on what you want to do, the IETF might not be a good fit. The alternate approach would be for you to come up with some proposed specs, take community input on them as you see fit, but de-emphasise the "we're building THE REST stack" flavour of this (i.e., speak primarily from a jboss/redhat perspective, not try to build a new community), and then put them in as RFC Editor submissions for Informational RFCs (i.e., non-standards-track). Then, the market would decide if they were worth taking up, and perhaps later down the road a WG could use them as input documents for the next revision. I think this latter path is more realistic, especially given the controversy that's come up; it's very hard to build technology from scratch in a standards effort, and much more workable to just throw it at the wall and see if it sticks. If it does, then you can think about standardising it. Cheers, On 06/10/2009, at 1:24 AM, Bill Burke wrote: > Mark Nottingham wrote: >> I'm confused. Your site says: >>> When our first specifications get released we will be submitting >>> them to the IETF. This will allow our specifications to live >>> within a trusted organization with well defined rules of engagement. >> The latter sentence implies that you're wanting to work within the >> Internet Standards Process (RFC2026), which practically means a >> Working Group, but the former implies that you're going to bake the >> specs and throw them over the wall, which means Individual >> Submissions. >> Which is it? >> And, have you discussed this with anyone from the IETF? > > What's your suggestion? Should we bake independently, then move to > IETF or OASIS or something? 
Or start a working group now before > things move any further? > > > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com -- Mark Nottingham http://www.mnot.net/
Hello Bill.

Bill Burke wrote:
> William Martinez Pomares wrote:
>> Hello Bill.
>> I would not tell you you are breaking REST constraints, but I would like to check on some notes.
>> (...)
>> So far, so good. Now, my question would be: should I need to do all that to actually reserve a package using REST? Well, to imagine how I would do it, I'd actually follow an online reservation workflow and see what happens:
>
> Side discussion but, "workflow" is an interesting idea here. You see, IMO, all a Transaction Manager is, is a specific recurring workflow pattern.

I agree with you.

>> a. I enter and search for a flight. System returns a list of flights and I select one. At this time a draft reservation is created with my flight in it. (Think a PUT of the empty reservation followed by a POST of the flight.)
>> b. Then the system offers me to add a hotel reservation, and from the provided list I select one too. That is added to my draft reservation (another POST).
>> c. Finally, I add my credit card information and post a confirmation (another POST).
>>
>> This last action is served by server number 5 of 10 currently serving. That server 5 needs to complete the POST, and if unable, it will return an error to the client. Well, that server uses the draft reservation resource information to call a transaction manager to commit all changes. If it fails, server 5 returns the error. That is totally opaque to the client, which only confirms and receives a yes or no to that request. Depending on that response, the client retries, updates the selection of flights or hotel and confirms again, or even gives up and deletes the reservation. Simple, huh?
>
> Correct me if I'm wrong, but I think you're confusing the actors here. The actors being Customer and Travel Agent. It is the Travel Agent that has to coordinate between different services, not the Customer. So yes, the Customer will be isolated from transactional semantics because it is only dealing with one actor, the Travel Agent. 
> The Travel Agent on the other hand has to juggle a set of decoupled systems.

Actually, I'm not confusing them: I'm simplifying the solution to just one actor, the client, since the actual application IS the travel agent. See? Let's think in layers. First, the services; let's agree they are not data-like things, but as you say they are two different services offered by two different providers. In the middle, we have the travel agent, which knows how to talk to each individual provider to get the best from it. In the lower part, we have the client, which is agnostic of what is happening. He just wants to fly and sleep.

So, if you want one single client to follow the workflow presented above, we agree we need more than one interaction between the client and a travel agent. You can create a session, from client to agent, and then you have one channel there: two actors, talking to each other over several interactions. Under that assumption, you have client-server, but you will need one agent per client. Say you want it RESTful so you can scale. Then, you can have several agents and thousands of clients. In this case, each interaction a client does may be answered by a different agent and should be a transaction by itself! See? Thus, agents are servers, and say a reservation is the resource being created.

Let's go to the next level. Agents talk to providers. Each agent, at each interaction, talks to providers. Same thing: each provider may be a single server, or a RESTful thing in the cloud. The agent is then in charge of committing to both providers in a single request from its client. Here, the providers are not part of the agent's application; they are third party. The two-phase commit should be created somehow. If the providers are the same application as the agent, then there may be no need to complicate things further, and we can do the two-phase commits directly to databases, without building complex resources and multiple interactions. 
>> The difference in this process is that the client is freed from knowing the transaction is happening. Resources are just that, no databases nor tables that need transactions, and the client doesn't have to choose between single or two-phase commits. You can scale, since you can change the number of servers or transaction managers without touching the client. AND, each client interaction leaves the system in a stable state. Actually, this can be RESTful too!
>>
>> So, if we can hide the complexity of the transaction, why do we need to expose that complexity to the client? I may do it if that brings some benefit. My question will then be, which benefits will I find from one implementation to the other one, or why one of them is not suitable for some particular business case.
>
> All I know is somebody is going to have to do coordination work in a system that has more than two actors: client and server. My question is how can these coordination requirements be improved by RESTful architectural principles.

Answer is, I think, not much. The principles are not for coordination, but for simplicity of scaling, visibility, etc. Two-phase commit is for named actors, not moving targets, and REST is about anonymous actors playing against named resources. The first confusion is to think resources are data, and thus need transactions. Second, the trend is to expose too much detail, thus coupling and getting inflexible to changes. That much detail transfers business-logic knowledge requirements to clients, which may not be good, because any change will require adjusting all clients! (Unless it is RESTful, I know).

Still, your question is very valid: Is REST a good thing for systems that need coordination between actors? (Not necessarily transactions). I'll leave it open for other commenters!

Cheers!
William Martinez Pomares
Thanks Ian, you pointed me to a great dialog that has a transaction snippet I can use in the other transaction REST discussions :D William Martinez Pomares --- In rest-discuss@yahoogroups.com, "Ian" <iansrobinson@...> wrote: > > Hi William > > > > > One important thing we need not to forget is that, being in a distributed context, the "server" as the performer of some services against resources, may change between each client interaction. > > THAT is why the app state is held in the client, and no server has to keep any. > > > > Good point - thanks. A distributed application is as distributed as it needs be; it's not restricted to the interactions between a client and a single server. > > > > > Even more: the resources state graph may indicate restrictions between states, actions and trigger events. The idea of the client inferring the next step given the actual state of the resource, can be also ported to the server! That is, if a client requests an illegal action for the current state (adding a line to a closed order), the server may check first the resource state and send an error back to client. But it is clear that the server is not keeping the client state internally, it is just responding to the request in that particular moment, thus allowing us to scale nicely. > > > > This reminds me of a great piece in Duncan Cragg's REST Dialogues (http://duncan-cragg.org/blog/post/ws-are-you-sure-rest-dialogues/), where he says: "You [the client] start by declaring your intention that some state be true, which puts the system in tension - a tension that can only be resolved by the application of business logic constraints over each player in parallel, until the whole system settles or resolves into a new, consistent state." That is, the server, on receipt of a request, checks the resource state and any constraints that only it may be aware of, before bringing the system in line with the request's declaration of future state. > > ian >
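The server-side check described above — rejecting an action that is illegal for the resource's current state, without keeping any per-client memory — can be sketched in a few lines. This is a toy model (the `Order` class and status codes are illustrative, not from any framework); the point is that the decision uses only the resource's state, so any server holding the resource can make it:

```python
# Toy model: a server rejects a state-illegal request statelessly.
# The check consults only the resource's current state, never any
# per-client session, so any replica can serve the request.
class Order:
    def __init__(self):
        self.state = "open"
        self.lines = []

    def add_line(self, item):
        if self.state == "closed":
            return 409          # Conflict: illegal for the current state
        self.lines.append(item)
        return 201              # Created

    def close(self):
        self.state = "closed"
        return 200

order = Order()
print(order.add_line("flight UA100"))  # 201
order.close()
print(order.add_line("hotel room"))    # 409: order already closed
```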
Hey! --- In rest-discuss@yahoogroups.com, Bill Burke <bburke@...> wrote: > (...) > All I know is somebody is going to have to do coordination work in a > system that has more than two actors: client and server. My question > is how can these coordination requirements be improved by RESTful > architectural principles. > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com > Ian just pointed me to Duncan Cragg's dialogs, particularly this one: http://duncan-cragg.org/blog/post/ws-are-you-sure-rest-dialogues/ It has a nice dialog with an imaginary eBay architect, and he says: "DC: Hold on. Let's not mix up financial transactions and database transactions! We'll first talk about the need for atomic units of work. Then see how to support financial transaction business logic. Also, we're talking about units of work in public view, not hidden behind resources. Inside, it's up to a resource to ensure that its integrity and consistency are maintained through its interactions with others, and it's free to use transactions to achieve that internally if it wants, without exposing that to its clients. " And he continues: "DC: In a distributed system, you have to decide on what to give up out of Consistency, Availability and Partition Tolerance... " "DC: Essentially, the rule of thumb is, use ACID internally, use BASE externally. We're back to the inevitable inversion from internal imperative thinking to external declarative thinking. As an imperative programmer you're inclined to want to take your internal programming style out into the distributed world - to think single-thread, central control: 'begin - do work - commit'. But the importance of Availability and Partition Tolerance in distributed systems usually outweighs the importance of Consistency, leading the wise architect to a more relaxed, less imperative, more declarative approach. " You can continue reading, it is nicely explained and I think it matches fairly well this discussion. Cheers! 
And thanks Duncan! William Martinez Pomares.
2009/10/6 Mark Nottingham <mnot@...> > Well, if you really want the benefits of the IETF process regarding > consensus, dispute resolution, etc., as well as the authority behind > it, you need standards-track, which means a WG. > > However, before that happens I think there needs to be more discussion > about concrete deliverables; depending on what you want to do, the > IETF might not be a good fit. > > The alternate approach would be for you to come up with some proposed > specs, take community input on them as you see fit, but de-emphasise > the "we're building THE REST stack" flavour of this (i.e., speak > primarily from a jboss/redhat perspective, not try to build a new > community), and then put them in as RFC Editor submissions for > Informational RFCs (i.e., non-standards-track). > > Then, the market would decide if they were worth taking up, and > perhaps later down the road a WG could use them as input documents for > the next revision. > > I think this latter path is more realistic, especially given the > controversy that's come up; it's very hard to build technology from > scratch in a standards effort, and much more workable to just throw it > at the wall and see if it sticks. If it does, then you can think about > standardising it. > FWIW that's what we're doing for the parts of OCCI <http://www.occi-wg.org/>that we can't get from existing IETF standards (like Web Categories <http://tools.ietf.org/html/draft-johnston-http-category-header>), at least in part because it's not immediately obvious to outsiders as to how to engage with existing IETF groups. We're then tying together the results and leaving the low-level/normative specification to IETF documents where possible. Observation: virtually all Internet-based protocols that matter are IETF specifications. The quality (in terms of completeness/attention to detail and resulting interoperability) is very high - usually much higher than documents created by other "niche" SDOs. 
In any case I see no reason to move away from a process that has worked nicely for the past 4 decades <http://www.faqs.org/rfcs/rfc1.html>. That said, more can be done to open up the process, and as you said earlier this year <http://www.mnot.net/blog/2009/04/14/rev_canonical_bad>:

> Stepping back, I think this sort of thing is going to happen more often, not less. Microsoft and Netscape unilaterally extended the Web with MARQUEE and BLINK, and it was ugly, but the impact wasn’t nearly as bad as countless Web developers all extending the Web in their own way could be. The onus is clearly upon organisations like the W3C and IETF to make themselves as transparent and approachable to developers as possible, so that the latent experience and expertise in them can be drawn upon by these innovators, instead of being seen as either irrelevant or impediments.

Sam
> Making that state addressable doesn't change anything. > > The stateless constraint is being violated here because the meaning of > the message sent when any user hits a "Purchase" button on a page > showing an order is "Purchase these line items", yet the message > contains no information about the line items, only a pointer (URI or > cookie, doesn't matter) to some state held on the server. > > > > So what I am reading from this is that a 'restful' shopping cart would need to send the entire order over at the time of purchase. So for one thing, there is no way to progressively reserve items as you find them. (Or maybe you can reserve them, but then you can't say 'buy everything I reserved'.) Still, this doesn't make transaction-like behaviours impossible. It just necessitates that the transaction is specified in one go. However I fear this per-user resource thing is much more restrictive than that. If I have a user account on a service, is that a per-user resource? What if I want to add some preferences to the account? In the worst case I can see, should the entire registration be repeated at every request? Is restful design therefore completely at odds with having services that store user profiles and allow the users to store things in that profile? So for example an amazon S3-like service is impossible, as storing your files in the service is state, and deleting a directory doesn't give details about the files but only a pointer to the directory to be deleted. Alexandros
My main point in all of this is that HTML deserves serious consideration. HTML does stand for "HyperText Markup Language." While it does have a focus on UI aspects, it has quite a few hypertext and semantic markup mechanisms. IMHO, HTML deserves more consideration from the REST community. Regarding "rel" specifically, here's some of the relevant text from the URL you provided: 4. Link Relation Types > A link relation type identifies the semantics of a link. For example, a > link with the relation type "copyright" indicates that the resource > identified by the target IRI is a statement of the copyright terms applying > to the current context IRI. ... they only describe how the current context is related to another resource. I'd like to point out two things: A) "rel" stands for relation, and is meant for semantics related to relationships. It's called "Link Relation" for a reason. Yes, there are semantics involved, but they appear to be constrained to semantics of relationships B) <link> used anywhere in the "body" (as opposed to the html header) is contextual and relates one resource to another. My question still remains: Is link/rel the right mechanism to express actions? IMHO, it is worthwhile to at least consider alternatives to link/rel to express both global actions like 'search' and more "local" actions like 'cancel'/'payment'. -Solomon On Fri, Oct 2, 2009 at 4:09 PM, Subbu Allamaraju <subbu@...> wrote: > > On Oct 2, 2009, at 4:02 PM, Solomon Duskis wrote: > > • rel stands for relationship, right? "rel" would have to define >> how a universal ACTION like CustomersByZip relates to the current document, >> not what the action is >> >> > Not so. As [1] tries to define, "a link relation type identifies the > *semantics* of a link". A well-specified link relation will have to specify > everything about the link. For application specific relations (as opposed to > general purpose ones like "next" and "prev"), specifying semantics is even > more important. 
In the absence of such semantics, links become meaningless. > > Subbu > > [1] http://tools.ietf.org/html/draft-nottingham-http-link-header
On Oct 6, 2009, at 1:50 PM, Solomon Duskis wrote: > My question still remains: Is link/rel the right mechanism to > express actions? No. Hypermedia semantics tell the client (via the spec) what expectations it may have about the link target (contextual typing information). The actual effect of an HTTP invocation is to be defined in the spec. For example: AtomPub defines what links/hypermedia you must find in responses to deduce that some resource is an AtomPub collection[1]. Then AtomPub defines what is achieved by POSTing to the resource. Jan [1] It must appear in a service document as a <collection href=""> element
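For reference, the service document Jan mentions is defined in AtomPub (RFC 5023); the <collection> element is the hypermedia that tells a client where POSTing an entry is defined to work (the titles and URI below are example values):

```xml
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Main Site</atom:title>
    <collection href="http://example.org/blog/main">
      <atom:title>My Blog Entries</atom:title>
      <accept>application/atom+xml;type=entry</accept>
    </collection>
  </workspace>
</service>
```

The media type and the spec, not the URI structure, carry the semantics: a client knowing RFC 5023 can deduce from this document alone what a POST to the collection URI will do.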
HATEOAS is starting to appear everywhere.... Maybe it's just me, but I think that the term HATEOAS is detrimental in at least two ways. It has the negative connotation of having HATE inside it, and it also covers up what is one of the most important aspects of REST, 'hypermedia'. I therefore make the serious proposal to bury HATEOAS and actively use 'hypermedia constraint' instead before it's too late. Any supporters? Jan
Jan, I'm using HTTP Link: headers for advertising possible state transitions, so neither HATEOAS nor your proposed alternative is really appropriate in this case. What is it we're trying to say exactly? Sam On Tue, Oct 6, 2009 at 2:42 PM, Jan Algermissen <algermissen1971@...> wrote: > HATEOAS is starting to appear everywhere.... > > Maybe it's just me but I think that the term HATEOAS is detrimental in > at least two ways. It has the negative connotation of having HATE > inside it and also covers up what is one of the most important aspects of > REST, 'hypermedia'. > > I therefore make the serious proposal to bury HATEOAS and actively > use 'hypermedia constraint' instead before it's too late. > > Any supporters? > > Jan
'Hypermedia' does not only apply to the message body, but to the message as a whole. Location, Content-Location and Link are all part of 'hypermedia'; there is no distinction to be made. Jan On Oct 6, 2009, at 3:12 PM, Sam Johnston wrote: > Jan, > > I'm using HTTP Link: headers for advertising possible state > transitions so neither HATEOAS nor your proposed alternative are > really appropriate in this case. What is it we're trying to say > exactly? > > Sam > > On Tue, Oct 6, 2009 at 2:42 PM, Jan Algermissen <algermissen1971@... > > wrote: > HATEOAS is starting to appear everywhere.... > > Maybe it's just me but I think that the term HATEOAS is detrimental in > at least two ways. It has the negative connotation of having HATE > inside it and also covers up what is one of the most important aspects of > REST, 'hypermedia'. > > I therefore make the serious proposal to bury HATEOAS and actively > use 'hypermedia constraint' instead before it's too late. > > Any supporters? > > Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
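A state transition advertised in a Link: header rather than in the body might look like this (the relation URI and resource are invented for the example; the header syntax follows the Link header draft cited earlier in the thread):

```http
HTTP/1.1 200 OK
Content-Type: application/xml
Link: <http://shop.example.com/orders/42>; rel="self"
Link: <http://shop.example.com/orders/42/payment>;
      rel="http://shop.example.com/rel/pay"

<order id="42" status="unpaid"/>
```

By Jan's point, this is just as much hypermedia as a link in the body: the client still discovers the "pay" transition from the message rather than from out-of-band knowledge of URI structure.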
Hello Alexandros and Mark. You may want to read the other branch where we discuss the difference between server state and resource state. First, in the line items example, I can have one client and several servers, which will process the client requests in a round-robin sequence. That is, I cannot assure I will get the same server for each request. So, the client creates an order, which is then a resource, and it is given an ID for it (ID=URI). Then, it adds line items. At the end, it can get the order to verify, and the server in turn will send back the header and all lines so far. The client then requests to place the order, which the other server that responds does. As you can see, no server has a state, and I don't need to keep sending back and forth all the data. Now, per-user resources. No, you should not think of a private resource that is temporarily locked by a client. At creation, any other client that has the same ID for the order is able to read it. It is not good either to "reserve" each line in the order for a final commit. That does not make sense. What you are doing is creating a package, which will be reserved all at once, in one transaction, in the last request, by the server in turn. If it happens that one of the goodies is already sold out, then the server returns an error and that's it. Simple. William Martinez Pomares --- In rest-discuss@yahoogroups.com, Alexandros Marinos <al3xgr@...> wrote: > > > Making that state addressable doesn't change anything. > > > > The stateless constraint is being violated here because the meaning of > > the message sent when any user hits a "Purchase" button on a page > > showing an order is "Purchase these line items", yet the message > > contains no information about the line items, only a pointer (URI or > > cookie, doesn't matter) to some state held on the server. > > > > > > > > > > So what I am reading from this is that a 'restful' shopping cart would need > to send the entire order over at the time of purchase. 
So for one thing, > there is no way to progressively reserve items as you find them. (or maybe > you can reserve them but then you can't say 'buy everything I reserved'). > Still this doesnt make transaction-like behaviours impossible. It just > necessitates that the transaction is specified in one go. > > However I fear this per-user resource thing is much more restrictive than > that. If I have a user account on a service, is that a per-user resource? > What if I want to add some preferences to the account? In the worst case I > can see, should the entire registration be repeated at every request? Is > restful design therefore completely at odds with having services that store > user profiles and allow the users to store things in that profile? So for > example an amazon S3-like service is impossible as storing your files in the > service is state, and deleting a directory doesn't give details about the > files but only a pointer to the directory to be deleted. > > Alexandros >
Not many weeks ago we had this same discussion, try to search for it. The bottom line, everyone has their own opinion on what it should be, so the conclusion is, "leave that alone"... Jan Algermissen wrote: > > > HATEOAS is starting to appear everywhere.... > > Maybe it's just me but I think that the term HATEOAS is detrimental in > at least two ways. It has the negative connotation of having HATE > inside it and also covers what is one of the most important aspects of > REST 'hypermedia'. > > I therefore make the serious proposal to burry HATEOAS and actively > use 'hypermedia constraint' instead before its too late. > > Any supporters? > > Jan > >
Hello William, On Tue, Oct 6, 2009 at 2:21 PM, William Martinez Pomares < wmartinez@...> wrote: > So, the client creates an order, which is then a resource, and it is given > an ID for it (ID=URI). Then, it adds line items. At the end, it can get the > order to verify and the server in turn will send back the header and all > lines so far. The client then requests to place the order, which the other > server that responds does. > > As you can see, no server has a state, and I don't need to keep sending > back and forth all the data. > The issues I see with this are the following: If the client can GET the order resource, then any server has access to the state of the resource at any time and can return it to the client. If any server can access the state of the order at any time, then the server that receives the request to purchase all items will be able to as well. Which means there is a shared database backend. Therefore, adding an extra roundtrip doesn't get you any additional flexibility. What it costs you, however, is a possible consistency problem. If there are multiple parties that can add items to the order, and one commits this order by purchasing, and a new order item is added in the time it takes for the 'committer' to GET and POST the order, then the last item is lost. Additionally, since the server presumably does not know which order resource this purchase is related to, it can't change its status to 'purchased' or whatever. So items can keep being added in perpetuity. Clearly we need something better than this. Alexandros
On Tue, Oct 6, 2009 at 1:29 AM, Alexandros Marinos <al3xgr@...> wrote: > >> Making that state addressable doesn't change anything. >> >> The stateless constraint is being violated here because the meaning of >> the message sent when any user hits a "Purchase" button on a page >> showing an order is "Purchase these line items", yet the message >> contains no information about the line items, only a pointer (URI or >> cookie, doesn't matter) to some state held on the server. >> >> > > So what I am reading from this is that a 'restful' shopping cart would need to send the entire order over at the time of purchase. So for one thing, there is no way to progressively reserve items as you find them (or maybe you can reserve them, but then you can't say 'buy everything I reserved'). Still, this doesn't make transaction-like behaviours impossible. It just necessitates that the transaction is specified in one go. Well, the state of the order could be maintained on the server but then transferred to the client for a "Verify" step, and then submitted back to the server as the final step so that the server only uses the state in the message rather than the state it had been maintaining. > > However, I fear this per-user resource thing is much more restrictive than that. If I have a user account on a service, is that a per-user resource? What if I want to add some preferences to the account? In the worst case I can see, should the entire registration be repeated at every request? Is restful design therefore completely at odds with having services that store user profiles and allow the users to store things in that profile? So for example an Amazon S3-like service is impossible, as storing your files in the service is state. It depends on what that state is and how it's used in messages. See above. > and deleting a directory doesn't give details about the files but only a pointer to the directory to be deleted. Depends what the intent of the message is. 
If it's to delete the directory independent of what's in it, then DELETE on that directory captures it. If it's only to delete some files in the directory but not others, then you'll need multiple messages. If it's to delete the directory but only if the directory is in some known state, then DELETE + If-Match would be required. Mark.
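[Editor's note] Mark's three cases can be sketched as server-side dispatch logic for DELETE. The store layout and function name below are invented for illustration, but the status codes (404, 412, 204) follow HTTP's defined semantics:

```python
def handle_delete(store, uri, if_match=None):
    """Return the HTTP status code for a DELETE request on `uri`.

    `store` maps URIs to resource state, including a current ETag.
    `if_match` is the value of the If-Match header, if the client sent one.
    """
    if uri not in store:
        return 404                      # nothing there to delete
    current_etag = store[uri]["etag"]
    if if_match is not None and if_match != current_etag:
        return 412                      # precondition failed: state changed
    del store[uri]
    return 204                          # deleted, no content to return

store = {"/dir/report": {"etag": '"v7"', "body": "..."}}
print(handle_delete(store, "/dir/report", if_match='"v3"'))  # 412, untouched
print(handle_delete(store, "/dir/report", if_match='"v7"'))  # 204, deleted
print(handle_delete(store, "/dir/report"))                   # 404, gone
```

The unconditional DELETE ("delete it regardless of contents") is the same call with `if_match=None`; the "only if in a known state" case is the If-Match branch.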
António Mota wrote: > Not many weeks ago we had this same discussion, try to search for it. > The bottom line, everyone has their own opinion on what it should be, > so the conclusion is, "leave that alone"... > Roy has never written "HATEOAS" anywhere; his terminology is "hypermedia constraint" and I agree we should say that, not HATEOAS. There is certainly no consensus that says "leave that alone," as we discuss this all the time. Nobody knows where "HATEOAS" came from or who started it or anything, so I doubt anyone will be offended if we drop the silly acronym. -Eric > > Jan Algermissen wrote: > > > > > > HATEOAS is starting to appear everywhere.... > > > > Maybe it's just me but I think that the term HATEOAS is detrimental > > in at least two ways. It has the negative connotation of having HATE > > inside it and also covers what is one of the most important aspects > > of REST 'hypermedia'. > > > > I therefore make the serious proposal to bury HATEOAS and actively > > use 'hypermedia constraint' instead before it's too late. > > > > Any supporters? > > > > Jan > > > > > >
Google for "hypermedia constraint" and for "hypermedia as the engine of application state", and for "hateoas" and compare; maybe it's too late to change... Probably HATEOAS comes as an abbreviation of "hypermedia as the engine of application state", as taken from, wadya know, "the" dissertation... "REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state." The "hypermedia as the engine of application state" constraint is much more than a "hypermedia constraint" (unless you say "hypermedia constraint" as an abbreviation of "hypermedia as the engine of application state constraint"), as the latter suggests that the mere fact that a representation has hyperlinks embedded makes it RESTful. But it's not, unless that hypermedia "drives" the state changes, if it's the "engine" of such state changes. Or in Roy Fielding's words, "The problem is that just being connected is not enough (...) hypertext as the engine of hypermedia state is also about late binding of application alternatives that guide the client through whatever it is that we are trying to provide as a service." So the constraint is well described as "hypermedia as the engine of application state". HATEOAS is just for short. Like how the country located between Mexico and Canada is called the United States of America. Now you could dislike calling it USA and call it just America, but this last term is ambiguous: it can be a country or a continent or a musical band... Even if you refer to it as "America country" (like in "hypermedia constraint"), will it be America the country or any other country in America the continent? Or even a song called "country" performed by America? "Linkability", "Connectedness", "hypermedia constraint", "links as state transitions", "Hypermedia Describes Protocols", "Hypermedia constrains protocols"... 
Well, "a rose by any other name would smell as sweet"... Here's the other thread for those who don't remember it http://tech.groups.yahoo.com/group/rest-discuss/message/12730 (Just to clarify, I didn't say that there was a consensus to "leave that alone" as you wrongly imply in your post. So, to be clear, what I'm saying is, unless there is a consensus, we should leave it alone) Eric J. Bowman wrote: > António Mota wrote: > > >> Not many weeks ago we had this same discussion, try to search for it. >> The bottom line, everyone has their own opinion on what it should be, >> so the conclusion is, "leave that alone"... >> >> > > Roy has never written "HATEOAS" anywhere; his terminology is > "hypermedia constraint" and I agree we should say that, not HATEOAS. > There is certainly no consensus that says "leave that alone," as we > discuss this all the time. Nobody knows where "HATEOAS" came from or > who started it or anything, so I doubt anyone will be offended if we > drop the silly acronym. > > -Eric > > >> Jan Algermissen wrote: >> >>> >>> >>> HATEOAS is starting to appear everywhere.... >>> >>> Maybe it's just me but I think that the term HATEOAS is detrimental >>> in at least two ways. It has the negative connotation of having HATE >>> inside it and also covers what is one of the most important aspects >>> of REST 'hypermedia'. >>> >>> I therefore make the serious proposal to bury HATEOAS and actively >>> use 'hypermedia constraint' instead before it's too late. >>> >>> Any supporters? >>> >>> Jan >>> >>> >>> >>
Just for the sake of curiosity, and since I'm still at work with nothing to do (I hope my boss is not on this list), here's the first occurrence of HATEOAS that I could find on the net... ...curiously in the form HatEoAS ... and after a title "Lie: REST Doesn't Need WSDL" http://qconsf.com/sf2007/file?path=/QConSF2007/slides/public/SanjivaWeerawarana_MythsFactsLies.pdf António Mota wrote: > Google for "hypermedia constraint" and for "hypermedia as the engine > of application state", and for "hateoas" and compare; maybe it's too > late to change... > [...]
2009/10/6 Eric J. Bowman <eric@...> > Roy has never written "HATEOAS" anywhere; his terminology is > "hypermedia constraint" and I agree we should say that, not HATEOAS. > There is certainly no consensus that says "leave that alone," as we > discuss this all the time. Nobody knows where "HATEOAS" came from or > who started it or anything, so I doubt anyone will be offended if we > drop the silly acronym. Actually, I think Roy prefers "hypertext constraint"<http://tech.groups.yahoo.com/group/rest-discuss/message/9680>. Both "hypertext constraint" and "hypermedia constraint" get roughly comparable google hits, so it probably doesn't matter which we choose. -- Nick
2009/10/6 António Mota <amsmota@...> > Just for the sake of curiosity, and since I'm still at work with nothing > to do (I hope my boss is not on this list), here's the first occurrence > of HATEOAS that I could find in the net... > > ...curiously in the form HatEoAS > > ... and after a title "Lie: REST Doesn't Need WSDL" > > http://qconsf.com/sf2007/file?path=/QConSF2007/slides/public/SanjivaWeerawarana_MythsFactsLies.pdf A much earlier use of the term (08/06/2006) is right here in our little group: http://tech.groups.yahoo.com/group/rest-discuss/message/6390 . -- Nick
António Mota wrote: > > Probably HATEOAS comes as abbreviation of "hypermedia as the engine > of application state", as taken from, wadya know, "the" > dissertation... > That is what HATEOAS *stands for* not where it *comes from*. Believe it or not, Antonio, I'm not a complete and total moron who can't for the life of me figure out what an acronym is. But thanks for trolling. -Eric
Hi guys, While creating a RESTful web service, how would one go about documenting it? HATEOAS seems to be key, but how do you document the relationship between the linked resources? Most seem to use URI templates, but depending on them doesn't seem to be RESTful. WSDL/WADL are ok as specifications, but what do you guys think? Jan Vincent Liwanag jvliwanag@...
Documenting for whom? Users? Automated clients? It's not relevant. Representations are self-describing. When was the last time you had to explain to somebody how a browser works? Documenting your implementation? Whatever floats your boat. Nicolai
Hey Jan, > WSDL/WADL > are ok as specifications, but what do you guys think? I think they are not OK, not at all. WSDL is the poorest excuse for an IDL that I have ever encountered, but WADL is equally unnecessary. You're on the right lines when you say hypermedia will help - media types and link relations are the other things you need. Jim
+1 Both are "description" languages but help neither developers nor client apps completely. For RESTful apps, documenting link relations, URIs (in cases where hypermedia does not help), URI templates or equivalent, media types, formats, methods supported etc in prose works best. Subbu On Oct 7, 2009, at 8:47 AM, Jim Webber wrote: > Hey Jan, > >> WSDL/WADL >> are ok as specifications, but what do you guys think? > > I think they are not OK, not at all. WSDL is the poorest excuse for an > IDL that I have ever encountered, but WADL is equally unnecessary. > > You're on the right lines when you say hypermedia will help - media > types and link relations are the other things you need. > > Jim > > > ------------------------------------ > > Yahoo! Groups Links > > >
Subbu's approach is pretty much what we chose to use for documenting the Sun Cloud API[1]. Given that you have the URI to a particular resource (which was discovered by being retrieved from some previous representation, and the representations include URIs to "interesting" related resources -- no URI templates here), the documentation describes "what kind of requests can I submit to this URI"? Craig McClanahan [1] http://kenai.com/projects/suncloudapis/pages/Home On Tue, Oct 6, 2009 at 11:54 PM, Subbu Allamaraju <subbu@...> wrote: > > > +1 > > Both are "description" languages but help neither developers nor > client apps completely. > > For RESTful apps, documenting link relations, URIs (in cases where > hypermedia does not help), URI templates or equivalent, media types, > formats, methods supported etc in prose works best. > > > Subbu > > > On Oct 7, 2009, at 8:47 AM, Jim Webber wrote: > > > Hey Jan, > > > >> WSDL/WADL > >> are ok as specifications, but what do you guys think? > > > > I think they are not OK, not at all. WSDL is the poorest excuse for an > > IDL that I have ever encountered, but WADL is equally unnecessary. > > > > You're on the right lines when you say hypermedia will help - media > > types and link relations are the other things you need. > > > > Jim > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > >
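[Editor's note] The style Craig describes, one well-known entry URI with every other URI discovered from representations, can be sketched as follows. The representations, relation names, and helper functions here are invented for illustration (this is not the Sun Cloud API itself):

```python
# A stand-in for the server: each URI maps to a representation whose
# links carry relation names the client understands.
REPRESENTATIONS = {
    "/cloud": {"links": [{"rel": "vdcs", "href": "/cloud/vdcs"}]},
    "/cloud/vdcs": {"links": [{"rel": "item", "href": "/cloud/vdcs/42"}]},
    "/cloud/vdcs/42": {"name": "my-datacenter", "links": []},
}

def get(uri):
    """Stands in for an HTTP GET returning the parsed representation."""
    return REPRESENTATIONS[uri]

def follow(representation, rel):
    """Pick the href of the first link with the given relation name."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["href"]
    raise KeyError(rel)

entry = get("/cloud")                  # the one URI the client is configured with
vdcs = get(follow(entry, "vdcs"))      # every other URI is discovered, not built
vdc = get(follow(vdcs, "item"))
print(vdc["name"])                     # -> my-datacenter
```

The documentation then only needs to cover the entry URI, the relation names, and what requests each linked resource accepts; the URI structure itself stays a server implementation detail.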
Eric J. Bowman wrote: > António Mota wrote: > > >> Probably HATEOAS comes as abbreviation of "hypermedia as the engine >> of application state", as taken from, wadya know, "the" >> dissertation... >> >> > > That is what HATEOAS *stands for* not where it *comes from*. Believe > it or not, Antonio, I'm not a complete and total moron who can't for > the life of me figure out what an acronym is. But thanks for trolling. > > -Eric > Ok, I see, it stands for "hypermedia as the engine of application state" but it doesn't come from "hypermedia as the engine of application state"... subtleties of the English language I guess... Nevertheless, if I read the two following paragraphs "REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state." "REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia." I don't think they mean the same. Now if your problem is with the acronym itself, and you think "hypermedia constraint" is clearer or nicer than HATEOAS as a short way of referring to the concept of "hypermedia as the engine of application state", I again say that the first is more ambiguous and prone to confusion than the second, especially if you're talking to a newbie or someone who's trying to learn REST as I was not many months ago (and still am of course). Just say "hypermedia constraint" and I guarantee you that person will soon forget about it, or at most he'll think "hypermedia constraint = must have links". Say to him HATEOAS and I assure you that he will either give up on learning REST or he'll rush to find out what it really is and he'll never forget it. 
Of course, if you're talking among Restafarians they will understand the same whether you say HATEOAS or "hypermedia constraint"; then again, they even understand "the 4th constraint"... Just to finish, let me assure you I don't think you're a complete and total moron, and what you call trolling is just a way of writing to make a point. It's called "irony" and it's a figure of speech in literature. But of course you knew this... Nevertheless, I'm sorry if with that I hurt your feelings.
Hi, It seems the ClientInfo.getSubject() method is gone. I was using that to populate the Subject from within my Verifier with the user. What has happened to it, and has it been replaced with something else that I can use? Preferably something returning Subject... /Rickard
Wrong list??? Jan On Oct 7, 2009, at 12:00 PM, Rickard Öberg wrote: > Hi, > > It seems the ClientInfo.getSubject() method is gone. I was using > that to > populate the Subject from within my Verifier with the user. What has > happened to it, and has it been replaced with something else that I > can > use? Preferably something returning Subject... > > /Rickard > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@acm.org Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On 2009-10-07 18.11, Jan Algermissen wrote: > Wrong list??? Yup, sorry about that one. My address book autocomplete was a little too fast.. sorry, Rickard
Hey - No problem Rickard, let's just hope this hasn't got in the way of the super-important discussion about which REST acronyms and buzz-phrases we have to use! ;) - Mike Rickard Öberg wrote: > On 2009-10-07 18.11, Jan Algermissen wrote: > >> Wrong list??? >> > > Yup, sorry about that one. My address book autocomplete was a little too > fast.. > > sorry, Rickard >
What's nice about HATEOAS is that you can inject commentary, system status, whatever you like if you're using HTML. This additional information helps those in meatspace[1] understand and interact with your resources. I've used javascript to provide interactivity when a representation is sent to the client. The sweet part is that the agents are processing the link/rel tags and never care about the javascript or surrounding html. Microformats help here as well. Since these were specific to the application, I didn't have to bother with defining new media types. I've been asked/challenged on what the WSDL equivalent for RESTful architecture is, and my answer included the above plus the need for the agent to query. Querying the document for link/rel tags allows for arbitrary content to be included, which I believe is a good thing and promotes the pattern above. WADL/WSDL is much too brittle for my taste because these techniques rely on ordinal positions or link templates to navigate the hypermedia. I've blogged about the need to query here [2]. -Noah [1] http://en.wikipedia.org/wiki/Meatspace [2] http://noahcampbell.info/?p=203 On Wed, Oct 7, 2009 at 12:12 AM, Craig McClanahan <craigmcc@...> wrote: > > > Subbu's approach is pretty much what we chose to use for documenting the > Sun Cloud API[1]. Given that you have the URI to a particular resource > (which was discovered by being retrieved from some previous representation, > and the representations include URIs to "interesting" related resources -- > no URI templates here), the documentation describes "what kind of requests > can I submit to this URI"? > > Craig McClanahan > > [1] http://kenai.com/projects/suncloudapis/pages/Home > > On Tue, Oct 6, 2009 at 11:54 PM, Subbu Allamaraju <subbu@...> wrote: > >> >> >> +1 >> >> Both are "description" languages but help neither developers nor >> client apps completely. 
>> [...]
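[Editor's note] Noah's query-for-link-relations approach can be sketched with the standard-library HTML parser; the HTML snippet and relation names below are invented for illustration. An agent that selects links by `rel` is untouched by any surrounding commentary, status text, or scripts added for humans:

```python
from html.parser import HTMLParser

class RelLinkCollector(HTMLParser):
    """Collect hrefs of <a>/<link> elements, keyed by their rel attribute."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            d = dict(attrs)
            if "rel" in d and "href" in d:
                self.links.setdefault(d["rel"], []).append(d["href"])

html = """
<html><body>
  <p>System status: all good.</p>            <!-- commentary for humans -->
  <a rel="payment" href="/orders/7/payment">Pay now</a>
  <a rel="cancel" href="/orders/7/cancel">Cancel</a>
  <script>/* agents never look here */</script>
</body></html>
"""

collector = RelLinkCollector()
collector.feed(html)
print(collector.links["payment"])   # -> ['/orders/7/payment']
```

Because the agent queries by relation rather than by position or URI template, the representation can gain or lose human-oriented content without breaking clients.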
Hi Alexandros. --- In rest-discuss@yahoogroups.com, Alexandros Marinos <al3xgr@...> wrote: > > Hello William, > > On Tue, Oct 6, 2009 at 2:21 PM, William Martinez Pomares < > wmartinez@... wrote: > > > So, the client creates an order, which is then a resource, and it is given > > an ID for it (ID=URI). Then, it adds line items. At the end, it can get the > > order to verify and the server in turn will send back the header and all > > lines so far. The client then requests to place the order, which the other > > server that responds does. > > > > > > As you can see, no server has a state, and I don't need to keep sending > > back and forth all the data. > > > > The issues I see with this are the following: If the client can GET the > order resource, then any server has access to the state of the resource at > any time and can return it to the client. That is correct. > If any server can access the state of the order at any time, then the server that receives the request to > purchase all items will be able to as well. That is the idea. >Which means there is a shared database backend. This depends on implementation, but let's say it is. > Therefore, adding an extra roundtrip doesn't get you any additional flexibility. Ok, here we disagree: The extra roundtrip you mention is the get of the complete order, to actually request to confirm that complete order. If you use any locking mechanism, I will lose flexibility, so not adding it will make me free! > What it costs you however is a possible consistency > problem. If there are multiple parties that can add items to the order, and > one commits this order by purchasing, if there is a new order item added in > the time it takes for the 'committer' to GET and POST the order, then the > last item is lost. Not really! The get and then post is the way I can tell the server: this is the state of the resource I'm committing, please do so. The server will then verify if there are no less, nor more lines and commit. 
If it finds any difference, then it will throw an error. Now, that problem you mention may happen, but it will be unlikely. So, adding the most complex mechanism to prevent another party adding a line in that split-second window costs me more than just checking and then throwing an error, which may not happen at all. See? That is being optimistic due to the business case. Please check the principles eBay uses to gain flexibility in their massive site: no transactions at all, just faith and smart failure recovery. > Additionally, since the server presumably does not know which order resource this purchase is related to, it can't change its > status to 'purchased' or whatever. So items can keep being added in > perpetuity. Clearly we need something better than this. > Not sure I understand this. The last server of course does know the ID of the order, since it is sent by the client! That last server has a great deal of work to do: 1. Lock the order 2. Check all the lines are still in inventory. 3. Subtract the items 4. Change order status and commit. All that in one transaction. In this case, the ACID transaction is done internally, in one hit, from the last request a server receives, not step by step from the client. > Alexandros > Cheers! William Martinez Pomares (Sorry if I post this twice)
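[Editor's note] William's check-then-commit step (the client POSTs back the full order it saw; the server commits only if that matches its current state and inventory) might look like the toy sketch below. All names are invented for illustration:

```python
class StaleOrder(Exception):
    """The submitted order differs from the server's current state."""

def commit_order(stored_lines, submitted_lines, inventory):
    # "no fewer, nor more lines": submitted state must match exactly
    if sorted(stored_lines) != sorted(submitted_lines):
        raise StaleOrder("order changed between GET and commit")
    # check all lines are still in inventory before touching anything
    for item in stored_lines:
        if inventory.get(item, 0) < 1:
            raise StaleOrder(f"{item} no longer in stock")
    # subtract the items and commit
    for item in stored_lines:
        inventory[item] -= 1
    return "purchased"              # new order status

inventory = {"book": 2, "pen": 1}
print(commit_order(["book", "pen"], ["pen", "book"], inventory))  # purchased
print(inventory)  # {'book': 1, 'pen': 0}
```

This is the optimistic approach in the post: no locking during the client's shopping session, just a verification at commit time that fails loudly on the rare conflict.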
--- In rest-discuss@yahoogroups.com, Jan Vincent <jvliwanag@...> wrote: > > Hi guys, > > While creating a RESTful web service, how would one go about > documenting it? HATEOAS seems to be key, but how do you document the > relationship between the resources linked? Most seem to use URI > templates, but depending on them doesn't seem to be RESTful. WSDL/WADL > are ok as specifications, but what do you guys think? > > Jan Vincent Liwanag > jvliwanag@... > Ok. Here is my take. Reading all the other branch, you will see they refer to a RESTful app, not to a service. Second, you can see almost no one likes WSDL. Well, here is the other side of the coin. First, we will need to define what a RESTful web service is. Careful with that. There has been much confusion, and people think a RESTful web service is one that does not require SOAP and just needs to be called using HTTP. That is a wrong view. A service is a business functionality exposed. You can model that service as a resource. Being a process by itself, it can be implemented replicated across different servers, or with indirection (the servers you send requests to will forward them to the one that has the service implemented with several threads). That will allow scalability and other REST goals. Now, being RESTful means you are constrained to use the HTTP operations (unless you don't want to use HTTP; you may want to use SMTP, but that is another story). Simply put, you have one URL that is the origin. A GET to that URI will bring a hypertext document that you use to continue with the next step in the process. Most of the people that don't like WSDL nor SOAP will use HTML. HTML was created for another purpose, but can be used here. Still, WSDL is an IANA-defined content-type, and, being XML, it also fits as a hypertext document. So, to me, it is perfectly fine to use! 
WSDL will tell you what the endpoints are (URLs to send messages to), what the format of the messages is, the interactions, even to the point of describing the HTTP verbs used (Version 2). Now, the bad way of using this is actually the way everybody does: reading the WSDL out of band and creating a static client that performs RPC. Wrong, that breaks several REST rules. Your client has to be able to read WSDL on the fly, and perform the required operation, which will be a GET or a POST of a message to a service. Should that be SOAP? Not really, if you implement the service correctly. If you use a point-and-click tool, you will end up with all those REST rules broken. Now, your question: Is WSDL a way to document? NO, it is not. WSDL is a description language for services. That means you read it to process, not to document what the process does. It is part of the implementation, not a document out of band. Now, you should not need documentation for REST execution, but you will for the business case and construction. Let me explain: If you are in a banking case, and you need to standardize the account IDs, then you will need to create a glossary with types (data dictionary?) where all the semantics of the application are described. Not the code interactions and URIs, but explaining what an Account ID is, or how to handle an overdraft. See? This is business level. Why is that needed? Because then you are talking the same language, and when the server asks for an accountID field, all clients know what it refers to on the fly! So you can automate most everything, and your effort is in dynamic construction and not static. Hope this helps. William Martinez Pomares
Noah Campbell wrote: > > > What nice about HATEOAS is that you can inject commentary, system > status, whatever you like if you're using HTML. This additional > information helps those in meatspace[1] understand and interact with > your resources. I've used javascript to provide interactivity when a > representation is sent to the client. The sweet part is that the agents > are processing the link/rel tags and never care about the javascript or > surrounding html. Microformats help here as well. Since these were > specific to the application, I didn't have to bother with defining new > media types. > > > I've been asked/challenged on what is the WSDL equivalent for RESTful > architecture is and my answer included the above plus the need for the > agent to query. Querying the document for links/rel tags allows for > arbitrary content to be included which I believe is a good thing and > promoted the pattern above. WADL/WSDL is much too brittle for my taste > because these techniques rely on ordinal positions or link templates to > navigate the hypermedia. I've blogged about the need to query here [2]. > The only thing I see something like WADL is useful for is testing via tools/ide. Then again, you might be able to get there just as easily from what you describe above. Bill -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Sorry for responding to this rather old post now: On Sep 20, 2009, at 6:45 AM, Mark Baker wrote: > On Sat, Sep 19, 2009 at 4:23 PM, Aristotle Pagaltzis > <pagaltzis@...> wrote: >>> As a side question, is it ok to use DELETE when it 'clears >>> a list'. Say, I want to clear my shopping cart, so I delete it. >>> However, when I GET it later on, it simply says that it's empty >>> rather than it doesn't exist. >> >> No. Don’t overload the meaning of verbs to use them for >> “something similar”. DELETE has narrow semamtics; if your use >> does not preserve them, then you should not be using DELETE. > > He's not overloading the meaning; DELETE requests the resource's > representations be removed, and the server does that so is able to > respond 2xx to it. Nothing says the server can't then immediately - > or at any time of its choosing - make new representations of that > resource available. > What's the difference between a PUT and DELETE then in terms of visibility? Can any intermediary reasonably do something different for the two? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ > Mark. > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Sep 27, 2009, at 7:46 PM, Felipe Gaúcho wrote: > GET url to the confirmation form... > the page containing the form has a POST button to validate the > registration.... > > it adds even more security to the whole process, while also add more > usability complications to the users.. this trade off is complicated > because it seems I am penalizing the users to preserve hateoas :) Why is that a complication? I see confirmation emails of this kind all the time, and found them all usable. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Oct 7, 2009, at 9:14 PM, Stefan Tilkov wrote: > Sorry for responding to this rather old post now: > > On Sep 20, 2009, at 6:45 AM, Mark Baker wrote: > >> On Sat, Sep 19, 2009 at 4:23 PM, Aristotle Pagaltzis >> <pagaltzis@...> wrote: >>>> As a side question, is it ok to use DELETE when it 'clears >>>> a list'. Say, I want to clear my shopping cart, so I delete it. >>>> However, when I GET it later on, it simply says that it's empty >>>> rather than it doesn't exist. >>> >>> No. Don’t overload the meaning of verbs to use them for >>> “something similar”. DELETE has narrow semantics; if your use >>> does not preserve them, then you should not be using DELETE. >> >> He's not overloading the meaning; DELETE requests the resource's >> representations be removed, and the server does that so is able to >> respond 2xx to it. Nothing says the server can't then immediately - >> or at any time of its choosing - make new representations of that >> resource available. >> > > What's the difference between a PUT and DELETE then in terms of > visibility? Can any intermediary reasonably do something different for > the two? What has this got to do with visibility? The server is free to recycle its URIs (and expose itself to security and/or consistency issues). Subbu
On Oct 7, 2009, at 10:22 PM, Subbu Allamaraju wrote: > > On Oct 7, 2009, at 9:14 PM, Stefan Tilkov wrote: > >> Sorry for responding to this rather old post now: >> >> On Sep 20, 2009, at 6:45 AM, Mark Baker wrote: >> >>> On Sat, Sep 19, 2009 at 4:23 PM, Aristotle Pagaltzis >>> <pagaltzis@...> wrote: >>>>> As a side question, is it ok to use DELETE when it 'clears >>>>> a list'. Say, I want to clear my shopping cart, so I delete it. >>>>> However, when I GET it later on, it simply says that it's empty >>>>> rather than it doesn't exist. >>>> >>>> No. Don’t overload the meaning of verbs to use them for >>>> “something similar”. DELETE has narrow semantics; if your use >>>> does not preserve them, then you should not be using DELETE. >>> >>> He's not overloading the meaning; DELETE requests the resource's >>> representations be removed, and the server does that so is able to >>> respond 2xx to it. Nothing says the server can't then immediately - >>> or at any time of its choosing - make new representations of that >>> resource available. >>> >> >> What's the difference between a PUT and DELETE then in terms of >> visibility? Can any intermediary reasonably do something different >> for >> the two? > > What has this got to do with visibility? The server is free to > recycle its URIs (and expose itself to security and/or consistency > issues). After a DELETE, I expect to get a 410 or 404 - I assume it deletes a resource (or at least all of its representations). If my intent is to change it, I use a PUT. You are right that an intermediary can't assume anything about a URI that has been DELETEd, though. Which makes me wonder why DELETE is there in the first place. Historically, it would be interesting to know why some verbs made it and others didn't. Stefan
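One concrete answer to the visibility question: from a caching intermediary's standpoint, PUT and DELETE do look largely alike - under HTTP's caching rules, any unsafe method passing through invalidates the stored response for that URI, regardless of which method it was. A toy sketch of that behavior (not a real proxy; the origin-callback shape is invented for illustration):

```python
class TinyCache:
    """Toy HTTP intermediary cache: serves GETs from its store and
    invalidates the stored entry whenever an unsafe method passes through.
    Note it treats PUT and DELETE identically - it cannot assume anything
    about the URI's state afterwards, so it just evicts and forwards."""

    UNSAFE = {"PUT", "DELETE", "POST"}

    def __init__(self):
        self.store = {}  # uri -> cached response body

    def handle(self, method, uri, origin):
        if method == "GET":
            if uri not in self.store:
                self.store[uri] = origin(method, uri)
            return self.store[uri]
        # Unsafe method: evict whatever we had for this URI and forward.
        self.store.pop(uri, None)
        return origin(method, uri)
```

The only per-method difference an intermediary could exploit (e.g. storing the PUT body as the new cached representation) is optional; invalidation is the common, required behavior.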
On Wed, Oct 7, 2009 at 2:16 PM, Stefan Tilkov <stefan.tilkov@...> wrote: > After a DELETE, I expect to get a 410 or 404 - I assume it deletes a > resource (or at least all of its representations). It does, but it might decide to immediately provide a new representation. Consider a Wiki page; if you deleted one with DELETE, would you really expect a 404 on the next GET? Or would you expect a "Click here to create this page" response? > If my intent is to change > it, I use a PUT. Right. When you DELETE your intent better be to delete. But the fact that the server might choose to provide a new representation after the DELETE completes is independent of your intent. > > You are right that an intermediary can't assume anything about a URI that > has been DELETEd, though. Which makes me wonder why it's there in the first > place. Historically, it would be interesting to know why some verbs made it > and others didn't. Not sure what you mean. Mark.
>>>>> "Mark" == Mark Baker <distobj@...> writes:
Mark> It does, but it might decide to immediately provide a new
Mark> representation. Consider a Wiki page; if you deleted one
Mark> with DELETE, would you really expect a 404 on the next GET?
Mark> Or would you expect a "Click here to create this page"
Mark> response?
I would expect a 404 of course, with the "click here to
create a new one" link in the body.
--
Cheers,
Berend de Boer
On Wed, Oct 7, 2009 at 3:51 PM, <berend@...> wrote: >>>>>> "Mark" == Mark Baker <distobj@...> writes: > > Mark> It does, but it might decide to immediately provide a new > Mark> representation. Consider a Wiki page; if you deleted one > Mark> with DELETE, would you really expect a 404 on the next GET? > Mark> Or would you expect a "Click here to create this page" > Mark> response? > > I would expect a 404 of course, and in the body the click here to > create a new one. LOL, I had written an explanation of that but removed it figuring it detracted from my point. But yes, that's a reasonable approach for a Wiki. So consider a "counter" page that returns the number of hits on it. If you DELETE that, it's reasonable for the server to next respond with a 200 and a "0" response. The point being that DELETE still means "delete", but what the server chooses to do after it successfully processes it is completely context dependent and so should be of no concern to anybody. Mark.
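Mark's counter example can be made concrete with a few lines of code. This is only an illustrative model (the class and method names are invented): DELETE genuinely discards the accumulated state, yet the server is free to answer the very next GET with a 200 and a fresh zero representation.

```python
class HitCounter:
    """A 'counter' resource whose representation is the number of hits.
    DELETE discards the accumulated count; the server then chooses to
    expose a fresh representation ("0") rather than a 404 - which is
    its prerogative, and independent of the client's intent to delete."""

    def __init__(self):
        self.count = 0

    def record_hit(self):
        self.count += 1

    def get(self):
        # Always 200: the server has decided this resource always has
        # a current representation, even right after a DELETE.
        return 200, str(self.count)

    def delete(self):
        self.count = 0  # the delete really does happen
        return 204, ""
```

The key point mirrored here: DELETE means "delete", and what the server serves afterwards is context-dependent and of no concern to the client.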
Dear All, I am new to this group, although I have been reading the discussions for a while. We are working on a project, trying to design a RESTful system comprising multiple resources that expose (subsets of) the same REST API. The rough description of the system is as follows: there are resources providing access to domain-specific data (with more or less complex structure), and several types of resources whose intended function is to process a given subset of the data (a dataset) and, as a result, to augment the data (e.g. add/update newly calculated properties). The processing is also encapsulated as a REST resource, making use of POST to apply its processing to the data subset, identified by a URI. Now, this all works fine if we consider all types of services residing on the same server (or multiple servers with a common shared data-access backend). The processing-type resources take a dataset URI as an argument in the POST or PUT, do their magic, and generate a URI for the new/modified dataset. However, what we would like to do is to have multiple independent REST systems, possibly under different administrative boundaries and domain names. The dataset resources can be on totally different servers/domains than the processing ones, for various reasons. Several open questions arise, such as where the URI resulting from the processing service should point if the processing service itself doesn't want to implement any of the data storage/access capabilities, and what about authentication and authorization within such a system (centralized, federated)? While this is a typical scenario for SOAP web services, it seems we are hitting the limits of the REST architecture here - having multiple independent services that we would like to access in a uniform way from a client. A quick search on REST service composition reveals this is indeed an unexplored issue; the vast majority of REST talks only touch the case of a single REST system.
Looking for your comments: is the above scenario unsuitable for the REST architecture? Are we doing something completely wrong? Does anybody have experience with distributed REST resources exposing the same API? Best regards, Nina Jeliazkova -- --------------------------------- Dr. Nina Jeliazkova Technical Manager IdeaConsult Ltd. 1000 Sofia, Bulgaria Tel: +359 886 802011 ICQ: 10705013 www: http://ambit.sourceforge.net --------------------------------- PGP Public Key http://cert.acad.bg/pgp-keys/keys/nina-nikolova-0xEEABA669.asc 8E99 8BAD D804 1A43 27B7 7F87 CF04 C7D1 EEAB A669 ---------------------------------------------------------------
> > After a DELETE, I expect to get a 410 or 404 - I assume it deletes a > resource (or at least all of its representations). If my intent is to > change it, I use a PUT. > > Out of curiosity, is it unreasonable to model it as a 303 or 307 (or, in general, one of the 3xx, as the case may be)? For example, take Gmail, where one deletes a mail and the server moves the item to the deleted "queue". The response is a link to "undelete" the item in the deleted "queue". Best, - Dilip
Felipe Gaucho wrote: > The response should include the URI of its resources, including " > itself" and the related resources.... > > So if your request creates a resource, the response should include > where the resource was created (URI) and also the address of its > related resources... The "navigation" of the clients should be driven > by this information... In advance a client is not supposed to know the > "endpoints"; the client should find the resource address in the > response itself.... > > It is HATEOAS, just google it to know more about it Maybe I was not quite clear. The Processor does not want to create a resource on the same server where it is running. It should create a resource elsewhere. The places where this is possible are multiple and can be dynamic. HATEOAS (including the URI in the response) is the final step - the first step is: how do we choose where to create the resource? One answer is that this scenario is not RESTful - but it is a real use case, thus REST has its limits. We've been developing these services for more than 6 months already. It is well known that composition of REST services is an underexplored problem [1],[2] - I hoped there was some experience already. [1] On Composing RESTful Services http://drops.dagstuhl.de/opus/volltexte/2009/2043/ [2] Towards Automated RESTful Web Service Composition, http://www2.computer.org/portal/web/csdl/doi/10.1109/ICWS.2009.111 (2009) Best regards, Nina > > On 08.10.2009, at 10:01, Nina Jeliazkova <nina@... > <mailto:nina@...>> wrote: > >> Felipe Gaucho wrote: >>> If all servers are reachable on the web, HATEOAS will fix your >>> problem... >> >> All services are on the web. Still, how should a resource >> "http://myserver/Processor" decide where to create the >> resource "Result", if there are several existing servers providing >> a REST API for creating resources of type Result, and "myserver" itself >> does not support such functionality?
Where should this be decided - in >> the client application, by myserver's configuration? What about >> the servers providing the "Result" API being dynamic and independent - >> how would the Processor service learn about their existence, in order >> to provide hyperlinks to them? >> >> HATEOAS is a very elegant architectural principle, but it will be hard >> to apply if a resource does not have knowledge of the potential >> hyperlinks applicable (like browsing the WWW before search engines existed). >> A specific answer in this context is very much appreciated. >> >> Best regards, >> Nina >> >>> >>> Otherwise, you should have entry servers and do use redirection in >>> some cases... >>> >>> Routing is not a limitation of REST, it is actually a basic feature >>> of the internet >>> >>> On 08.10.2009, at 07:51, Nina Jeliazkova <nina@... >>> <mailto:nina@...>> wrote: >>> >>>> >>>> >>>> Dear All, >>>> >>>> I am new to this group, although have been reading discussions for >>>> a while. >>>> >>>> We are working on a project, trying to design a RESTful system, >>>> comprising multiple resources, exposing (subsets) of the same >>>> REST API. >>>> The rough description of the system is as follows: there are resources >>>> providing access to domain specific data (with more or less complex >>>> structure) and several types of resources, with the intended function >>>> to process given subset of the data (dataset) and as a result to >>>> augment >>>> the data (e.g. add /update new calculated properties). The >>>> processing is >>>> also encapsulated as a REST resource, making use of POST to apply its >>>> processing to the data subset, identified with URI. >>>> >>>> Now, this all works fine if we consider all types of services residing >>>> on the same server (or multiple servers with a common shared data >>>> access >>>> backend).
The processing type resources take dataset URI as an argument >>>> in the POST or PUT, does its magic and generates URI for the >>>> new/modified dataset. However, what we would like to do is to have >>>> multiple independent REST systems, possibly under different >>>> administrative boundaries and domain names. The dataset resources >>>> can be >>>> on a totally different servers / domains than the processing ones, for >>>> various reasons. Several open questions arise, like where the URI, >>>> resulting from the processing service should point, if the processing >>>> service itself doesn't want to implement any of the data storage/access >>>> capabilities itself? What about authentication and authorization within >>>> such a system (centralized, federated )? >>>> >>>> While this is a typical scenario for SOAP web services, it seems we are >>>> hitting the limits of REST architecture here - having multiple >>>> independent services, that we would like to access in an uniform way >>>> from a client. Quick search on REST services composition reveal this is >>>> indeed an unexplored issue, the vast amount of the REST talks only >>>> touch >>>> the case of a single REST system. >>>> >>>> Looking for your comments; is the above a scenario unsuitable for REST >>>> architecture, are we doing something completely wrong, has anybody >>>> experience with distributed REST resources, exposing the same API? >>>> >>>> Best regards, >>>> Nina Jeliazkova >>>> >>>> -- >>>> --------------------------------- >>>> Dr. Nina Jeliazkova >>>> Technical Manager >>>> IdeaConsult Ltd. >>>> 1000 Sofia, Bulgaria >>>> Tel: +359 886 802011 >>>> ICQ: 10705013 >>>> www: http://ambit.sourceforge.net >>>> --------------------------------- >>>> PGP Public Key >>>> http://cert.acad.bg/pgp-keys/keys/nina-nikolova-0xEEABA669.asc >>>> 8E99 8BAD D804 1A43 27B7 7F87 CF04 C7D1 EEAB A669 >>>> ---------------------------------------------------------- >>>> >>>> >>
Mark Baker wrote: > On Wed, Oct 7, 2009 at 2:16 PM, Stefan Tilkov <stefan.tilkov@...> wrote: >> After a DELETE, I expect to get a 410 or 404 - I assume it deletes a >> resource (or at least all of its representations). > > It does, but it might decide to immediately provide a new > representation. Consider a Wiki page; if you deleted one with DELETE, > would you really expect a 404 on the next GET? Or would you expect a > "Click here to create this page" response? I would really expect a 404 or 410 (if the information that it had previously existed was available), whose entity was a "click here to create this page" response. 404 and 410 entities are representations of the fact that the resource doesn't exist. There's no reason why they shouldn't be more useful than stating that bare fact, and every reason why they should.
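The "useful 404" idea above can be sketched in a few lines: the status line says the resource doesn't exist, while the entity body still carries the "create this page" link. The handler shape and the edit URI pattern below are hypothetical, invented for illustration.

```python
def wiki_get(pages, title):
    """GET handler for a wiki page.

    A deleted or never-created page yields a 404, but the 404 entity
    itself is a useful representation of the resource's absence: it
    links to the form for creating the page."""
    if title in pages:
        return 200, pages[title]
    body = (
        '<p>No such page.</p>'
        '<a rel="create-form" href="/wiki/%s;edit">'
        'Click here to create this page</a>' % title
    )
    return 404, body
```

Intermediaries and generic clients see the honest status code; humans (and link-aware agents) still get the affordance to create the page.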
Nina Jeliazkova wrote: > However, what we would like to do is to have > multiple independent REST systems, possibly under different > administrative boundaries and domain names. The dataset resources can be > on totally different servers / domains than the processing ones, for > various reasons. Several open questions arise, like where the URI, > resulting from the processing service should point, if the processing > service itself doesn't want to implement any of the data storage/access > capabilities itself? What's the issue? Server 1 receives a request to <http://server1/someResource>, does something that affects <http://server2/someResource>, and makes use of that URI in its response. What's the difficulty? > What about authentication and authorization within > such a system (centralized, federated )? Yeah, one of those. REST gives you a means to transfer representations, which you can use to transfer a representation of authentication tokens between clients and servers. The details of what happens away from the REST interface aren't a matter of REST, but REST allows for the various "I am me" or "I know that secret thing" statements involved. > While this is a typical scenario for SOAP web services, it seems we are > hitting the limits of REST architecture here - having multiple > independent services, that we would like to access in a uniform way > from a client. You've just described the world-wide web. As much as SOAP people seem to keep insisting otherwise, the www is real and it works.
Jon Hanna wrote: > Nina Jeliazkova wrote: > >> However, what we would like to do is to have >> multiple independent REST systems, possibly under different >> administrative boundaries and domain names. The dataset resources can be >> on totally different servers / domains than the processing ones, for >> various reasons. Several open questions arise, like where the URI, >> resulting from the processing service should point, if the processing >> service itself doesn't want to implement any of the data storage/access >> capabilities itself? >> > > What's the issue? > > Server 1 receives a request to <http://server1/someResource>, does > something that affects <http://server2/someResource> and makes use of > that URI in its response. > > What's the difficulty? > How does it choose that it should affect server2/someresource and not server3/someresource? By configuration? What if for the next round server4/someresource appears and can be used as well - how will server1 learn about it? > > What about authentication and authorization within > > such a system (centralized, federated )? > > Yeah, one of those. REST gives you a means to transfer representations, > which you can use to transfer a representation of authentication > tokens between clients and servers. The details of what happens away > from the REST interface aren't a matter of REST, but REST allows for the > various "I am me" or "I know that secret thing" statements involved. > > What would be the recommended solution for federated AA (authentication and authorization) for REST services? Can you please point to an example of an existing REST system making use of it? >> While this is a typical scenario for SOAP web services, it seems we are >> hitting the limits of REST architecture here - having multiple >> independent services, that we would like to access in a uniform way >> from a client. >> > > You've just described the world-wide web. As much as SOAP people seem to > keep insisting otherwise, the www is real and it works.
The WWW works mostly because it is read-only for humans, and the human mind is used to select the next click. The existence of links is defined either by the webmaster or by search engines. How does this translate to machines? Don't get me wrong, we selected REST over SOAP for our web services, having already some experience with SOAP. The point is that we are now struggling with a use case which has a straightforward solution in SOAP, but we are already deep into implementing services in a REST way. None of the REST books/talks explain how one coordinates distributed read/write REST services, in contrast to the vast amount of orchestration, workflows, etc. for SOAP. In order to provide hypermedia links in the next response, one would need to be aware of them at least. This is easy if the entire REST system is on one server and it constructs the URIs by some templates. How does a resource learn about links outside of its own domain? Before search engines, people used to send each other bookmarks and paste them manually into the browser - is this the recommended approach now? Best regards, Nina
Nina Jeliazkova wrote: > How does it choose it should affect server2/someresource and not > server3/someresource ? By configuration ? What if for the next round > server4/someresource appears and can be used as well, how will server1 > learn about it? Either because it's told about server2/someresource, or it already knows, or it knows where to find out, or it's told where to find out. I don't know which would be more appropriate for what you are doing, but right now I can't think of any way for any server of any sort to know what to do here beyond those four possibilities. REST can handle the telling it or telling it where to find out. If it knows, or knows where to find out, that's nothing to do with the interaction that is being done with another server through REST (though in the "knows where to find out" scenario, that could involve another REST interaction). > What would be the recommended solution for the federated AA for REST > services? Can you please point to an example of an existing REST > system, making use of it? I understand OpenID is federated, but won't say any more beyond admitting to much ignorance here. > WWW works mostly because it is read-only for humans and human mind is > used to select the next click. The existence of links is defined > either by the webmaster, or by search engines. How this translates to > machines ? This isn't true. The WWW is not read-only for humans, as we can do all manner of things that affect the state of the server (wikis are the most blatant example, but just about anything that receives a POST, acts on it, and doesn't throw away the result is an example). And we know which links to select next because of the information we are given about those links. It's precisely the same for machines. They know what a URI does based on what they are told in the representation in which they found it. > In order to provide hypermedia links in the next response, one would > need to be aware of them at least.
This is easy if the entire REST > system is on one server and it constructs the URI by some templates. > How a resource learns about links outside of their other domain? Before > search engines people used to send each other bookmarks and paste it > manually into the browser - is this the recommended approach now? From your mail I'm envisioning something like: 1. Server A is responding to a request. 2. This causes state in Server B to change, in a manner the client will be interested in. Either: 3a. Server A is telling Server B what change to make and "where" to make it. 3b. Server A is telling Server B what change to make and Server B is deciding "where" to make it. In which case either Server A knows the URI of the affected resource to tell the client about, or Server B can tell Server A after it's done the change, and Server A can pass this on to the client. Am I getting something very wrong?
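The "knows where to find out" option Jon mentions can itself be a REST interaction: the Processor GETs a service document from some well-known URI and picks a suitable Result endpoint from the links it advertises. The JSON document format and function below are entirely hypothetical, a sketch of the idea rather than any standard:

```python
import json

def pick_result_endpoint(service_doc_json, media_type):
    """Given a (hypothetical) service document - a JSON body retrieved
    from a well-known registry URI - return the first advertised
    endpoint whose link type matches the media type we need."""
    doc = json.loads(service_doc_json)
    for link in doc.get("links", []):
        if link.get("type") == media_type:
            return link["href"]
    return None  # no server currently offers this capability
```

New Result servers then become available to the Processor by appearing in the registry's representation, with no reconfiguration of the Processor itself, which keeps the discovery step hypermedia-driven.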
Jon Hanna wrote: > Nina Jeliazkova wrote: > >> How does it choose it should affect server2/someresource and not >> server3/someresource ? By configuration ? What if for the next round >> server4/someresource appears and can be used as well, how will server1 >> learn about it? >> > > Either because it's told about server2/someresource or it already knows, > or it knows where to find out, or it's told were to find out. > > I don't know which would be more appropriate for what you are doing, but > right now can't think of any way for any server of any sort to know what > to do here beyond those four possibilities. REST can handle the telling > it or telling it where to find out. If it knows or knows where to find > out that's nothing to do with the interaction that is being done with > another server through REST (though in the "knows where to find out" > scenario, that could involve another REST interaction). > > So this is a configuration issue, or introducing a custom solution for registration and availability of resources. It's a pity there is no search engine looking for resources of specific media type to help us with HATEOAS. >> What would be the recommended solution for the federated AA for REST >> services? Can you please point to an example of an existing REST >> system, making use of it? >> > > I understand OpenID is federated, but won't say any more beyond > admitting to much ignorance here. > > OpenID is for authentication, authorization will need additional (probably custom) solution. I am really interested if there exists a system based on distributed REST read/write services, exposing the same API, besides the human readable web. >> WWW works mostly because it is read-only for humans and human mind is >> used to select the next click. The existence of links is defined >> either by the webmaster, or by search engines. How this translates to >> machines ? >> > > This isn't true. 
The WWW is not read-only for humans, as we can do all > manner of things that affect the state of the server (Wiki's are the > most blatant example, but just about anything that receives a POST acts > on it, and doesn't throw away the result is an example). And we know > which links to select next because of the information we are given about > those links. > > It's precisely the same for machines. They know what a URI does based on > what they are told in the representation in which they found it. > > Do you mean "told by the _rel_ tag in the link" - that's a bit vaguely defined IMHO. >> In order to provide hypermedia links in the next response, one would >> need to be aware of them at least. This is easy if the entire REST >> system is on one server and it constructs the URI by some templates. >> How a resource learns about links outside of their other domain? Before >> search engines people used to send each other bookmarks and paste it >> manually into the browser - is this the recommended approach now? >> > > From your mail I'm envisioning something like: > > 1. Server A is responding to a request. > 2. This cases state in Server B to change, in a manner the client will > be interested. > Either: > 3a. Server A is telling Server B what change to make and "where" to make it. > 3b. Server A is telling Server B what change to make and Server B is > deciding "where" to make it. > > In which case either Server A knows the URI of the affected resource to > tell the client about, or Server B can tell Server A after its done the > change, and Server B can pass this on to the client. > > Am I getting something very wrong? > > > Both are fine, provided Server A knows about Server B by some external means. An interesting question is how 1) or 3b) are different from the RPC approach - Server A doesn't manipulate directly any resource, it just receives a request and conveys some information to Server B - exactly in a way a SOAP service will behave. 
Why do we call this RESTful? Best regards, Nina
Nina Jeliazkova wrote: > So it's a configuration issue, or inventing a custom solution for resource registration and availability. It's a pity there is no search engine looking for resources of specific media type to help us with HATEOAS. Maybe there is. We're just talking about a design style here. There's nothing in OO to directly deal with making things appear on a screen, but there are plenty of OO implementations. Likewise, someone may very well have already invented this wheel in a RESTful manner unbeknownst to me. Personally I've built: Systems where one server would talk to several others (generally through means spec'd by third parties and a mixture of SOAP, XML-over-HTTP and RESTful XML-over-HTTP in decreasing order of how long they took me to integrate) and dealt RESTfully with clients who didn't care what those servers were or even that they existed. Systems where one server would query another for information based on entities received by the client referring it to resources on that second server. Systems where one server knew about users and of the second and could authenticate the identity of those users for the second to then treat accordingly. All of which had custom constraints and custom specs, and so these cases of multiple-server interaction may not give any direct parallels to what you are doing. (And of course, also plenty of cases where there were multiple servers in a farm, but since they are conceptually a single server to a client, that doesn't really count). Maybe other people here have looked at the issue in more general terms, which would be nice. > Do you mean "told by the _rel_ element" in the link? IMHO that's rather vaguely defined. I mean told by whatever it is they are receiving.
HTML with rel could be the perfect solution, or custom XML, or an already defined format, or JSON or a document written in the style of Jane Austen (assuming you've secretly broken several hard AI problems and have a client that can process a document written in the style of Jane Austen ;) All of these approaches are RESTful as long as the document contains the relevant URI references, and all are useful as long as the document can be processed by the client. > Both are fine, given service A is told about Server B by some external means. An interesting point is how 1) or 3b) are different from the RPC approach - in both cases server A is not affecting any resource directly, it just receives a request and conveys some information to Server B for further processing. How does it fit with the RESTful design? Well, there is no reason why a resource on Server B can't also be a resource on Server A. The resource <http://example.net/A> could be the resource <http://example.com/B> or, if you don't mind showing your workings (which might be either more flexible or more brittle depending on other factors) we might use <http://example.net/example.com/B>.
A slightly more sensible example is a resource at <http://example.net/someHandler> that receives a POSTed entity and does some sort of complicated operation that affects a resource at <http://example.com/someResource> which is what the client is then interested in. Here the client doesn't care about the details of what is done, it POSTs what it POSTs, some black-box magic that it is none of its concern happens, and then a 303 See Other directs it to GET <http://example.com/someResource>. A myriad other models of interaction could be thought up, but their relevance to what you are talking about, I can't really guess at.
Jon Hanna wrote: > NinaJeliazkova wrote: > >> So it's a configuration issue, or inventing a custom solution for resource registration and availability. It's a pity there is no search engine looking for resources of specific media type to help us with HATEOAS. >> > > Maybe there is. > > We're just talking about a design style here. There's nothing in OO to > directly deal with making things appear on a screen, but there are > plenty of OO implementations. Likewise, someone may very well have > already invented this wheel in a RESTful manner unbeknownst to me. > > Personally I've built: > > Systems where one server would talk to several others (generally through > means spec'd by third parties and a mixture of SOAP, XML-over-HTTP and > RESTful XML-over-HTTP in decreasing order of how long they took me to > integrate) and dealt RESTfully with clients who didn't care what those > servers where or even that they existed. > > Systems where one server would query another for information based on > entities received by the client referring it to resources on that second > server. > > Systems where one server knew about users and of the second and could > authenticate the identity of those users for the second to then treat > accordingly. > > All of which had custom constraints and custom specs. and so while cases > of multiple server interaction may not give any direct parallels to what > you are doing. (And of course, also plenty of cases where there were > multiple servers in a farm, but since they are conceptually a single > server to a client, that doesn't really count). > > Thank you for the interesting discussion. 
All these seem to fall in the category where the client talks only to a single server, regardless of what complexity that server hides; while my original point was a client talking to multiple independent servers, being able not only to retrieve data from multiple servers (as in mashups), but also to do some processing with the help of servers that offer processing capabilities. > Maybe other people here have looked at the issue in more general terms, > which would be nice. > > >> Do you mean "told by the _rel_ element" in the link? IMHO that's rather vaguely defined. >> > > I mean told by whatever it is they are receiving. HTML with rel could be > the perfect solution, or custom XML, or an already defined format, or > JSON or a document written in the style of Jane Austen (assuming you've > secretly broken several hard AI problems and have a client that can > process a document written in the style of Jane Austen ;) > > All of these approaches are RESTful as long as the document contains the > relevant URI references and all are useful as long as the document can > be processed by the client. > > >> Both are fine, given service A is told about Server B by some external means. An interesting point is how 1) or 3b) are different from the RPC approach - in both cases server A is not affecting any resource directly, it just receives a request and conveys some information to Server B for further processing. How does it fit with the RESTful design? >> > > Well, there is no reason why a resource on Server B need not be a > resource on Server A. > This was the original point - Server A and Server B are different, one can only do some magic, given the data, and the other can only retrieve/store data. Best regards, Nina > The resource <http://example.net/A> could be the resource > <http://example.com/B> or, if you don't mind showing your workings > (which might be either more flexible or more brittle depending on other > factors) we might use <http://example.net/example.com/B>. 
> > The simplest such implementation of this would be to work out the URI > the other server uses for the same resource and pass on the request and > return the response. This would also be a completely pointless > implementation (one could just go to the second server and forget about > this), but the interaction between client and example.net and between > example.net (acting as a client) and example.com can both be completely > RESTful. > > Obviously, you have some reason for doing something more involved than > just this, or else why bother with talking to the first server at all, > but it demonstrates the point. > > A slightly more sensible example is a resource at > <http://example.net/someHandler> that receives a POSTed entity and does > some sort of complicated operation that affects a resource at > <http://example.com/someResource> which is what the client is then > interested in. Here the client doesn't care about the details of what is > done, it POSTs what it POSTs, some black-box magic that is none of > its concern happens, and then a 303 See Other directs it to GET > <http://example.com/someResource>. > > A myriad of other models of interaction could be thought up, but I can't > really guess at their relevance to what you are talking about.
Hello Nina, > All these seem to fall in the category where the client talks only > to a > single server, regardless of what complexity that server hides; > while my > original point was a client talking to multiple independent servers, > being able not only to retrieve data from multiple servers (as in > mashups), but do some processing with the help of servers that offer > processing capabilities. Using hypermedia, a service can generate links that do not resolve to its own URI space but are understood by client automata. For example (shameless book plug follows, sorry!), in our forthcoming book, Savas, Ian, and I show how we could outsource the payment of coffee orders to a third party using a hypermedia-driven protocol. The automata client "writes" data to at least two services in this use-case. Does that qualify as service composition? If so I can elaborate, if not I'm stuck. Jim
Hello Jim,
Jim Webber wrote:
> Hello Nina,
>
>
>> All these seem to fall in the category where the client talks only
>> to a
>> single server, regardless of what complexity that server hides;
>> while my
>> original point was a client talking to multiple independent servers,
>> being able not only to retrieve data from multiple servers (as in
>> mashups), but do some processing with the help of servers that offer
>> processing capabilities.
>>
>
> Using hypermedia, a service can generate links which are understood by
> client automata which do not resolve to its own URI space. For example
> (shameless book plug follows, sorry!), in our forthcoming book, Savas,
> Ian, and I show how we could outsource the payment of coffee orders to
> a third party using a hypermedia-driven protocol.
>
> The automata client "writes" data to at least two services in this use-
> case. Does that qualify as service composition? If so I can elaborate,
> if not I'm stuck.
>
Yes, please elaborate. What I'm interested in is how clients and REST
services work in the following setting, rather than whether the correct
name is service composition.
    * Several REST services, exposing (subsets of) the same API
* There are two types of resources in our use case - one (A) that
can read/write data and others (B) that can only process given
data in some magic way and generate more data on the fly; or try
      to create a resource of type (A) somewhere else, not necessarily on
the same server where (B) is running;
* Client(s) (and perhaps services) being aware of the other services
* Client(s) being able to talk to multiple REST services
Summarizing the discussion up to this point:
    * The clients/services can be made aware of the existence of other REST
services by default configuration or some custom solution for
registration of the services (e.g. REST UDDI ;)
    * We don't really care whether the magic resource (B) behaves in a RESTful
      way or not when processing the request and generating data
      without assigning a URI to it;
* OpenID (eventually) can be used for client authentication,
provided the client is a browser. If a client is not a browser,
or if a service needs to be authenticated to talk to another
service, it's an open question;
    * Authorization of users authenticated by OpenID for resource
      access is an open question, also in the case when one service
      needs to talk to another one on behalf of the client.
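The setting summarized above (a "storage" service A, a "processing" service B, a client following a link from B to a result that may live on a third server) can be sketched with injected transport functions so no network is needed. All URIs, the `process_via_services` helper, and the stub transports are hypothetical, invented for illustration.

```python
# Sketch of the multi-service flow: read from service A, submit to the
# processing service B, then follow the link B returns to the created
# resource, wherever it was placed.

def process_via_services(get, post, data_uri, processor_uri):
    data = get(data_uri)                    # read from service A
    result_uri = post(processor_uri, data)  # B answers with a link
    return get(result_uri)                  # follow it, on any server

# Stub transports standing in for independent servers.
store = {"http://a.example/data/1": "raw-data"}

def fake_get(uri):
    return store[uri]

def fake_post(uri, body):
    # B does its "magic" and creates a resource on a third server.
    created = "http://c.example/results/42"
    store[created] = body.upper()
    return created

print(process_via_services(fake_get, fake_post,
                           "http://a.example/data/1",
                           "http://b.example/magic"))  # RAW-DATA
```

Because the client only follows the URI it is handed back, it never needs to know in advance which server hosts the result.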
Best regards,
Nina
> Jim
On Oct 8, 2009, at 1:07 AM, Mark Baker wrote: > So consider a "counter" page that returns the number of hits on it. > If you DELETE that, it's reasonable for the server to next respond > with a 200 and a "0" response. Yes, that would be a reasonable response. But why would it be reasonable for the client to use a DELETE here? Why would a PUT not be a better option? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Oct 8, 2009, at 12:34 AM, Mark Baker wrote: > On Wed, Oct 7, 2009 at 2:16 PM, Stefan Tilkov > <stefan.tilkov@...> wrote: > > Historically, it would be interesting to know why some verbs made it > > and others didn't. > > Not sure what you mean. > I meant it would be interesting to know which verbs made it into the official HTTP standard and why (e.g. I know that PATCH was considered, but didn't make it). Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Jon Hanna wrote: > > While this is a typical scenario for SOAP web services, it seems we are > > hitting the limits of REST architecture here - having multiple > > independent services, that we would like to access in a uniform way > > from a client. > > You've just described the world-wide-web. As much as SOAP people seem to > keep insisting otherwise, the www is real and it works. > You (and others) really have to stop using this "the www is real and it works" to answer every difficulty that application developers come across when trying to apply a RESTful architecture to their problem domain. It's just totally unproductive. The fact of the matter is the vast majority of the Web is a one-to-one relationship between client and server. In other words, the Web is a simple system. As a result you have a lot of simple answers to simple problems. These integration problems are real and I think REST can solve many of them in a better way. And stop trashing the SOAP stacks. SOAP specs define real requirements and distributed computing problems. Sure, the design and implementation of the SOAP stacks suck, but the problems they try to address are real. I believe REST can solve these problems in a better way, but the Web is not a superset of these problems. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
Bill, You (and others) really have to stop using this "the www is real and it > works" to answer every difficulty that application developers come > across when trying to apply a RESTful architecture to their problem > domain. It's just totally unproductive. The fact of the matter is the > vast majority of the Web is a one-to-one relationship between client and > server. In other words, the Web is a simple system. As a result you > have a lot of simple answers to simple problems. These integration > problems are real and I think REST can solve many of them in a better way. > Almost all of the real web is based on a one-to-many relationship. Silos rarely exist on the web. We must all be glad to be able to use the web without needing "global link registries aka RESTful UDDI repositories". The integration problem is real, but the server boundary does not change the problem very much. Does the client care if the resource it is accessing is on hateoas.com or ulser.com as long as it knows the semantics of the link, the methods to use, security requirements, media types and so on? The key is communicating and implementing the semantics. Unfortunately, things like "RESTful composition" only help spread the confusion. Subbu
On Thu, Oct 8, 2009 at 6:55 AM, Nina Jeliazkova <nina@...> wrote: > OpenID is for authentication, authorization will need additional (probably custom) solution. Check out OAuth <http://en.wikipedia.org/wiki/Oauth>, OpenID's RESTful authorization partner. > I am really interested if there exists a system based on distributed REST read/write services, exposing the same API, besides the human readable web. Check out Google's Data Protocol <http://code.google.com/apis/gdata/>. Right now you could deploy a 3rd party (ie non-Google, not on a Google server) "processing-type resource" (as you call it) to 1. get data from one Google Data source/sink (say Google Spreadsheet 1) 2. process it with the processing-type resource 3. put the resulting data into a different Google Data source/sink (say Google Spreadsheet 2) All this can be done with federated authorization via Google's realization of OAuth <http://sites.google.com/site/oauthgoog/>. You could even get the data from a Google source and post the results to a Zoho <http://writer.zoho.com/public/help/zohoapi/fullpage> sink (though I don't think Zoho supports OAuth...yet). So the machine-navigable Web (using REST, ROA, WOA architectural constraints) is not as far along as the human-navigable one, but it's slowly getting there. I'm pretty sure it will get there before WS-* does. -- Nick
Subbu Allamaraju wrote: > Bill, > > You (and others) really have to stop using this "the www is real and it > works" to answer every difficulty that application developers come > across when trying to apply a RESTful architecture to their problem > domain. It's just totally unproductive. The fact of the matter is the > vast majority of the Web is a one-to-one relationship between client and > server. In other words, the Web is a simple system. As a result you > have a lot of simple answers to simple problems. These integration > problems are real and I think REST can solve many of them in a > better way. > > > Almost all of the real web is based on a one-to-many relationship. For reading data sure, certainly not for coordinating input and output. -- Bill Burke JBoss, a division of Red Hat http://bill.burkecentral.com
> > > The integration problem is real, but the server boundary does not > change the problem very much. Does the client care if the resource it > is accessing is on hateoas.com <http://hateoas.com> or ulser.com > <http://ulser.com> as long as it knows the semantics of the link, the > methods to use, security requirements, media types and so on? At least transparently accessing resources outside of the server boundary, under different domains, requires a /slightly/ more complicated authentication/authorization scheme than just a single server, unless all resources are considered unprotected. Best regards, Nina
Nope. You mean, I can't serve HTML from one server to a client that would post to another? Most real-world large scale web sites do use tens if not hundreds of servers to navigate users between reads and writes seamlessly. In most cases, these servers cross business unit boundaries. Subbu On Thu, Oct 8, 2009 at 4:24 PM, Bill Burke <bburke@...> wrote: > > > Subbu Allamaraju wrote: > >> Bill, >> >> You (and others) really have to stop using this "the www is real and it >> works" to answer every difficulty that application developers come >> across when trying to apply a RESTful architecture to their problem >> domain. Its just totally unproductive. The fact of the matter is the >> vast majority of the Web is a one-to-one relationship between client >> and >> server. In other words, the Web is a simple system. As a result you >> have a lot of simple answers to simple problems. These integration >> problems are real and I think REST can solve many of them in a >> better way. >> >> >> Almost all of the real web is based on a one-to-many relationship.' >> > > For reading data sure, certainly not for coordinating input and output. > > > > -- > Bill Burke > JBoss, a division of Red Hat > http://bill.burkecentral.com >
Learning how to authenticate is no different from learning about the media types and formats. I don't mean to undermine the difficulty here, but the problem does not change just because a URI belongs to a server different from the one that served the representation. Subbu On Thu, Oct 8, 2009 at 4:24 PM, Nina Jeliazkova <nina@...> wrote: > > > > > > The integration problem is real, but the server boundary does not change > the problem very much. Does the client care if the resource it is accessing > is on hateoas.com or ulser.com as long as it knows the semantics of the link, > the methods to use, security requirements, media types and so on? > > At least transparently accessing resources outside of the server boundary under > different domains requires a *slightly* more complicated > authentication/authorization scheme than just a single server, unless all > resources are considered unprotected. > > Best regards, > Nina > > >
Subbu Allamaraju wrote: > Learning how to authenticate is no different from learning about the > media types and formats. > > I don't mean to undermine the difficulty here, but the problem does > not change just because a URI belongs to a server different from the > one that served the representation. > > Subbu The difficulty is not in the authentication itself, but with federated authentication/authorization, encompassing multiple servers. Otherwise, it is pretty easy to protect each resource with any available authentication scheme and ask the client to provide credentials on each POST. Users will surely not be happy with such an approach. Besides, REST does not encourage cookies and sessions, meaning credentials or something derived from credentials should be sent on _every_ request. Best regards, Nina > > On Thu, Oct 8, 2009 at 4:24 PM, Nina Jeliazkova <nina@... > <mailto:nina@...>> wrote: > > > > >> >> >> The integration problem is real, but the server boundary does not >> change the problem very much. Does the client care if the >> resource it is accessing is on hateoas.com <http://hateoas.com> >> or ulser.com <http://ulser.com> as long as it knows the semantics of >> the link, the methods to use, security requirements, media types >> and so on? > At least transparently accessing resources outside of the server > boundary under different domains requires a /slightly/ more > complicated authentication/authorization scheme than just a > single server, unless all resources are considered unprotected. > > Best regards, > Nina > > > > >
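Nina's point that, absent sessions, credentials (or something derived from them) must accompany every request can be illustrated with a stateless authentication header. This sketch builds an HTTP Basic Authorization header by hand; the user name and password are made up for the example, and real deployments would of course use TLS and likely a stronger scheme.

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header sent with every single request;
    the server keeps no session state between requests."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Each request the client makes carries this header anew.
print(basic_auth_header("nina", "secret"))
# {'Authorization': 'Basic bmluYTpzZWNyZXQ='}
```

This is the trade-off being discussed: statelessness is preserved precisely because nothing about the client's identity survives on the server between requests.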
Nina Jeliazkova wrote: > Besides, REST does not encourage cookies and sessions, meaning > credentials or something derived from credentials should be sent > on _every_ request. I'm curious about this assertion, at least as it applies to cookies. Cookies represent pieces of application state that are stored with the client, and they get sent on every request (where domains apply). Why isn't that RESTful? (Noting that some cookies may contain more interesting information than JSESSIONID, for example. I agree with and understand the general assessment that server-side session storage is not RESTful.) Jon ........ Jon Moore Comcast Interactive Media -----Original Message----- From: rest-discuss@yahoogroups.com on behalf of Nina Jeliazkova Sent: Thu 10/8/2009 10:53 AM To: Subbu Allamaraju Cc: jeliazkova.nina@gmail.com; Rest List Subject: Re: [rest-discuss] composition of REST services
Bill Burke wrote: > You (and others) really have to stop using this "the www is real and it > works" to answer every difficulty that application developers come > across when trying to apply a RESTful architecture to their problem > domain. No we don't. That the www is real and it works shows that it is not an insurmountable problem to have different servers being used by the same client. This is clearly not *the* problem, or the www wouldn't work. It's not answering the difficulty, but it is suggesting that the focus of the difficulty may be either: 1. A difficulty also experienced by the www, so maybe how the www works can give an insight. OR 2. A different difficulty, so maybe we can narrow down where the difficulty really lies. OR 3. It really is a problem that REST and HTTP can't solve at all. I'm still not sure what Nina's concrete problem is, but I am sure that REST can happily have clients operate across different servers. Maybe Nina's problem falls into case 1 above and comparison with the wider web can bring insight, maybe it falls into case 2 above and the real source of the difficulty isn't the fact that multiple servers are involved but elsewhere (in the communication between them perhaps), and maybe HTTP is just the wrong way to go. > And stop trashing the SOAP stacks. SOAP specs define real requirements > and distributed computing problems. Sure, the design and implementation > of the SOAP stacks suck, but the problems they try to address are real. I said nothing about the SOAP stacks. The SOAP people seem to think that HTTP had gotten some things right too.
Nina Jeliazkova wrote: > The difficulty is not in the authentication itself, but with the > federated authentication/authorization, encompassing multiple servers. > Otherwise, it is pretty easy to protect each resource with any > available authentication scheme and ask the client to provide > credentials on each POST. Users will surely not be happy with > such an approach. A last shout from me on this, because when it comes to authentication I will gladly admit to having much ignorance generally, and all the more so in the context of this list, where there are plenty who know plenty. However, just a thought. Perhaps passing information in the "opaque" portion of digest authentication headers would allow the server that can vouch for the identity of the client in question to be identified and queried?
On Oct 8, 2009, at 5:09 PM, Moore, Jonathan (CIM) wrote: > Nina Jeliazkova wrote: >> Besides, REST does not encourage cookies and sessions, meaning >> credentials or something derived from credentials should be sent >> on _every_ request. > > I'm curious about this assertion, at least as it applies to cookies. > Cookies represent pieces of application state that are stored with > the client, and they get sent on every request (where domains > apply). Why isn't that RESTful? If used properly, cookies do not violate statelessness. I think, though, that they violate visibility, because the meaning of the cookie depends on a non-standardized contract between client and server. For example, when you use cookies for sending credentials or authentication tokens, caches have no way of knowing that they must not apply public caching. Jan > > (Noting that some cookies may contain more interesting information > than JSESSIONID, for example. I agree with and understand the general > assessment that server-side session storage is not RESTful). > > Jon > ........ > Jon Moore > Comcast Interactive Media > > > > -----Original Message----- > From: rest-discuss@yahoogroups.com on behalf of Nina Jeliazkova > Sent: Thu 10/8/2009 10:53 AM > To: Subbu Allamaraju > Cc: jeliazkova.nina@...; Rest List > Subject: Re: [rest-discuss] composition of REST services > > Subbu Allamaraju wrote: >> Learning how to authenticate is no different from learning about the >> media types and formats. >> >> I don't mean to undermine the difficulty here, but the problem does >> not change just because a URI belongs to a server different from the >> one that served the representation. >> >> Subbu > The difficulty is not in the authentication itself, but with the > federated authentication/authorization, encompassing multiple servers. > Otherwise, it is pretty easy to protect each resource with any > available authentication scheme and ask the client to provide > credentials on each POST.
It is quite sure users will not be happy > with > such approach. -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Moore, Jonathan (CIM) wrote:
> Nina Jeliazkova wrote:
>
>> Besides, REST does not encourage cookies and sessions, meaning
>> credentials or something derived from credentials should be sent
>> on _every_ request.
>>
>
> I'm curious about this assertion, at least as it applies to cookies. Cookies represent pieces of application state that are stored with the client, and they get sent on every request (where domains apply). Why isn't that RESTful?
>
Well, this is what REST gurus [1] are telling us:
/*"The Trouble with Cookies*
A web service that sends HTTP cookies violates the principle of
statelessness. In fact, it usually violates statelessness twice. It
moves application state onto the server even though it belongs on
the client, and it stops clients from being in charge of their own
application state."
...
OK, so cookies shouldn’t contain session IDs: that’s just an excuse
to keep application state on the server. What about cookies that
really do contain application state? What if you serialize the
actual session hash and send it as a cookie, instead of just sending
a reference to a hash on the server?
This can be RESTful, but it’s usually not. The cookie standard says
that the client can get rid of a cookie when it expires, or when the
client terminates. This is a pretty big restriction on the client’s
control over application state. If you make 10 web requests and
suddenly the server sends you a cookie, you have to start sending
this cookie with your future requests. You can’t make those 10
precookie requests unless you quit and start over. To use a web
    browser analogy, your “Back” button is broken. You can’t put the
application in any of the states it was in before you got the cookie.
...
The only RESTful use of cookies is one where the client is in charge
of the cookie value. The server can suggest values for a cookie
using the Set-Cookie header, just like it can suggest links the
client might want to follow, but the client chooses what cookie to
send just as it chooses what links to follow. In some browser-based
applications, cookies are created by the client and never sent to
the server. The cookie is just a convenient container for
application state, which makes its way to the server in
representations and URIs. That’s a very RESTful use of cookies."
/
I hope the authors don't mind the long citation.
Best regards,
Nina
[1] Leonard Richardson and Sam Ruby, RESTful Web Services, O'Reilly
2007, p.252
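The "client in charge of the cookie value" usage the citation describes can be sketched as the client serializing its own application state into the cookie it chooses to send, rather than echoing an opaque server session ID. The state keys and the `app_state` cookie name are invented for illustration, and cookie-value quoting rules are ignored for brevity.

```python
import json

def state_to_cookie(state):
    """Serialize application state into a cookie value the CLIENT owns;
    it can change, replay, or drop this value at will ("Back" works)."""
    return "app_state=" + json.dumps(state, separators=(",", ":"))

def cookie_to_state(cookie):
    """Recover the application state from the cookie value."""
    return json.loads(cookie.split("=", 1)[1])

state = {"page": 3, "sort": "date"}
cookie = state_to_cookie(state)
assert cookie_to_state(cookie) == state  # round-trips; no server session
print(cookie)  # app_state={"page":3,"sort":"date"}
```

Contrast this with a `JSESSIONID`-style cookie, where the value is only a key into state held on the server, which is the usage the book argues is not RESTful.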
> (Noting that some cookies may contain more interesting information than JSESSIONID, for example. I agree with and understand the general assessment that server-side session storage is not RESTful).
>
> Jon
> ........
> Jon Moore
> Comcast Interactive Media
Great, thanks for reminding me of that passage. I think this matches my intuition, which is that there are RESTful uses of cookies, but that to actually get there you have to do more than what standard cookie usage looks like.
Jon
........
Jon Moore
Comcast Interactive Media
-----Original Message-----
From: Nina Jeliazkova [mailto:nina@...]
Sent: Thu 10/8/2009 11:27 AM
To: Moore, Jonathan (CIM)
Cc: jeliazkova.nina@...; Rest List
Subject: Re: [rest-discuss] RESTful Cookies?
Moore, Jonathan (CIM) wrote:
> Nina Jeliazkova wrote:
>
>> Besides, REST does not encourages cookies and sessions, meaning
>> credentials or something derived from credentials should be sent
>> on_every_ request.
>>
>
> I'm curious about this assertion, at least as it applies to cookies. Cookies represent pieces of application state that are stored with the client, and they get sent on every request (where domains apply). Why isn't that RESTful?
>
Well, this is what REST gurus [1] are telling us:
/*"The Trouble with Cookies*
A web service that sends HTTP cookies violates the principle of
statelessness. In fact, it usually violates statelessness twice. It
moves application state onto the server even though it belongs on
the client, and it stops clients from being in charge of their own
application state."
...
OK, so cookies shouldn't contain session IDs: that's just an excuse
to keep application state on the server. What about cookies that
really do contain application state? What if you serialize the
actual session hash and send it as a cookie, instead of just sending
a reference to a hash on the server?
This can be RESTful, but it's usually not. The cookie standard says
that the client can get rid of a cookie when it expires, or when the
client terminates. This is a pretty big restriction on the client's
control over application state. If you make 10 web requests and
suddenly the server sends you a cookie, you have to start sending
this cookie with your future requests. You can't make those 10
precookie requests unless you quit and start over. To use a web
browser analogy, your "Back" button is broken. You can't put the
application in any of the states it was in before you got the cookie.
...
The only RESTful use of cookies is one where the client is in charge
of the cookie value. The server can suggest values for a cookie
using the Set-Cookie header, just like it can suggest links the
client might want to follow, but the client chooses what cookie to
send just as it chooses what links to follow. In some browser-based
applications, cookies are created by the client and never sent to
the server. The cookie is just a convenient container for
application state, which makes its way to the server in
representations and URIs. That's a very RESTful use of cookies."
/
I hope the authors don't mind the long citation.
Best regards,
Nina
[1] Leonard Richardson and Sam Ruby, RESTful Web Services, O'Reilly
2007, p.252
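The book's distinction can be sketched in a few lines. Everything below is illustrative (a toy cart structure, and no signing or encryption, which a real deployment would need):

```python
# Sketch of the quoted point: put the application state itself in a
# client-held cookie, rather than a JSESSIONID-style reference to a
# session hash on the server. A real service would sign or encrypt the
# value; that is omitted here.
import base64
import json

def state_to_cookie(state):
    """Serialize application state into a cookie value the server can
    suggest via Set-Cookie. The client stays in charge: it can keep it,
    drop it, or resend an earlier value (a working "Back" button)."""
    raw = json.dumps(state, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def cookie_to_state(value):
    """Recover the application state from whatever cookie the client
    chose to send."""
    return json.loads(base64.urlsafe_b64decode(value.encode("ascii")))

# The anti-pattern, for contrast, is Set-Cookie: JSESSIONID=abc123 --
# a mere pointer to state that then has to live on the server.
cart = {"items": ["espresso"], "step": "checkout"}
cookie = state_to_cookie(cart)
restored = cookie_to_state(cookie)
```

Because the whole state round-trips with each request, any server can handle the next request, which is the statelessness the book is after.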
> (Noting that some cookies may contain more interesting information than JSESSIONID, for example. I agree with and understand the general assessment that server-side session storage is not RESTful.)
>
> Jon
> ........
> Jon Moore
> Comcast Interactive Media
>
>
>
> -----Original Message-----
> From: rest-discuss@yahoogroups.com on behalf of Nina Jeliazkova
> Sent: Thu 10/8/2009 10:53 AM
> To: Subbu Allamaraju
> Cc: jeliazkova.nina@...; Rest List
> Subject: Re: [rest-discuss] composition of REST services
>
> Subbu Allamaraju wrote:
>
> >> Learning how to authenticate is no different from learning about the
>> media types and formats.
>>
>> I don't mean to undermine the difficulty here, but the problem does
>> not change just because a URI belongs to a server different from the
>> one that served the representation.
>>
>> Subbu
>>
> The difficulty is not in the authentication itself, but with the
> federated authentication/authorization, encompassing multiple servers.
> Otherwise, it is pretty easy to protect each resource with any
> available authentication scheme and ask the client to provide
> credentials on each POST. It is quite certain users will not be happy
> with such an approach.
>
> Besides, REST does not encourage cookies and sessions, meaning
> credentials or something derived from credentials should be sent
> on_every_ request.
>
> Best regards,
> Nina
>
>> On Thu, Oct 8, 2009 at 4:24 PM, Nina Jeliazkova <nina@...
>> <mailto:nina@...>> wrote:
>>
>>> The integration problem is real, but the server boundary does not
>>> change the problem very much. Does the client care if the
>>> resource it is accessing is on hateoas.com <http://hateoas.com>
>>> or ulser.com <http://ulser.com> as long as it knows the semantics of
>>> the link, the methods to use, security requirements, media types
>>> and so on?
>>>
>> At least, transparently accessing resources outside the server
>> boundary, under different domains, requires a /slightly/ more
>> complicated authentication/authorization scheme than just a
>> single server, unless all resources are considered unprotected.
>>
>> Best regards,
>> Nina
>>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
I like to use cookies from javascript for things like preserving the values of input boxes between requests. That seems like it's all client state data anyway, so I've never felt RESTless doing it. It's just persisting the client-side (HTML) application state.

-L

--- In rest-discuss@...m, Nina Jeliazkova <nina@...> wrote:
> [...]
Hi all,

Here's a modelling question for http. Let's say that I execute a long-running operation on server A, in response to a request from a client. So as not to hold the connection open, I want to send a 202, with a response entity describing the expected wait time and the monitor URI that one can use until then.

The issue I'm encountering is around having both server A and server B used behind a load-balancing device, where they share the same domain name. Server B would have no way to tell whether server A is done with the processing yet or not.

So I end up having to persist that data across all servers, and I'm unsure if it's a good idea at all. Those processes could be as short as a second and at most a couple of seconds. The problem is that, because it's a framework concern, there is no supporting code for my users to persist such information across servers.

How would you solve this problem? Impose on the consumer to persist this information in their own data store? Publish an "in progress" entity to a URI and use that as a monitor, and impose on my consumers to expose those entities in their URI space?

In other words, how do you persist state that is shared across servers when the state is not part of the application but part of the framework people use to implement their application? Is there even a way out of this?

The same question can apply to revoking the cnonce in http digest, for that matter.

Seb
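For concreteness, here is a minimal sketch of the 202-with-monitor flow being described. The URI shapes and the in-memory job store are assumptions, and the store is exactly the part that breaks across a farm, since each server would hold its own copy of JOBS:

```python
# Toy sketch (not the poster's framework) of a 202 Accepted + monitor URI
# flow, assuming an in-memory per-server job store.
import uuid

JOBS = {}  # job id -> result (None while still processing)

def start_job(payload):
    """Handle the initial request: return 202 plus a monitor URI."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = None  # processing not finished yet
    monitor = f"/jobs/{job_id}"
    return 202, {"Location": monitor}, f"Expected wait ~2s; poll {monitor}"

def poll_job(job_id):
    """Handle a GET on the monitor URI."""
    result = JOBS.get(job_id)
    if result is None:
        # Still running -- or, on the *other* server in the farm,
        # simply unknown: this is the shared-state problem.
        return 200, {"Retry-After": "1"}, "in progress"
    # Done: point the client at the final resource.
    return 303, {"Location": f"/results/{job_id}"}, ""

def finish_job(job_id, result):
    JOBS[job_id] = result
```

Unless JOBS lives in storage every server can reach (or the load balancer pins the client to one server), server B cannot answer poll_job for a job started on server A.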
Jon Hanna wrote:
> Bill Burke wrote:
>> You (and others) really have to stop using this "the www is real and it
>> works" to answer every difficulty that application developers come
>> across when trying to apply a RESTful architecture to their problem
>> domain.
>
> No we don't.
>
> That the www is real and it works shows that it is not an insurmountable
> problem to have different servers being used by the same client. This is
> clearly not *the* problem, or the www wouldn't work.

It seems there are still people who think the web doesn't yet work smoothly when protected resources are involved, especially machine-readable ones.

James Hollenbach, Joe Presbrey, and Tim Berners-Lee, Using RDF Metadata To Enable Access Control on the Social Semantic Web, http://dig.csail.mit.edu/2009/Papers/ISWC/rdf-access-control/paper.pd

Best regards,
Nina
Bill Burke wrote:
> Nina Jeliazkova wrote:
>> Summarizing the discussion up to this point:
>>
>> * The clients/services can be made aware of existence of other REST
>> services by default configuration or some custom solution for
>> registration of the services (e.g. REST UDDI ;)
>
> This is done via links. The root URI of a system publishes a set of
> links that applications can follow at runtime. This sounds very
> similar to a "Naming Service" and it solves similar problems, but it
> is different in that the services themselves act as the mechanism for
> service registration. Things become much more dynamic and decoupled.
> Services can change, on the fly, which other services the client is
> routed to.
>
> Links allow your URIs to become totally opaque as well. That way you
> can redesign your URL schemes as your application evolves and not
> worry as much about breaking clients.
>
> Links are also somewhat self-describing. Like a schema URL in an XML
> document, links also define where you can find out information on how
> to interact with them.

Links, yes. The problem is: which one is the Root URI in a distributed set of REST services? Sounds like a centralized Naming service again, each Service registering itself into The_Root_URI. Am I missing something?

Best regards,
Nina
Nick Gall wrote:
> On Thu, Oct 8, 2009 at 6:55 AM, Nina Jeliazkova <nina@...
> <mailto:nina@...>> wrote:
>
>> OpenID is for authentication; authorization will need an additional
>> (probably custom) solution.
>
> Check out OAuth <http://en.wikipedia.org/wiki/Oauth>, OpenID's RESTful
> authorization partner.
>
>> I am really interested if there exists a system based on distributed
>> REST read/write services, exposing the same API, besides the human
>> readable web.
>
> Check out Google's Data Protocol <http://code.google.com/apis/gdata/>.
> Right now you could deploy a 3rd-party (i.e. non-Google, not on a Google
> server) "processing-type resource" (as you call it) to
>
> 1. get data from one Google Data source/sink (say Google Spreadsheet 1)
> 2. process it with the processing-type resource
> 3. put the resulting data into a different Google Data source/sink
> (say Google Spreadsheet 2)
>
> All this can be done with federated authorization via Google's
> realization of OAuth <http://sites.google.com/site/oauthgoog/>. You
> could even get the data from a Google source and post the results to a
> Zoho <http://writer.zoho.com/public/help/zohoapi/fullpage> sink
> (though I don't think Zoho supports OAuth... yet).

Thank you, this is interesting.

Best regards,
Nina

> So the machine-navigable Web (using REST, ROA, WOA architectural
> constraints) is not as far along as the human-navigable one, but it's
> slowly getting there. I'm pretty sure it will get there before WS-* does.
>
> -- Nick
On Thu, Oct 8, 2009 at 7:34 AM, Nina Jeliazkova <nina@...> wrote:
> > Links are also somewhat self-describing. Like a schema URL in an XML
> > document, links also define where you can find out information on how
> > to interact with them.
> Links, yes. The problem is which one is the Root URI in a distributed
> set of REST services? Sounds like a centralized Naming service again,
> each Service registering itself into The_Root_URI. Am I missing something?

You can take it to that level if you like, where you have a single entry point that you use to dispatch to all of the other servers. Or you can consider each service to be its own Root. If a consumer hits any of the servers first, it will eventually get what it wants done (assuming it's a valid request in the first place, of course, and that the server and its associated services actually implement the API the consumer is expecting). Each service knows of the complementary services that it provides links to; whether a service "knows" about these other services because someone typed them into a config file on the server, or because the server went out to some know-it-all directory, is an implementation detail.

If you use the distributed case, where each service has its own local data, then you may think that you have a large reconfiguration burden when services move around: "If I change this service, I have to tell everyone else about the change." That's true, but you could do that dynamically through redirects. Older services have older links; they hit the old infrastructure, which redirects to the new infrastructure. This capability keeps the entire architecture robust and resilient to change without having some central authority knowing "all the links". It breaks down when services are "bad citizens" that don't let folks know who replaced them, but that's a choice you need to make in your implementation.

The key thing is, I think, that a REST architecture is not about URIs. We focus on that a lot, but it misses the big picture. The key word regarding links is that they're opaque. Rather than the URIs themselves being important (which includes the actual servers they represent), the metadata describing the links is what is important. Each link has some kind of name, and a data type associated with it. The consumers of the service - the API, if you will - know what the names do and what goes into the data type.

Applications are not going to be able to "intuit" anything. They will likely not "discover" anything, within reason. They should "know" what they're looking for, and what to do with it once they find it. If an application wants the next chunk of data in a long list, it will need to already know to look for a link named "next", know what data (if any) needs to be associated with that request, and know the appropriate verb to use. In this case, it will likely just use GET on the URI provided by the link named "next". The application isn't going to "discover" the "next" link; it has to know to expect it and what it's called. It could have been called "more", or "nextChunk", or however "next" is expressed in Chinese.

If you want a server to direct a consumer to Server B or Server C, that's a choice the server needs to make. It could be round robin across a configured list, it could be random, it could be the result of a query to a central load-balancing service used to direct traffic across the cluster. The consumer certainly has no care whatsoever whether it is going to Server B or C. The consumer, in fact, won't know. It'll go wherever it's told to go.

The human WWW "works" because we can intuit and discover the API as we go along. Whether it's a link named "next", an icon with an arrow, or a plus sign, we can resolve those abstractions at a natural-language level. When presented with a form, we generally know how that form should be filled in, either from past experience (a name and address form, for example) or from just plain training (filling in a purchase order on a back-office system, with all of the codes and details). Machines aren't at that level yet. They can't interpret much of anything yet. They simply have to be trained.

Regards,

Will Hartung
(willh@...)
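Will's "link named 'next'" point can be sketched in a few lines: the client keys off link names (rels), never URI structure. The representation shape and rel names below are assumptions for illustration:

```python
# Sketch: a client selects links by name, treating hrefs as opaque.
def find_link(representation, rel):
    """Return the href of the link with the given name, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

page = {
    "items": [1, 2, 3],
    "links": [
        {"rel": "self", "href": "http://hateoas.com/items?page=1"},
        {"rel": "next", "href": "http://hateoas.com/items?page=2"},
    ],
}

# The client knows to *expect* a link named "next"; it never parses or
# constructs the URI itself, so the server is free to change URI schemes
# (or hand out a URI on a completely different server).
next_uri = find_link(page, "next")  # the client would now GET this URI
```

Because the client never inspects next_uri, the server can route it to Server B or C, or to a new URI scheme, without breaking anything.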
Hello Nina. Too much writing for me to read, so I will take off from the summary you presented.

1. Let's clearly differentiate Server from Resource from Data. Servers: don't think about them in REST. Just know that they are there at implementation time. Resources can be anything, live anywhere, and are identified by a URI. "Anything" means a resource can be more than just data. So, imagine your resource as being a process. That process may need a database; who cares - REST does not care about it. The only important thing is that the resource is accessible with a URI. That is, do not think every piece of data in your app needs to be a resource. Finally, the resource type you mention? Not sure what you mean by a resource that can read/write data. We don't care; it is a resource somewhere. And do not think of resources as living in one server.

2. Now, your app needs an entry point, and a client workflow. That means you model your client interaction, making it do what it needs to do naturally. Please do not make your client do things that are application-internal. For instance, if I want a loan, I present the forms filled in and wait. Do not make the client fill in the form, deliver it to the credit people, take it to the managers for signing, and go to the vault to get the money. See what I mean? So, anything that needs to be done that the client does not need to know about may be done by internal guys. If your client asks resource B (which is a magic troll that performs things producing data) for something with a form, and that ends up with a resource being created, there is no problem if B calls A (using whatever it needs) to generate the resource, simply because A knows how to create resources. Do not make B send back to the client a complex URL so the client requests its own resource creation, if the client does not need to know that.

3. Security. Following point 2, the client authenticates against B, and B against A. The client does not need to be authenticated against A. Now, if needed, and the client is not human, two-way SSL may be a good option. Already implemented everywhere.

4. If both parties (A and B) do not know each other, and the client does, then you can let the client drive the composition, using A and B alternately, even sending URLs from one to the other to complete the work. Now, if the client does not know them either, then someone needs to. In this case, a service is placed in the middle. The client talks to that middle service asking for composed services, and that middle service calls the other two to accomplish the request. Simple.

Summary: If composition can be made at the app level, do not mess with the client. If composition needs to be made from the client, either the client knows and drives the composition, or a middle service does, abstracting that from the client (which is always the best option). Security may not be a concern, two-way SSL being one option, or simple HTTP security the other. Consider chained security: from client to middle service, from that service to the other services.

Hope this helps.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, Nina Jeliazkova <nina@...> wrote:
>
> Hello Jim,
>
> Jim Webber wrote:
> > Hello Nina,
> >
> >> All these seem to fall in the category where the client talks only to a
> >> single server, regardless of what complexity that server hides; while my
> >> original point was a client talking to multiple independent servers,
> >> being able not only to retrieve data from multiple servers (as in
> >> mashups), but do some processing with the help of servers that offer
> >> processing capabilities.
> >
> > Using hypermedia, a service can generate links which are understood by
> > client automata which do not resolve to its own URI space. For example
> > (shameless book plug follows, sorry!), in our forthcoming book, Savas,
> > Ian, and I show how we could outsource the payment of coffee orders to
> > a third party using a hypermedia-driven protocol.
> >
> > The automata client "writes" data to at least two services in this
> > use-case. Does that qualify as service composition? If so I can
> > elaborate, if not I'm stuck.
>
> Yes, please elaborate. What I'm interested in is how clients and REST
> services work in the following setting, rather than whether the correct
> name is service composition.
>
> * Several REST services, exposing (subsets of) the same API
> * There are two types of resources in our use case - one (A) that
> can read/write data and others (B) that can only process given
> data in some magic way and generate more data on the fly; or try
> to create a resource of type (A) somewhere else, not necessarily on
> the same server where (B) is running;
> * Client(s) (and perhaps services) being aware of the other services
> * Client(s) being able to talk to multiple REST services
>
> Summarizing the discussion up to this point:
>
> * The clients/services can be made aware of the existence of other REST
> services by default configuration or some custom solution for
> registration of the services (e.g. REST UDDI ;)
> * We don't really care if magic resource (B) behaves in a RESTful
> way or not when processing the request and generating data,
> without assigning a URI to it;
> * OpenID (eventually) can be used for client authentication,
> provided the client is a browser. If a client is not a browser,
> or if a service needs to be authenticated to talk to another
> service, it's an open question;
> * Authorization of users authenticated by OpenID for resource
> access is an open question, also in the case when one service
> needs to talk to another one, on behalf of the client.
>
> Best regards,
> Nina
>
> > Jim
On Thu, Oct 8, 2009 at 8:25 AM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Oct 8, 2009, at 5:09 PM, Moore, Jonathan (CIM) wrote:
>
>> Nina Jeliazkova wrote:
>>> Besides, REST does not encourage cookies and sessions, meaning
>>> credentials or something derived from credentials should be sent
>>> on _every_ request.
>>
>> I'm curious about this assertion, at least as it applies to cookies.
>> Cookies represent pieces of application state that are stored with
>> the client, and they get sent on every request (where domains
>> apply). Why isn't that RESTful?
>
> If used properly, cookies do not violate statelessness. I think,
> though, that they violate visibility because the meaning of the cookie
> depends on a non-standardized contract between client and server.

There are two ways to use cookies statelessly. One is to use them entirely client-side: even though the browser will send them to the server, the server will never use them. In this scenario there's no visibility issue because there's no contract. The other way is to use them as a different kind of HTTP header, and there you lose visibility because it's essentially a proprietary HTTP extension.

Mark.
> In other words, how do you persist state that is shared across servers when
> the state is not part of the application but part of the framework people
> use to implement their application? Is there even a way out of this?

As far as the protocol is concerned, is there a difference between the framework and the app?

Other than configuring your LB to pin the client to one of the servers (which can be worse than the above alternatives, as it is less reliable), I can't think of any way out.

Subbu
Hi!
----------------------
FOREST is a GET-only REST Integration Pattern defined simply as:
"A resource's state depends on the state of other resources that it
links to."
This means that resource servers must also be clients in order to see
those dependencies.
----------------------
FOREST is a REST Pattern derived from GET-only or polling Web use-cases,
including mashups:
- feed aggregators or filters
- search index results pages
- pages that depend on a search
- Google's mobile versions of pages
- sites that create summaries of other Web pages
- sites that create feeds from Web pages
- creating pages or feeds from REST 'APIs' (GET only)
- Yahoo Pipes
----------------------
FOREST is a REST Pattern for building 'Enterprise Mashups' in a ROA /
WOA / SOA.
OK - those of you without Dion Hinchcliffe in your feed reader may be
feeling a little
queasy at this point, but I'd encourage you to read on ... Actually, I
quite like the
phrase 'Enterprise Mashup' since it lightens the gravity of that
'Enterprise' word.
Enterprise Mashup Markup Language* is the nearest thing to this that I
know about, but
FOREST is quite different: it is much simpler and is /only/ a REST Pattern.
* http://www.openmashup.org/omadocs/v1.0/emml/createMashupScript.html
----------------------
Patterns can be implemented in frameworks...
A FOREST implementation would inevitably be over HTTP. It would
initially be just XHTML
or Atom. I imagine fetching XHTML resources in which one expects to
find links to
more such documents. Any XHTML could depend on any other, and they're all
interlinked. If you depend on another resource, you must have found it
directly or
indirectly through links in your body. Alternative discovery: a
resource could be told
that it is being watched using an HTTP header in the GET request listing
the URIs of
the resources that depend on it - then it could watch and link back.
Etag would be used
for an automatically incremented version number.
----------------------
I would ideally see this work towards a formal description via 'rough
consensus and
working code'. I intend to knock up a prototype of FOREST in a Jetty
servlet and post
it to GitHub; if that code works, I may get rough consensus...
What a FOREST XHTML/HTTP formalisation would specify:
+ link-rels in XHTML heads to watched resources found through body links
+ use of HTTP headers (Etag, Cache-Control, Content-Location, Observers*)
+ API*: doc builder, XPath body get/set*, set-observe*, callbacks
(observed, notified*)
* - 'Observers' is a possible name for the header with the URIs of
dependent resources
- the API would be language-independent, but probably Java-like
- the XPath would be extended to jump links from doc to doc
- 'set-observe' adds the link-rel, so watched resource URIs can be
persisted
- 'notified' means being told when the GET returns with the observed
state
What a FOREST Java servlet and client library would implement 'under'
these specs:
+ a driver module loader: drivers animate resources through the API
+ a document cache - in memory and maybe saved to disk or database
Resource animation would either be by the application of business rules
driving the
API, or by adapting between external state and the API.
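The FOREST idea above can be sketched as a toy: a resource recomputes its state from the resources it links to, and bumps its ETag version when that state changes. A local dict stands in for HTTP GETs, the "business rule" is a stub, and "Observers" is only a candidate header name, per the note above:

```python
# Toy sketch of FOREST: resource state derived from linked resources,
# with an auto-incremented version exposed as the ETag.
class Resource:
    def __init__(self, uri, state):
        self.uri = uri
        self.state = state
        self.version = 1          # exposed as the ETag
        self.links = []           # URIs this resource depends on

    def observe(self, cache):
        """GET each linked resource (here: a local cache standing in
        for HTTP) and recompute our own state from theirs."""
        pulled = [cache[uri].state for uri in self.links]
        new_state = {"summary": pulled}   # stub business rule
        if new_state != self.state:
            self.state = new_state
            self.version += 1     # new ETag: dependents can see we changed
        return self.state

cache = {
    "/feed/a": Resource("/feed/a", {"title": "A"}),
    "/feed/b": Resource("/feed/b", {"title": "B"}),
}
agg = Resource("/aggregate", None)
agg.links = ["/feed/a", "/feed/b"]
agg.observe(cache)   # agg now summarises its dependencies
```

A real implementation would do the GETs over HTTP, send the dependent resource's URI in the (hypothetical) Observers request header so the watched resource can link back, and use Cache-Control to decide when to re-poll.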
----------------------
What do you think? Enthusiastic replies preferred! =0)
Duncan Cragg
--
http://duncan-cragg.org/blog/
http://twitter.com/duncancragg
Hi Seb. What is the result of server A's process? If there isn't any, why do you need to know when it stops?

The question is because, if there is an output result from the process, you should not ask whether server A is done or not, but ask for the result. That is, you ask if the result is ready or not. Any server can check that, if the result can be returned by any server in the farm. Of course, if that second request comes to server A, another thread must answer (since there is one doing the processing).

Now, if you tell me the result stays only on server A, then we have a problem, since you will also need to pin the result fetch to server A, and then I have no idea what the balancing is for.

Finally, there is another option: callbacks. The server ends its processing by calling back the client through a callback port or something. In case you wonder, there is no server-state problem there, since the call is done in one transaction (all the processing is done as the action for the original request).

Hope this helps.

William Martinez.

--- In rest-discuss@yahoogroups.com, "Sebastien Lambla" <seb@...> wrote:
> [...]
Noah, The question is not necessarily that I need to hide the server names behind a LB, but that this configuration does exist, for better or worse. I do agree entirely with your conclusions however, just pinging the community to see if this scenario I can’t find a solution for has had some attention by people smarter than me :) Seb From: Noah Campbell [mailto:noahcampbell@...] Sent: 08 October 2009 20:46 To: Sebastien Lambla Subject: Re: [rest-discuss] Asynchronous operations, webfarms and 202 If you only have two servers, then if there is a 404 on serverA redirect to serverB and vice versa. If you plan to add more servers, then chain them together like a linked list, however, probably won't scale due to redirect limits and would likely be difficult to maintain if there are a large number servers. If the client will keep a cookie, then you have a registry of running services if you lean on your load balancer. Thinking about it a little more...why do you need to hide the server name behind a load balancer? Why not load balance to the particular box and then when you return the 202 Created, put the real URL into the location header. If the client gets a 404 because of a server crash, then they need to resubmit the job. -Noah On Thu, Oct 8, 2009 at 8:43 AM, Sebastien Lambla <seb@serialseb.com> wrote: Hi all, Here’s a modelling question for http. Let’s say that I execute a long-running operation on server A, in response to a request from a client. So as to not hold the connection, I want to send a 202, with a response entity describing the expected wait time and the monitor URI that one can use until then. The issue I’m encountering is around having both server A and server B used behind a load balancing device, where they share the same domain name. Server B would have no way to tell if server A is done with the processing yet or not. So I end up having to persist that data across all servers, and unsure if it’s a good idea at all. 
Those processes could be as short as a second and at most a couple of seconds. The problem is that, because it's a framework concern, there is no supporting code for my users to persist such information across servers.

How would you solve this problem? Impose on the consumer to persist this information in their own data store? Publish an "in progress" entity to a URI and use that as a monitor, and impose on my consumers to expose those entities in their URI space?

In other words, how do you persist state that is shared across servers when the state is not part of the application but part of the framework people use to implement their application? Is there even a way out of this?

The same question can apply to revoking the cnonce in HTTP digest, for that matter.

Seb
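Seb's "in progress entity at a monitor URI" idea can be sketched in a few lines of Python, assuming a store shared by all servers behind the load balancer (a database or memcached in practice; the dict, function names, and URI layout below are purely illustrative):

```python
import uuid

# Stand-in for a store shared by every server behind the load balancer
# (e.g. a database or memcached); a server-local dict would NOT work here.
SHARED_JOBS = {}

def start_job(payload):
    """Accept a long-running job: return 202 plus a monitor URI.

    Any server can later answer for the monitor URI, because the job
    state lives in the shared store, not in server-local memory."""
    job_id = str(uuid.uuid4())
    SHARED_JOBS[job_id] = {"state": "in-progress", "result": None}
    return 202, {"Location": "/jobs/" + job_id}, "expected wait: a few seconds"

def poll_job(job_id):
    """Monitor resource: 200 + an 'in progress' entity until the job is
    done, then 303 See Other pointing at the finished resource."""
    job = SHARED_JOBS.get(job_id)
    if job is None:
        return 404, {}, "unknown job"
    if job["state"] == "in-progress":
        return 200, {"Retry-After": "2"}, "in progress"
    return 303, {"Location": job["result"]}, ""
```

The sketch makes the trade-off concrete: the framework still has to impose *some* shared persistence, which is exactly the imposition Seb is asking whether he can avoid.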
One of the ways I have used cookies is to have the server set a cookie
that simply includes a 'username' when a user authenticates with a site.
Since you cannot really be 'logged in' to a stateless service, the
username allows the client to make a separate request (using AJAX) for
each page allowing that page to be "decorated" with personalized data
(allowing the undecorated page to be cacheable for all users). The
username is used in conjunction with a URI template (delivered with the
HTML page itself). E.g., page is at http://example.com/events and URI
template might be http://example.com/{username}/events.
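Peter's scheme can be sketched in a few lines of Python; the template string and cookie name are illustrative stand-ins, not taken from his actual site:

```python
from http.cookies import SimpleCookie

# Hypothetical URI template delivered with the HTML page itself.
EVENTS_TEMPLATE = "http://example.com/{username}/events"

def personalization_uri(cookie_header):
    """Expand the URI template with the username the server set in a
    cookie at authentication time. The shared, undecorated page at
    http://example.com/events stays cacheable for everyone; only this
    per-user AJAX request differs."""
    cookies = SimpleCookie(cookie_header)
    if "username" not in cookies:
        return None  # anonymous user: nothing to decorate
    return EVENTS_TEMPLATE.format(username=cookies["username"].value)
```

Note that the cookie carries client-visible application state (a name), not a server-side session ID, which is what keeps this within the book quote's "RESTful use of cookies".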
--peter keane
groovepapa wrote:
>
> I like to use cookies from javascript for things like preserving the
> values of input boxes between requests. That seems like it's all
> client state data anyway, so I've never felt RESTless doing it. It's
> just persisting the client-side (HTML) application state.
>
> -L
>
> --- In rest-discuss@yahoogroups.com
> <mailto:rest-discuss%40yahoogroups.com>, Nina Jeliazkova <nina@...> wrote:
> >
> >
> > Moore, Jonathan (CIM) wrote:
> > > Nina Jeliazkova wrote:
> > >
> > >> Besides, REST does not encourage cookies and sessions, meaning
> > >> credentials or something derived from credentials should be sent
> > >> on _every_ request.
> > >>
> > >
> > > I'm curious about this assertion, at least as it applies to
> cookies. Cookies represent pieces of application state that are stored
> with the client, and they get sent on every request (where domains
> apply). Why isn't that RESTful?
> > >
> > Well, this is what REST gurus [1] are telling us:
> >
> > /*"The Trouble with Cookies*
> > A web service that sends HTTP cookies violates the principle of
> > statelessness. In fact, it usually violates statelessness twice. It
> > moves application state onto the server even though it belongs on
> > the client, and it stops clients from being in charge of their own
> > application state."
> > ...
> > OK, so cookies shouldn't contain session IDs: that's just an excuse
> > to keep application state on the server. What about cookies that
> > really do contain application state? What if you serialize the
> > actual session hash and send it as a cookie, instead of just sending
> > a reference to a hash on the server?
> > This can be RESTful, but it's usually not. The cookie standard says
> > that the client can get rid of a cookie when it expires, or when the
> > client terminates. This is a pretty big restriction on the client's
> > control over application state. If you make 10 web requests and
> > suddenly the server sends you a cookie, you have to start sending
> > this cookie with your future requests. You can't make those 10
> > precookie requests unless you quit and start over. To use a web
> > browser analogy, your "Back" button is broken. You can't put the
> > application in any of the states it was in before you got the cookie.
> > ...
> > The only RESTful use of cookies is one where the client is in charge
> > of the cookie value. The server can suggest values for a cookie
> > using the Set-Cookie header, just like it can suggest links the
> > client might want to follow, but the client chooses what cookie to
> > send just as it chooses what links to follow. In some browser-based
> > applications, cookies are created by the client and never sent to
> > the server. The cookie is just a convenient container for
> > application state, which makes its way to the server in
> > representations and URIs. That's a very RESTful use of cookies."
> >
> > /
> >
> > I hope the authors don't mind the long citation.
> >
> > Best regards,
> > Nina
> >
> > [1] Leonard Richardson and Sam Ruby, RESTful Web Services, O'Reilly
> > 2007, p.252
> >
> > > (Noting that some cookies may contain more interesting information
> than JSESSIONID, for example. I agree with and understand the general
> assessment that server-side session storage is not RESTful).
> > >
> > > Jon
> > > ........
> > > Jon Moore
> > > Comcast Interactive Media
> > >
> > >
> > >
> > > -----Original Message-----
> > > From: rest-discuss@yahoogroups.com
> <mailto:rest-discuss%40yahoogroups.com> on behalf of Nina Jeliazkova
> > > Sent: Thu 10/8/2009 10:53 AM
> > > To: Subbu Allamaraju
> > > Cc: jeliazkova.nina@...; Rest List
> > > Subject: Re: [rest-discuss] composition of REST services
> > >
> > > Subbu Allamaraju wrote:
> > >
> > >> Learning how to authenticate is no different from learning about the
> > >> media types and formats.
> > >>
> > >> I don't mean to undermine the difficulty here, but the problem does
> > >> not change just because a URI belongs to a server different from the
> > >> one that served the representation.
> > >>
> > >> Subbu
> > >>
> > > The difficulty is not in the authentication itself, but with the
> > > federated authentication/authorization, encompassing multiple servers.
> > > Otherwise, it is pretty easy to protect each resource with arbitrary
> > > kind of available authentication scheme and ask the client to provide
> > > credentials on each POST. It is quite certain users will not be
> > > happy with such an approach.
> > >
> > > Besides, REST does not encourage cookies and sessions, meaning
> > > credentials or something derived from credentials should be sent
> > > on _every_ request.
> > >
> > > Best regards,
> > > Nina
> > >
> > >> On Thu, Oct 8, 2009 at 4:24 PM, Nina Jeliazkova <nina@...
> > >> <mailto:nina@...>> wrote:
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>> The integration problem is real, but the server boundary does not
> > >>> change the problem very much. Does the client care if the
> > >>> resource it is accessing is on hateoas.com <http://hateoas.com
> <http://hateoas.com>>
> > >>> or ulser.com <http://ulser.com <http://ulser.com>> as long it
> knows the semantics of
> > >>> the link, the methods to use, security requirements, media types
> > >>> and so on?
> > >>>
> > >> At least transparently accessing resources outside of the server
> > >> boundary, under different domains, requires a /slightly/ more
> > >> complicated authentication/authorization scheme than just a
> > >> single server, unless all resources are considered unprotected.
> > >>
> > >> Best regards,
> > >> Nina
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > ------------------------------------
> > >
> > > Yahoo! Groups Links
> > >
> > >
> > >
> >
>
>
On Thu, Oct 8, 2009 at 5:48 AM, Stefan Tilkov <stefan.tilkov@...> wrote:
> On Oct 8, 2009, at 1:07 AM, Mark Baker wrote:
>> So consider a "counter" page that returns the number of hits on it.
>> If you DELETE that, it's reasonable for the server to next respond
>> with a 200 and a "0" response.
>
> Yes, that would be a reasonable response. But why would it be reasonable
> for the client to use a DELETE here? Why would a PUT not be a better
> option?

Because the client doesn't know that the counter will reset.

Mark.
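The distinction Mark and Stefan are debating can be made concrete with a toy counter resource (a sketch only; the method names simply mirror the HTTP verbs):

```python
class CounterResource:
    """Toy hit counter. DELETE removes the counter, and on this
    particular server the next GET recreates it at zero, which is the
    behaviour the client cannot know in advance. PUT of "0" is an
    explicit reset that the client *does* control."""

    def __init__(self):
        self.hits = 0

    def get(self):
        self.hits += 1               # each GET counts as a hit
        return 200, str(self.hits)

    def put(self, body):
        self.hits = int(body)        # client states the new value outright
        return 200, str(self.hits)

    def delete(self):
        self.hits = 0                # server's choice: start over at zero
        return 200, "0"
```

With PUT the new state travels in the request, so the client is never surprised; with DELETE the post-deletion behaviour is whatever the server decides.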
On Thu, Oct 8, 2009 at 2:42 PM, Peter Keane <pkeane@...> wrote:
> One of the ways I have used cookies is to have the server set a cookie
> that simply includes a 'username' when a user authenticates with a site.
> Since you cannot really be 'logged in' to a stateless service, the
> username allows the client to make a separate request (using AJAX) for
> each page allowing that page to be "decorated" with personalized data
> (allowing the undecorated page to be cacheable for all users). The
> username is used in conjunction with a URI template (delivered with the
> HTML page itself). E.g., page is at http://example.com/events and URI
> template might be http://example.com/{username}/events.
>
If you are using the HTTP "Authorization" header (BASIC, DIGEST,
whatever) for authentication, which seems to be a fairly common
practice, you can use the security token already being sent on that
header to provide the basis for personalization, without needing a
cookie.
Craig McClanahan
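Craig's point is easy to demonstrate for HTTP Basic: the username is recoverable from the Authorization header already present on every request (a sketch; the helper name is invented here):

```python
import base64

def username_from_basic_auth(authorization_header):
    """Pull the username out of an HTTP Basic Authorization header --
    the token that is already sent on every request, so no extra
    cookie is needed for personalization."""
    scheme, _, credentials = authorization_header.partition(" ")
    if scheme.lower() != "basic":
        return None  # Digest etc. would need their own parsing
    decoded = base64.b64decode(credentials).decode("utf-8")
    username, _, _password = decoded.partition(":")
    return username
```

Unlike a bare username cookie, this value has been verified by whatever authenticated the request, which also answers the impersonation concern raised below.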
> --peter keane
How do you prevent impersonation?
-Noah
On Thu, Oct 8, 2009 at 2:42 PM, Peter Keane <pkeane@...> wrote:
> One of the ways I have used cookies is to have the server set a cookie
> that simply includes a 'username' when a user authenticates with a site.
> Since you cannot really be 'logged in' to a stateless service, the
> username allows the client to make a separate request (using AJAX) for
> each page allowing that page to be "decorated" with personalized data
> (allowing the undecorated page to be cacheable for all users). The
> username is used in conjunction with a URI template (delivered with the
> HTML page itself). E.g., page is at http://example.com/events and URI
> template might be http://example.com/{username}/events.
>
> --peter keane
On Thu, Oct 8, 2009 at 9:35 AM, Bill Burke <bburke@...> wrote:
> The fact of the matter is the vast majority of the Web is a one-to-one
> relationship between client and server. In other words, the Web is a
> simple system. As a result you have a lot of simple answers to simple
> problems. These integration problems are real and I think REST can
> solve many of them in a better way.

Fire up Fiddler or whatever your preferred HTTP debugger is and go visit http://www.news.com. I saw about 100 HTTP requests to a total of about 15 different domains, and I gave up counting how many different servers. All I did was try to read one page. Using the web has not been a 1-client-to-1-server interaction for a very long time.

Darrel
Has anybody attempted to serialize RDF with Link headers?

I asked some semweb people about this yesterday and was advised that doing so would restrict one to a subset of RDF that could not include 'bnodes' or 'literals'. I'm not sure much, if anything at all, is lost by this, since both bnodes and literals (to the best of my understanding) belong on the very edges of a graph anyway, as they are effectively 'stubs'.

I imagine one potential solution to this is to treat all literals and bnodes as resources and give them a URI. A benefit of this is that literals would not necessarily 'stub' the graph, because the server responsible for a given literal could monitor Referer headers and dynamically build up and 'feed back' its context by updating its link relations. 'Duplicate' literals at separate URIs could also have link relations to one another, allowing for a more distributed approach that could avoid having to rely on an external SPOF.

I don't profess to have a full understanding of RDF, and I would be interested to hear others' thoughts on this.

- Mike
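For triples whose subject is the resource being served and whose object is another URI, the mapping Mike describes might look like the sketch below (following the Link header syntax of the web-linking draft of the time; the predicate-as-rel mapping is an assumption, not an established convention):

```python
def triples_to_link_headers(subject_uri, triples):
    """Serialize RDF triples as HTTP Link header values for a response
    about subject_uri. Only URI-to-URI triples survive the mapping,
    which is exactly the bnode/literal restriction mentioned above."""
    headers = []
    for s, predicate, obj in triples:
        if s != subject_uri:
            continue  # Link headers describe the requested resource only
        if not obj.startswith("http"):
            continue  # literals and bnodes cannot be expressed this way
        headers.append('<%s>; rel="%s"' % (obj, predicate))
    return headers
```

The two `continue` branches show what falls out of the representation, and hence what would have to be promoted to URI-addressable resources under Mike's proposal.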
--- In rest-discuss@yahoogroups.com, Jan Vincent <jvliwanag@...> wrote:
>
> Hi guys,
>
> While creating a RESTful web service, how would one go about
> documenting it? HATEOAS seems to be key, but how do you document the
> relationship between the resources linked? Most seem to use URI
> templates, but depending on them doesn't seem to be RESTful. WSDL/WADL
> are ok as specifications, but what do you guys think?

I think a key distinction to make is between the documentation of an interface and the documentation of a service. The client should only depend on the documentation of your interface, which should be in the form of media types and link relations (and maybe semantic identifiers in the media type, like categories or microformats -- not ideal, but sometimes necessary).

You will also need to document a service to build it -- here you need to specify things like the URIs and what kinds of resources they reference, how to represent each resource in the media type, etc. Something like WADL can formalize this and help with tooling, but isn't the only option. An analogy is the specification of HTML versus the specification of a particular web site.

That's my take anyways.

Regards,
Andrew
I've expanded a little on the motivation behind FOREST in this blog post: http://duncan-cragg.org/blog/post/forest-get-only-rest-integration-pattern/ Duncan Cragg
Given the guidelines REST proposes, is there a generic client for RESTful services?

I know one may simply use some HTTP client and work from there. However, I tend to see this practice as being quite tedious. In the SOAP/WSDL world, for instance, there's code generation. Though many of you would hate that (and it's understandable why), perhaps in the REST world there is a client that automatically reads the proper hyperlinks we give it in the resource representations provided at some URL. Of course, all this is done dynamically. Given one thing to do, for instance, this client would go from some URL (perhaps '/' of the site), then follow some link from there, and so on. And should the server advise the client to cache the response, it would do so accordingly. Again, doing all this by hand seems tedious.

FYI though, from the server perspective, I see webmachine (http://bitbucket.org/justin/webmachine/) as a pretty good example of what I'm looking for.

Jan Vincent Liwanag
jvliwanag@...
The problem here is simply that your Universal REST Client will need to understand the semantics of all of the media types used in the transactions. Otherwise, for example, how would it know what are links and what aren't?

If your client is media-type aware, or all of your media types follow some specific conventions (such as universal use of a specific link tag and format, or the Link headers), then a URC (Universal REST Client) could return a list of links in a payload, the media types they may require, and perhaps, particularly in a dynamic language, provide some kind of proxy object the client could use to populate the metadata (basically an OOL <-> XML mapper).

If the URC was extra robust (assuming the server also offered support), it could send an OPTIONS request to each link to better fill in the details of the operations available at a particular link. But, again, this depends on the actual information coming from services being in some format that the URC can interpret, and this is where the "universality" of the client is likely to break down.

Given conventions, I think it could go a long way, but I don't think you can make a truly universal client.

Regards,
Will Hartung
(willh@...)
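Assuming the convention Will mentions -- say, Atom-style `<link rel="..." href="..."/>` elements appearing in every representation -- the link-extraction half of such a URC is only a few lines (a sketch; the element and attribute names follow the Atom convention, not any particular service):

```python
import xml.etree.ElementTree as ET

def extract_links(representation_xml):
    """Return the links a convention-following URC could offer to
    application code: (rel, href, media type) tuples pulled from every
    <link> element in the document."""
    root = ET.fromstring(representation_xml)
    links = []
    for el in root.iter("link"):
        links.append((el.get("rel"), el.get("href"), el.get("type")))
    return links
```

The hard part is everything this sketch assumes away: a service that does not follow the convention yields no links at all, which is Will's "universality breaks down" point.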
Jan Vincent <jvliwanag@...> writes:
> I know one may simply use some HTTP client and work
> from there. However, I tend to see this practice as being quite
> tedious.
I've not seen a good higher-level HTTP framework that would:
- interpret an out-of-band description of a RESTful web service to
produce high-level forms/state-machine stubs that can be coded to in
the implementation language.
- integrates that with run-time in-band conditional-GET of previous
responses, response codes, &c.
- supports the more interesting HTTP response codes like
- 202 + maybe polling some <handwave>url in the response to check
final creation state</>
- 204, 205, 206
- 3xx redirection codes with stateful recoding of temp/perm
redirects.
- 503 + retry-after info.
- Supports cache control, in combination with the above.
I'd imagine such a framework to:
a/ again, use some description language that identifies the *potential*
resource-/media-types, state-space, and forms a-priori without having to
actively traverse every class of link on the site, but…
b/ would require the active traversal of links to function, ensuring
that the runtime binding of the resources is the same as the build-time
binding (within epsilon of versioning);
c/ as such, would always start at a safe entry point (e.g., '/') for a
resource-space, with conditional requests to validate any previous
(cached) assumptions about the site are still valid.
I've built a couple of ad-hoc things that use something like Apache
HTTPClient, but they're usually just my application code using
HTTPClient to do some specific thing, not a generic solution one level
removed from that.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
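The response-code handling Josh lists could start life as a plain dispatch table before any framework exists (a sketch; the action names are invented here, not part of any library):

```python
def next_action(status, headers):
    """Decide what a generic client should do with a response, covering
    the 'more interesting' codes listed above. Returns (action, detail)."""
    if status == 202:
        # accepted: poll the monitor resource the server pointed us at
        return ("poll", headers.get("Location"))
    if status in (204, 205):
        return ("done", None)          # no body to process
    if status == 206:
        return ("append-range", None)  # partial content: keep fetching
    if status == 301:
        # permanent redirect: statefully rebind to the new URI
        return ("rebind", headers.get("Location"))
    if status in (302, 303, 307):
        # temporary redirect: follow once, keep using the original URI
        return ("follow", headers.get("Location"))
    if status == 503:
        return ("retry", headers.get("Retry-After"))
    return ("process-body", None)
```

Layering cache control and conditional GET on top of this, as Josh suggests, is where it stops being a table and starts being a framework.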
In my experience, the two biggest hurdles to a generic HTTP client are
- media types
- link relations
Media types include not just data format issues (are you using a
well-known format or a custom XML format?) but also the semantics
related to the format (which elements are inputs, which are elements
to query for data, what actions are supported, what data format is
used to send data, etc.). Link relations include which relations are
supported for links, understanding custom rel values, etc.
The only data format that works even mildly well in this space is
HTML; and there is a reliable (but rather limited) client for it,
too<g>.
OPTIONS seems quite inadequate for this information, so run-time
discovery is pretty weak. That means reliance on out-of-band
information and "design-time" discovery.
That leads to, IMO, some interesting questions:
- what would it take build a "better browser"; one that supports PUT
and DELETE, understands a wider range of link relations?
- what would it take to establish rendering engines for ATOM and other
well-known, validate-able media types?
- what would it take to establish reliable rendering engines for
constrained media-types (XML, JSON)?
- what would it take to establish standardized documentation that
includes media-type details such as semantics, schema, link relations,
etc.?
- what would it take to improve the OPTIONS method in order to be able
to include (or point to) these standardized documents?
All fertile ground for study, IMO.
mca
http://amundsen.com/blog/
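mike's two hurdles suggest the shape any workable client takes: a registry keyed by media type, consulted per response, that refuses to guess when it meets a type it does not know (a sketch; the handler names are invented):

```python
# Registry mapping media types to parser functions. A client that
# receives a type with no entry must stop rather than guess -- this is
# the "out-of-band information" the OPTIONS method cannot yet supply.
HANDLERS = {}

def handles(media_type):
    """Decorator registering a parser for one media type."""
    def register(fn):
        HANDLERS[media_type] = fn
        return fn
    return register

@handles("application/atom+xml")
def parse_atom(body):
    # real code would parse the feed; this stub just tags the payload
    return ("atom-entry", body)

def dispatch(content_type, body):
    # strip parameters like "; charset=utf-8" before the lookup
    media_type = content_type.split(";")[0].strip()
    handler = HANDLERS.get(media_type)
    if handler is None:
        raise ValueError("no handler for " + media_type)
    return handler(body)
```

Every `vnd.*` type a service mints needs its own entry, which is why the media-type hurdle dominates: the dispatch mechanism is trivial, populating it is not.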
If humans can do it, why can't we build computers that do too? What
are the cues human users of the web follow to understand the html page
rendered to them? I was wondering if such cues might be applied
similarly by computers. I've been reading some works on the semantic
web (RDFs and such) and the topics it touches that overlap with the
REST discussion. I was wondering, though, if this would help in building
the generic REST client.
My concern, however, is that if we use RDFs coupled with ontologies,
there might be too much complexity already. On the other hand, is
there a 'smart' way for a generic REST client to figure out the
meanings of the hypermedia without explicit specifications? People
seem to do just fine when visiting a site and typing in, say, their
'Birthday', knowing what that means without really having to
read the specs of the site and conform strictly to them.
Jan Vincent Liwanag
jvliwanag@...
On Oct 10, 2009, at 9:22 AM, mike amundsen wrote:
> In my experience, the two biggest hurdles to a generic HTTP client are
> - media types
> - link relations
>
> Media types include not just data format issues (are you using a
> well-known format or a custom XML format?) but also the semantics
> related to the format (which elements are inputs, which elements
> to query for data, what actions are supported, what data format is
> used to send data, etc.). Link relations include which relations are
> supported for links, understanding custom rel values, etc.
>
> The only data format that works even mildly well in this space is
> HTML; and there is a reliable (but rather limited) client for it,
> too<g>.
>
> OPTIONS seems quite inadequate for this information, so run-time
> discovery is pretty weak. That means reliance on out-of-band
> information and "design-time" discovery.
>
> That leads to, IMO, some interesting questions:
> - what would it take to build a "better browser", one that supports PUT
> and DELETE and understands a wider range of link relations?
> - what would it take to establish rendering engines for ATOM and other
> well-known, validate-able media types?
> - what would it take to establish reliable rendering engines for
> constrained media-types (XML, JSON)?
> - what would it take to establish standardized documentation that
> includes media-type details such as semantics, schema, link relations,
> etc?
> - what would it take to improve the OPTIONS method in order to be able
> to include (or point to) these standardized documents?
>
> All fertile ground for study, IMO.
>
> mca
> http://amundsen.com/blog/
>
>
>
>
> On Fri, Oct 9, 2009 at 19:44, Josh Sled <jsled@...>
> wrote:
> >> [...]
--- In rest-discuss@yahoogroups.com, Jan Vincent <jvliwanag@...> wrote:
> Given the guidelines REST proposes, is there a generic client for
> RESTful services?

Absolutely! But they aren't specific to REST, they are specific to a
uniform interface (e.g. URI + HTTP + a media format).

Web browsers are generic clients for URI + HTTP + HTML.
In the telephony space, Voice browsers are generic clients for URI +
HTTP + VoiceXML. Some Voice browsers also understand CCXML -- which is
an interesting format because it's machine driven. It stands as a
counter-example to the argument that a generic client must be guided
by a human being.

The ability to write a generic client that is able to work with any
service that adheres to a uniform interface is one of the key benefits
of REST, IMO.

Regards,

Andrew
+1. I've been thinking about this myself. IMO, the key to solving the
problem is to:

1. Tie resource links, i.e. <link rel="<operation>"> (hypermedia), to a
known set: an ontology of operations, perhaps, or a finite set of
operations.
2. Have the client handle different media types as "plugins", just the
way web browsers work.

IMHO, AtomPub services are an easier subset of RESTful services to
have a universal client for.

Best,
-Dilip

On Fri, Oct 9, 2009 at 9:16 PM, wahbedahbe <andrew.wahbe@...> wrote:
> [...]
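For illustration, the plugin idea above might be sketched as below in Python. The media types registered and what the handlers do are invented; a real client would plug in an Atom processor, an HTML renderer, and so on.

```python
# Sketch of a generic client that dispatches each representation to a
# handler ("plugin") registered for its media type, the way a web
# browser picks a renderer. All registered types/handlers are examples.

class GenericClient:
    def __init__(self):
        self.plugins = {}  # media type -> handler callable

    def register(self, media_type, handler):
        self.plugins[media_type] = handler

    def handle(self, content_type, body):
        # Strip parameters: "text/html; charset=utf-8" -> "text/html".
        base = content_type.split(";")[0].strip()
        if base not in self.plugins:
            raise ValueError("no plugin registered for %s" % base)
        return self.plugins[base](body)

client = GenericClient()
client.register("application/atom+xml", lambda body: ("atom", body))
client.register("text/html", lambda body: ("html", body))

print(client.handle("text/html; charset=utf-8", "<html/>"))
# -> ('html', '<html/>')
```

The dispatch-on-media-type shape is the point: the client core knows nothing about any one format, only how to route to whatever plugins are installed.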
Adopting a "plug-in" strategy means you might be able to write against
the Mozilla code-base and get a lot of the HTTP client "for free."

mca
http://amundsen.com/blog/

On Fri, Oct 9, 2009 at 21:50, Dilip Krishnan <dilip.krishnan@...> wrote:
> [...]
Jan Vincent <jvliwanag@...> wrote:
> If humans can do it, why can't we build computers that do too? What
Because, as somebody rightly replied to me the other day, humans have
"intuition" and computers do not, despite all Artificial Intelligence research
for the last 50 years.
Thus, all the analogies from the "human readable web" to a "machine
readable web" are at least a bit shaky... if we want computers to do
anything automatically, there is (still) a need for standards.

RDF and ontologies are just one part of the game; they don't cover all
the ways we humans reason. But they could help with standardizing the
web, hopefully.
Best regards,
Nina
What I don't understand is why, in a thread that started with the subject
"Generic REST client", people were so quick to turn it into a "generic
HTTP client". I mean, we all agree that the REST architectural style is
protocol independent (don't we?), and yet every time someone talks
about other protocols the posts are silently ignored... So, a generic
REST client *has* to be an HTTP one? Isn't there REST life outside HTTP?
Nina Jeliazkova wrote:
> [...]
António Mota <amsmota@...> writes:
> What I don't understand is why in a post that started with the subject
> "Generic REST client" people were so quick to turn it into a "generic
> HTTP client".
Because while the subject said "REST", it was clear in the body that he
was talking about "HTTP"; it always was a generic HTTP client thread.
Not that I think the two concepts should always be mixed together; we
should take care to separate them when appropriate. But there are a lot
of people in the world looking to create practical RESTful solutions,
in HTTP of course. Trying to talk about REST in the absence of concrete
HTTP can quickly become really handwavy and imprecise, since most lack
the shared terminology and communicative rigor to have a productive
conversation.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
Jan Vincent <jvliwanag@...> writes:
> What
> are the cues human users of the web follow to understand the html page
> rendered to them? I was wondering if such might be applied to work
> similarly to computers.
My favorite go-to example along these lines is credit card entry for
purchases. If you as a human see a form with a heading including the
words "credit card", and an input for a number, expiration date and (in
"v2" of the form) a CCV code … you know what to do. If it asks for
something different, you'd be as stuck as a computer would be.
The problem is that versioning step … when CCV codes were introduced,
sites could add explanatory text and a picture to help describe the new
concept. It's that level of teaching that's hard to reproduce for an
automated client; it's probably the point where the engineer gets involved and
patches the code to support the new field, probably by interpreting the
same explanatory text + picture from a trace of the recent failures of
the automated client.
> On the other hand, is
> there a 'smart' way for a generic REST client to figure out the
> meanings of the hypermedia without explicit specifications?
At the level you're thinking of, I don't believe so, no. Given there's
no inherent meaning in any of this stuff, everything must be specified
somehow.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
Hello Duncan.

I didn't quite get your problem case. First, why is the resource "at"
one server? It should be location-less, or in IT words "shared".
Second, a resource may have all the URIs you want, or that it needs;
that is not a restriction. Third, I fully support that resources be
locatable and accessible by any node in a network, and REST is for
networked systems, so I don't see why there should be a restriction on
servers acting as clients.

Now, for FOREST to be a pattern and not a style means it is applicable
to tactical design. That means it is local, and thus may not be
applicable to the whole system. In fact, since you actually go into
details of implementation, it IS a pattern. What I mean is that your
REST applications may contain parts implemented using FOREST and others
using other patterns. Thus, you need to:

1. Identify the context where your pattern can be used (as the title
suggests, perhaps cases where we need integration).
2. Identify the particular problem it is solving.
3. You already defined the solution, even getting down to proposing an
implementation.
4. List the consequences of applying it.

Will read it again, trying to understand each bit.

Cheers!
William Martinez

--- In rest-discuss@yahoogroups.com, "duncan_b_cragg" <rest-discuss@...> wrote:
>
> I've expanded a little on the motivation behind FOREST in this blog post:
>
> http://duncan-cragg.org/blog/post/forest-get-only-rest-integration-pattern/
>
> Duncan Cragg
>
I see, thanks to those who've pitched in. So, going back to humbler
expectations: how do you guys think it is best to 'instruct' this
'generic rest client' properly? What do we expect as inputs and
outputs for this client, and what steps should it take? Moreover, how
do we express these instructions to the client? I'd sure hate to code
everything manually. A DSL might make sense here. Any thoughts?
On Oct 10, 2009, at 10:34 PM, Josh Sled wrote:
> [...]
Jan Vincent Liwanag
jvliwanag@...
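For illustration, one possible shape for such instructions (entirely invented, and plain Python data rather than a dedicated DSL) could be a declarative list of steps that a small driver interprets on the fly:

```python
# Invented example: a declarative description of the interactions a
# generic client should perform. A driver would resolve each step's
# link relation against the hypermedia it actually receives at run time.

description = [
    {"step": "entry",  "method": "GET",  "rel": "start"},
    {"step": "search", "method": "GET",  "rel": "search", "inputs": ["q"]},
    {"step": "create", "method": "POST", "rel": "orders", "inputs": ["item", "qty"]},
]

def expected_inputs(desc, step):
    """Look up which input fields a named step requires."""
    for entry in desc:
        if entry["step"] == step:
            return entry.get("inputs", [])
    raise KeyError(step)

print(expected_inputs(description, "create"))
# -> ['item', 'qty']
```

The point of the shape is that nothing in it is a hard-coded URI: the driver would discover actual URIs by matching `rel` values in responses, so the description stays stable as the server evolves.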
Jan:
My current thinking on this idea is that the OPTIONS response could
carry a Link header and/or a body containing a link to one or
more HTML documents. The documents could contain info geared toward
both humans and machines.
In fact, I've been toying with some document format that could be used
by HTTP clients to generate a test-able UI (similar to the way SOAP
clients use WSDL) and, right now, my favorite format for this work is
XHTML. It has both link (A) and FORM elements already defined. Using
XPATH or other tools, any HTTP client could render this information as
either inputs for humans or for a scripted machine client.
I've used this w/ a common browser that depends on a script library to
support scanning for link relations and supporting PUT and DELETE as
well as GET and POST. But I've not created any other client (desktop,
non-browser variety). My examples are quite limited and aimed at a
single application space, not a general tool.
mca
http://amundsen.com/blog/
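For illustration, the scripted-machine half of that idea might look as below in Python. The XHTML snippet and its field names are invented, and ElementTree stands in for a full XPATH engine:

```python
# Parse a (hypothetical) XHTML service description and turn its FORM
# into the pieces of an HTTP request a scripted client could submit.
import xml.etree.ElementTree as ET

XHTML = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <a rel="orders" href="/orders">orders</a>
    <form action="/orders" method="post">
      <input type="text" name="item" />
      <input type="text" name="qty" />
    </form>
  </body>
</html>"""

NS = {"x": "http://www.w3.org/1999/xhtml"}

def form_to_request(doc, values):
    """Build (method, uri, body) from the first FORM in the document."""
    root = ET.fromstring(doc)
    form = root.find(".//x:form", NS)
    fields = [i.get("name") for i in form.findall(".//x:input", NS)]
    body = "&".join("%s=%s" % (name, values[name]) for name in fields)
    return form.get("method").upper(), form.get("action"), body

print(form_to_request(XHTML, {"item": "book", "qty": "2"}))
# -> ('POST', '/orders', 'item=book&qty=2')
```

Given values for the discovered fields, the tuple maps directly onto an HTTP request; a human-facing client could instead render the same FORM as a test UI.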
On Sat, Oct 10, 2009 at 20:57, Jan Vincent <jvliwanag@...> wrote:
> [...]
Josh,
You articulated very well what an improved client could do. In Java land,
the best are HttpClient and HttpURLConnection, and they don't come anywhere
close (I've also heard Restlet's client is pretty snazzy too, but I haven't
used it). However, these are pretty low level, handling keep-alives, proxies
and redirections; all of that is specific to the HTTP protocol (which
shouldn't be a surprise, since these are HTTP libraries after all).

There is a need for a client that lifts up what can be handled by the
runtime, but ultimately how the machine interacts with the resource is the
definition of the application that consumes the resource. I'm not sure it
can ever be completely automated.
-Noah
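For illustration, the "lifting" layer described above might be sketched as below; the transport is a stubbed callable rather than a real HTTP library, and all behavior shown is an assumption about what such a layer would do:

```python
# Sketch of a client layer that absorbs 503 + Retry-After and
# conditional GET (ETag / 304) so application code never sees them.
# `send` is any callable (method, uri, headers) -> (status, headers, body).

class LiftingClient:
    def __init__(self, send, sleep=lambda s: None):
        self.send = send
        self.sleep = sleep
        self.cache = {}  # uri -> (etag, body)

    def get(self, uri):
        headers = {}
        if uri in self.cache:
            headers["If-None-Match"] = self.cache[uri][0]
        status, resp_headers, body = self.send("GET", uri, headers)
        if status == 503:
            # Honor Retry-After, then try once more.
            self.sleep(int(resp_headers.get("Retry-After", "1")))
            status, resp_headers, body = self.send("GET", uri, headers)
        if status == 304:
            return self.cache[uri][1]      # validated: reuse cached body
        if status == 200 and "ETag" in resp_headers:
            self.cache[uri] = (resp_headers["ETag"], body)
        return body

# Scripted fake transport: first a 200 with an ETag, then a 304.
responses = iter([
    (200, {"ETag": '"v1"'}, "hello"),
    (304, {}, ""),
])
client = LiftingClient(lambda m, u, h: next(responses))
print(client.get("/greeting"))  # -> hello
print(client.get("/greeting"))  # -> hello (served from cache via 304)
```

Application code just calls get() and receives a body; retries, validators, and 304s stay inside the layer. Whether the remaining application-specific interaction can ever be automated is the open question above.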
On Fri, Oct 9, 2009 at 4:44 PM, Josh Sled <jsled@asynchronous.org> wrote:
> [...]
Then it is worse than I thought; it seems that in the collective
subconscious of people here, to say REST is to say REST over HTTP only,
which IMO is a terrible limitation for an enterprise architecture,
based or not on REST.

Josh Sled wrote:
> [...]
Hi António,

> Then it is worse than I thought, it seems that in the collective
> subconscious of people here, to say REST is to say REST over HTTP
> only, which IMO is a terrible limitation for an enterprise
> architecture, whether or not it is based on REST.

Why is HTTP so limiting for an enterprise architecture?

OK, so it's not low latency* (the Web trades latency for scalability), but not that many enterprise systems really need low latency.

Jim

* On a related note, WebHooks and PubSubHubbub aren't low latency either.
On Sun, Oct 11, 2009 at 3:49 PM, Jim Webber <jim@...> wrote:
> Hi António,
>
>> Then it is worse than I thought, it seems that in the collective
>> subconscious of people here, to say REST is to say REST over HTTP
>> only, which IMO is a terrible limitation for an enterprise
>> architecture, whether or not it is based on REST.
>
> Why is HTTP so limiting for an enterprise architecture?
>
> OK, so it's not low latency* (the Web trades latency for scalability),
> but not that many enterprise systems really need low latency.

While acknowledging that REST is applicable outside of HTTP, one would want to have one hell of a good reason to deviate from it (e.g. performance, latency), as using it as the "universal interface" significantly lowers the barrier to entry and in turn the costs of implementing and consuming it.

Sam
Hello. Actually, I would give a +1.

1. Generic may not be a client per se, but a framework that allows one to create a client by some sort of configuration (a description language? It may not be adopted, since it recalls WSDL, which is hated nowadays and misunderstood) or training, where it learns the semantics.
2. A standardized (or at least agreed-upon) set of ways to define links to represent those semantics. (WSDL-like again?)
3. The actual particular code, AKA the plugin part you will create.

Just a word about WSDL. The biggest mistake was the code generation. If WSDL were interpreted on the fly, and the interactions dynamically followed, I guess people would not nonsensically hate it as they do now. So, that framework IS NOT A CODE GENERATION thing. It should read and process the config on the fly, and the config is to be provided by the first URL you hit in the system. See?

Cheers!
William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
>
> adopting a "plug-in" strategy means you might be able to write against the
> Mozilla code-base and get a lot of the HTTP client "for free."
>
> mca
> http://amundsen.com/blog/
>
> On Fri, Oct 9, 2009 at 21:50, Dilip Krishnan <dilip.krishnan@...> wrote:
>> +1
>> I've been thinking about this myself. IMO, the key to solving the problem
>> is to
>> 1. Tie resource links, i.e. <link rel="<operation>"> (hypermedia), to a known
>> set, an ontology of operations, perhaps; or a finite set of operations
>> 2. have the client handle different media types as "plugins" ... just the
>> way "web" browsers work.
> > > > IMHO, ATOM pub services are an easier subset of RESTful services to have a > > universal client for > > > > Best, > > -Dilip > > > > > > > > On Fri, Oct 9, 2009 at 9:16 PM, wahbedahbe <andrew.wahbe@...> wrote: > > > >> > >> > >> --- In rest-discuss@yahoogroups.com, Jan Vincent <jvliwanag@> wrote: > >> > Given the guidelines REST proposes, is there a generic client for > >> > RESTful services? > >> > >> Absolutely! But they aren't specific to REST, they are specific to a > >> uniform interface (e.g. URI + HTTP + a media format). > >> > >> Web browsers are generic clients for URI + HTTP + HTML. > >> In the telephony space, Voice browsers are generic clients for URI + HTTP > >> + VoiceXML. Some Voice browsers also understand CCXML -- which is an > >> interesting format because it's machine driven. It stands as a counter > >> example to the argument that a generic client must be guided by a human > >> being. > >> > >> The ability to write a generic client that is able to work with any > >> service that adheres to a uniform interface is one of the key benefits of > >> REST IMO. > >> > >> Regards, > >> > >> Andrew > >> > >> > >> > >> ------------------------------------ > >> > >> Yahoo! Groups Links > >> > >> > >> > >> > > > > > > >
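The ideas in this sub-thread (a framework configured at runtime rather than by code generation, plus media-type handlers as "plugins") can be combined into a small sketch. This is a hypothetical illustration, not a real framework: the media types, the fake responses, and the `GenericClient`/`home_handler` names are all invented for the example.

```python
import xml.etree.ElementTree as ET

# Stand-in for the network: uri -> (media type, body). In a real system
# these would come from HTTP responses, starting at the one entry URL.
FAKE_SERVER = {
    "/": ("application/vnd.example.home+xml",
          '<home><link rel="accounts" href="/accounts"/></home>'),
    "/accounts": ("application/vnd.example.accounts+xml",
                  '<accounts><account href="/accounts/010123101"/></accounts>'),
}

class GenericClient:
    """A client with no baked-in knowledge: behavior comes from plugins
    keyed by media type, discovered and dispatched on the fly."""

    def __init__(self):
        self.plugins = {}   # media type -> handler function

    def register(self, media_type, handler):
        self.plugins[media_type] = handler

    def get(self, uri):
        media_type, body = FAKE_SERVER[uri]     # stand-in for an HTTP GET
        handler = self.plugins.get(media_type)
        if handler is None:
            raise ValueError("no plugin for " + media_type)
        return handler(body)

def home_handler(body):
    """Plugin for the (hypothetical) home media type: extract rel -> href."""
    root = ET.fromstring(body)
    return {link.get("rel"): link.get("href") for link in root.findall("link")}
```

Usage: register `home_handler` for the home media type, `GET` the entry URI, and follow the links the plugin returns; nothing else about the server's URI space is known in advance.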
António Mota <amsmota@...> writes:
> Then it is worst than I thought, it seems then that in the collective
> subconscious of people here, to say REST is to say REST over HTTP only, which
> IMO is a terrible limitation for a enterprise architecture, based or not in
> REST.
No, not "only". But usually, practically.
What other protocols or systems share the same REST constraints?
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
Josh Sled wrote:
> António Mota <amsmota@...> writes:
>
>> Then it is worse than I thought, it seems that in the collective
>> subconscious of people here, to say REST is to say REST over HTTP only, which
>> IMO is a terrible limitation for an enterprise architecture, whether or not it
>> is based on REST.
>
> No, not "only". But usually, practically.
>
> What other protocols or systems share the same REST constraints?

I don't understand. I thought "constraints" were something that one imposes on something; in the case of REST, in order to build an architecture that conforms to that style, the architect or designer has to impose those constraints on their design.

So if my interpretation is correct, no protocols or systems share the same REST constraints unless you impose them. Maybe not so for the protocols, but for sure for the systems. But REST is a system architectural style, not a protocol architectural style; you design systems that follow that style, not protocols... Protocols are the way you connect the user agents to the resources on the server, or am I wrong?
I do have a hell of a time trying to imagine an enterprise architecture that connects to the outside using only HTTP. Actually, in the infrastructure we tried to RESTify, we only use HTTP internally. So when we decided to expose our services through REST resources, we knew that the same resource would have to be accessed in multiple ways, not only HTTP, and it didn't make any sense to expose the same services in several ways, with only the HTTP one being RESTful.

I mean, what company doesn't now use email, or JMS for instance, in their day-to-day operations? We do, using different connectors to connect to our resources, over protocols like JMS and IMAP and intra-VM and I think some more. If this isn't a very usual case in companies everywhere I would be much surprised...

Sam Johnston wrote:
> On Sun, Oct 11, 2009 at 3:49 PM, Jim Webber <jim@...> wrote:
>
>> Hi António,
>>
>>> Then it is worse than I thought, it seems that in the collective
>>> subconscious of people here, to say REST is to say REST over HTTP
>>> only, which IMO is a terrible limitation for an enterprise
>>> architecture, whether or not it is based on REST.
>>
>> Why is HTTP so limiting for an enterprise architecture?
>>
>> OK, so it's not low latency* (the Web trades latency for scalability),
>> but not that many enterprise systems really need low latency.
>
> While acknowledging that REST is applicable outside of HTTP, one would
> want to have one hell of a good reason to deviate from it (e.g.
> performance, latency), as using it as the "universal interface"
> significantly lowers the barrier to entry and in turn the costs of
> implementing and consuming it.
>
> Sam
Sam Johnston wrote:
> While acknowledging that REST is applicable outside of HTTP, one
> would want to have one hell of a good reason to deviate from it

Wouldn't a client requirement be a good reason?
António Mota wrote:
> Then it is worse than I thought, it seems that in the collective
> subconscious of people here, to say REST is to say REST over HTTP
> only, which IMO is a terrible limitation for an enterprise
> architecture, whether or not it is based on REST.

While it is true that REST is protocol-independent, is there some other RESTful protocol besides HTTP out there that I don't know about?

-Eric
HTTPS :)

FTP and its ilk (SFTP, etc.)

On Oct 11, 2009, at 7:48 PM, Eric J. Bowman wrote:
> António Mota wrote:
>
>> Then it is worse than I thought, it seems that in the collective
>> subconscious of people here, to say REST is to say REST over HTTP
>> only, which IMO is a terrible limitation for an enterprise
>> architecture, whether or not it is based on REST.
>
> While it is true that REST is protocol-independent, is there some
> other RESTful protocol besides HTTP out there that I don't know about?
>
> -Eric
António Mota wrote: > > I mean, what company doesn't use now email, or JMS for instance, in > their day to day operations? We do it, using diferent connectors to > connect to our resources, in protocols like JMS and IMAP and intra-VM > and I think some more. > You're misunderstanding Roy's thesis. (BTW, if you re-check this group's charter, you might find that rest-discuss is dedicated to discussions about the REST architectural style *as described in Roy's thesis*. While you may try to redefine REST as a reference to the community, everyone else here thinks REST specifically refers to Roy's thesis. So either accept that and stop making snarky comments about the thesis and criticizing answers that try to explain said thesis, or go start rest-discuss-discuss. Please read and try to understand the thesis, instead of cherry-picking from it to support your arguments without any understanding of context.) " REST does not restrict communication to a particular protocol, but it does constrain the interface between components, and hence the scope of interaction and implementation assumptions that might otherwise be made between components. For example, the Web's primary transfer protocol is HTTP, but the architecture also includes seamless access to resources that originate on pre-existing network servers, including FTP [107], Gopher [7], and WAIS [36]. Interaction with those services is restricted to the semantics of a REST connector. This constraint sacrifices some of the advantages of other architectures, such as the stateful interaction of a relevance feedback protocol like WAIS, in order to retain the advantages of a single, generic interface for connector semantics. " What this means is, if you aren't using HTTP, then you're limited to using only those aspects of some other protocol that match the semantics of a generic REST connector. 
If I have a URI and I want to allow others to GET a representation of that resource, then I can just as easily implement it as FTP -- provided that no content negotiation, caching, or other benefit of REST is required for the interaction. This does not mean that any protocol can be used to build a REST system, and it does not mean that any operation of some other protocol can be made RESTful. If you build a REST system, that doesn't mean you should be able to swap FTP for HTTP on every operation -- that is not what protocol independence means. FTP is *not* a RESTful protocol, but for some application interactions FTP may very well be substituted for HTTP without breaking the uniform interface constraint, while there's no way to model FTP's MGET method as a single, RESTful interaction. HTTP 1.1 is the only protocol out there which was specifically designed as a REST application protocol. FTP is just a transport protocol. I'm not counting Atom Protocol since it requires HTTP. A REST protocol like AtomPub couldn't have been based around any other protocol, because there aren't any other REST application protocols to choose from (at least until Roy gets off his butt and finishes Waka... ;-). > > I mean, what company doesn't use now email, or JMS for instance, in > their day to day operations? We do it, using diferent connectors to > connect to our resources, in protocols like JMS and IMAP and intra-VM > and I think some more. > That's all fine and good, provided that you're implementing a generic interface as seen from the outside world. REST certainly allows you to implement a layer between your internal JMS, IMAP or whatever other components you have, and the WWW, using HTTP. Roy's thesis again: " A disadvantage of the uniform interface is that it may reduce network performance if the data needs to be converted to or from its natural format. 
" IOW, if you have some JMS or IMAP component, you may need to implement a layer to convert from that "natural format" to one that's compatible with HTTP. REST doesn't imply that IMAP interactions can be made RESTful -- it does imply that you can use HTTP to access a resource that's normally managed by IMAP. Think Webmail, where you can use HTTP to interact with your e-mail account from any Web browser, instead of being limited to using an e-mail client that speaks IMAP. So you can't make IMAP wholly RESTful, but you can design a REST system using HTTP to accomplish the same goals for a user, by encapsulating your IMAP system within a REST layer. (Not that I've ever seen a RESTful Webmail system, but there's no reason one can't be built.) -Eric
Noah Campbell wrote:
> HTTPS :)

That's just an extension of HTTP.

> FTP and it's ilk (SFTP, etc.)

FTP is not a RESTful application protocol, it is a transfer protocol. Please see my other response to this thread where I elaborate on this.

-Eric
António Mota wrote:
> So if my interpretation is correct, no protocols or systems share the
> same REST constraints unless you impose them. Maybe not so for the
> protocols, but for sure for the systems. But REST is a system
> architectural style, not a protocol architectural style, you design
> systems that follow that style, not protocols... Protocols are the
> way you connect the user agents to the resources on the server, or am
> I wrong?

Atom Protocol is RESTful, because it applies the constraints of REST to the HTTP application protocol. While you can constrain FTP interactions to a uniform interface and build a system that way, the result won't be RESTful because the interactions won't be visible to intermediaries and therefore can't be cached. An application protocol like HTTP isn't RESTful until you apply constraints to your use of HTTP. Atom Protocol is more of an API than a protocol -- it defines an HTTP-based Uniform Interface for Atom representations of resources.

As is clearly described in Chapter 6, HTTP 1.1 was designed using REST to guide the process, resulting in a protocol which may be used RESTfully, but may also be mis-used if the constraints of REST aren't applied. REST is an architectural style which applies to protocols, APIs or entire systems.

To use a lame architectural metaphor, the Gothic style may be applied to structures of widely divergent purpose, for example: a cottage, a cathedral, or a bridge. A bridge is infrastructure, i.e. a protocol, while a cathedral is an application, i.e. a system -- but the architectural style (pointed arches, etc.) is the same for both.

-Eric
mike amundsen wrote:
> - what would it take build a "better browser"; one that supports PUT
> and DELETE, understands a wider range of link relations?

An XForms 1.1 extension (a few are available) to a browser supports PUT, DELETE, or any other request method you need. Browsers may also be extended to support new link relations.

We already have a generic HTTP client, though, don't we? It's called curl. If an application really is RESTful, then I can make any request, and decipher the returned representation to determine the choice of state transitions. Then I can use curl to formulate whatever request I need to achieve the next application state. The libcurl library is, to my thinking, a full-featured generic HTTP client. (Or REST client, since you can specify FTP URIs, etc.)

-Eric
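The "decipher the returned representation to determine the choice of state transitions" step is the part curl doesn't do for you. A minimal, hypothetical sketch of that deciphering, using only the Python standard library (the `TransitionFinder` name and the sample markup are invented for illustration):

```python
from html.parser import HTMLParser

class TransitionFinder(HTMLParser):
    """Collect the state transitions a hypermedia representation offers:
    <a href> links become GETs, <form> elements expose their method/action."""

    def __init__(self):
        super().__init__()
        self.transitions = []   # list of (method, uri) pairs

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "href" in a:
            self.transitions.append(("GET", a["href"]))
        elif tag == "form":
            self.transitions.append(
                (a.get("method", "GET").upper(), a.get("action", "")))

# Decipher a (made-up) representation; each transition could then be
# turned into the next curl invocation.
finder = TransitionFinder()
finder.feed('<a href="/orders">orders</a>'
            '<form method="post" action="/orders"></form>')
```

A client built this way hard-codes no URIs: it only chooses among the transitions the server advertised in the last response.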
António Mota wrote:
> What I don't understand is why in a post that started with the
> subject "Generic REST client" people were so quick to turn it into a
> "generic HTTP client". I mean, we all agree that the REST
> architecture style is protocol independent (don't we?)

Yes, but we don't all understand what that means.

> every time someone talks about other protocols the posts are silently
> ignored... So, a Generic REST client *has* to be an HTTP one? Isn't
> there REST life outside HTTP?

A generic REST client determines which protocol to use based on the URI of the request, just like curl or a Web browser does. HTTP 1.1 remains the only application protocol available to build RESTful systems, so it doesn't make sense to discuss generic REST clients in terms of some other protocol.

-Eric
Hi William! Thanks for the reply. > First, why the resource is "at" one server. It should be location-less, or in IT words "shared". Second, a resource may have all the URIs you want, or that it needs, that is not a restriction. Sorry - I don't understand this! =0( > Third, I plenty support that resources are locatable and accessible by any node in a network, and REST is for networking systems, so I don't see why is there a restriction for servers to be clients. Good... I think... ! > Now, as FOREST to be a pattern and not an style, means it is applicable to tactic design. That means it is local, and thus may not be applicable to the whole system. In fact, as you actually go into details of implementation, it IS a pattern. No - it's a whole-system Pattern. If Patterns have to be local, then I have to stop calling it a Pattern. Any ideas? 'Sub-style'? Ug. > What I mean is that your REST applications may contain parts implemented using FOREST and others using other patterns. Thus, you need: > 1. Identify the context where your pattern can be used (as the title suggest, cases where we need integration may be). Yah - Enterprise Mashups! =0) > 2. Identify the particular problem it is solving. OK - this is tricky, because I think it has a very wide applicability. It may even satisfy /all/ your Integration Needs! =0) > 3. You already defined the solution, even getting down to propose implementation. Wait till I write the FOREST prototype for Jetty... I'm going to call it "a type of FOREST that begins with 'J' - for Java": 'Jungle'! =0) > 4. List the consequences of applying it. Ka-ching! $$ Profit!! =0) Can't really say this until I've got some real-world experience of applying 'Jungle' and the FOREST Pattern/Sub-Style. > Will read it again, trying to understand each bit. Thanks - again, I really appreciate the feedback.. If it resonates with you (or anyone else) I'd be really happy to explore the ideas with you, and to be challenged on the details. 
Duncan Cragg >> I've expanded a little on the motivation behind FOREST in this blog post: >> >> http://duncan-cragg.org/blog/post/forest-get-only-rest-integration-pattern/
I don't think I'm misunderstanding anything; when I read an academic dissertation or other philosophical paper I usually interpret it (thankfully I don't do it often). I would be very surprised if a "DISSERTATION submitted in partial satisfaction of the requirements for the degree of DOCTOR OF PHILOSOPHY in Information and Computer Science" had only one interpretation.

Now what you can argue is that my interpretation is different than yours, and you can even argue that yours is closer to the intentions of Mr Fielding, and that's OK, maybe you're right. But even then, you should say "my understanding of Roy's thesis is different from yours" instead of "you're misunderstanding Roy's thesis"... Unless you're Roy...

Nevertheless, my interpretation actually is similar to the bigger part of what you said, but what you're describing at the end of your post is what is referred to as "tunnelling", and that's not what we're doing. We are not tunnelling HTTP over other protocols or other protocols over HTTP. We could have done that, and it would have been easier for us, since we're partially based on Jersey and Jersey kind of supports that. Instead, I've rewritten significant parts of what Jersey does with HTTP in order to decouple the HTTP part from the REST part, in order to use the latter with other connectors that work over other protocols. Basically, we're applying the same RESTful constraints that are "natural" in HTTP to other protocols. Of course, in order to do so we lose something, but that's not the point.

Our aim here is to provide a service to our clients so they can give us money in return. That is provided partially by our software applications. We build software to that aim. We don't build protocols, we use them. And we're trying to do it in a RESTful way because we understand that there is value for us in doing so. But then again, in my understanding of Roy's thesis, the constraints are to be applied to the architecture, not to the communication or the protocol.
That the communication, and thus the protocol, is part of that architecture, that is true. That some protocols are more "natural" fits for those constraints, that is also true. But people here focus too much on "protocol" and less on "user-agents" and "connectors", which are also part of the architecture.

Regarding your commentary about what you think my intentions are, and ignoring your disdain and tone of superiority, let me assure you I'm not trying to redefine anything, nor do I have the intellectual capabilities to do so, but I am trying to understand REST. But more than that, I'm trying to apply REST to a real-world scenario, not trying to philosophize about it. And so, I have to learn as I go, because unfortunately I have neither the time nor the money to do it otherwise. I understand that there are some superior minds on this list that aren't interested in these minor questions and doubts, but then again this is rest-discuss, not rest-philosophical-discuss...

Eric J. Bowman wrote:
> António Mota wrote:
>
>> I mean, what company doesn't use now email, or JMS for instance, in
>> their day to day operations? We do it, using diferent connectors to
>> connect to our resources, in protocols like JMS and IMAP and intra-VM
>> and I think some more.
>
> You're misunderstanding Roy's thesis.
>
> (BTW, if you re-check this group's charter, you might find that
> rest-discuss is dedicated to discussions about the REST architectural
> style *as described in Roy's thesis*. While you may try to redefine
> REST as a reference to the community, everyone else here thinks REST
> specifically refers to Roy's thesis. So either accept that and stop
> making snarky comments about the thesis and criticizing answers that
> try to explain said thesis, or go start rest-discuss-discuss. Please
> read and try to understand the thesis, instead of cherry-picking from
> it to support your arguments without any understanding of context.)
> > " > REST does not restrict communication to a particular protocol, but it > does constrain the interface between components, and hence the scope of > interaction and implementation assumptions that might otherwise be made > between components. For example, the Web's primary transfer protocol is > HTTP, but the architecture also includes seamless access to resources > that originate on pre-existing network servers, including FTP [107], > Gopher [7], and WAIS [36]. Interaction with those services is > restricted to the semantics of a REST connector. This constraint > sacrifices some of the advantages of other architectures, such as the > stateful interaction of a relevance feedback protocol like WAIS, in > order to retain the advantages of a single, generic interface for > connector semantics. > " > > What this means is, if you aren't using HTTP, then you're limited to > using only those aspects of some other protocol that match the > semantics of a generic REST connector. If I have a URI and I want to > allow others to GET a representation of that resource, then I can just > as easily implement it as FTP -- provided that no content negotiation, > caching, or other benefit of REST is required for the interaction. > > This does not mean that any protocol can be used to build a REST > system, and it does not mean that any operation of some other protocol > can be made RESTful. If you build a REST system, that doesn't mean you > should be able to swap FTP for HTTP on every operation -- that is not > what protocol independence means. FTP is *not* a RESTful protocol, but > for some application interactions FTP may very well be substituted for > HTTP without breaking the uniform interface constraint, while there's no > way to model FTP's MGET method as a single, RESTful interaction. > > HTTP 1.1 is the only protocol out there which was specifically designed > as a REST application protocol. FTP is just a transport protocol. 
I'm > not counting Atom Protocol since it requires HTTP. A REST protocol > like AtomPub couldn't have been based around any other protocol, > because there aren't any other REST application protocols to choose > from (at least until Roy gets off his butt and finishes Waka... ;-). > > >> I mean, what company doesn't use now email, or JMS for instance, in >> their day to day operations? We do it, using diferent connectors to >> connect to our resources, in protocols like JMS and IMAP and intra-VM >> and I think some more. >> >> > > That's all fine and good, provided that you're implementing a generic > interface as seen from the outside world. REST certainly allows you to > implement a layer between your internal JMS, IMAP or whatever other > components you have, and the WWW, using HTTP. Roy's thesis again: > > " > A disadvantage of the uniform interface is that it may reduce network > performance if the data needs to be converted to or from its natural > format. > " > > IOW, if you have some JMS or IMAP component, you may need to implement > a layer to convert from that "natural format" to one that's compatible > with HTTP. REST doesn't imply that IMAP interactions can be made RESTful > -- it does imply that you can use HTTP to access a resource that's > normally managed by IMAP. Think Webmail, where you can use HTTP to > interact with your e-mail account from any Web browser, instead of > being limited to using an e-mail client that speaks IMAP. > > So you can't make IMAP wholly RESTful, but you can design a REST system > using HTTP to accomplish the same goals for a user, by encapsulating > your IMAP system within a REST layer. (Not that I've ever seen a > RESTful Webmail system, but there's no reason one can't be built.) > > -Eric >
António Mota wrote: > > Now what you can argue is that my interpretation is different then > yours, and you can even argue that yours is more close to the > intentions of Mr Fielding, and that's OK, maybe you're right. But > even then, you should say that "my understanding of Roy's thesis is > different from yours" instead of "you're misunderstanding Roy's > thesis"... Unless you're Roy... > So REST dies with Roy, because it's utterly incomprehensible to anyone else? Please. There are plenty of examples on this list of people who have taken the time to struggle with this subject, and believe it or not, learned what REST is. Since we all use the same definitions of the same terms, we can even have conversations about REST without resorting to "agreeing to disagree" because, believe it or not, there are right and wrong answers -- due to the clarity with which Roy's thesis describes his subject matter to *anyone* who makes the effort to understand the Computer Science basis of REST, even if that means reading one or more of the footnoted reference materials. The only definition of REST is Roy's thesis, and it is *not* open for interpretation. There's plenty of room to flesh out the things that aren't in the dissertation, but there is no room to re-define REST to fit your own definition of whatever you want it to be -- REST is not so abstract a notion that anything goes. If I'm wrong about something, I'm quite certain Roy will correct me if nobody else here (who knows what they're talking about) does first. REST isn't so theoretical that the only person capable of giving an answer here is Roy himself -- if that were the case, I seriously doubt anyone else would be much interested in REST. You, however, have not studied your ass off for years learning this material -- so stop reducing everything to a semantic debate. You have lots more listening and reading to do, before you're in a position to correct or even criticize the posts of others. 
You'd be a lot farther along with REST if you spent half as much time studying REST as you do criticizing and arguing with people on rest-discuss.

> Nevertheless, my interpretation actually is similar to the bigger part
> of what you said, but what you're describing at the end of your post
> is what is referred as "tunnelling" and that's not we're doing. We
> are not tunnelling HTTP over other protocols or other proctocols over
> HTTP. We could have done that, it had been more easy for us, since
> we're partially based on Jersey and Jersey kinda support that.
> Instead, I'v rewrited significant parts of what Jersey does with HTTP
> in order to decouple the HTTP part from the REST part, in order to
> use the later with other connectors that work over other protocols.
> Basically, we're applying the same RESTfull constraints that are
> "natural" in HTTP to other protocols. Of course that in order to do
> so we loose something, but that's not the point.

What I described has nothing to do with tunneling. Modeling a system on the Uniform Interface means that you're using methods which are as well defined as methods come. If you have some other method you need, then you redevelop your system to make it work with the Uniform Interface instead -- what you *don't* do is resort to tunneling custom methods over POST.

How on Earth can you read what I wrote, and conclude that I'm talking about tunneling? Oh yeah, you spend more time rebutting what others say than you do actually reading what they've written, let alone trying to understand it. As a result, you simply don't make any sense when you talk about applying the constraints of REST to a protocol like JMS -- a result of failing to take the time to learn what REST's constraints actually are. That or you're a troll -- I don't quite know what else to make of someone who has spent exponentially more time arguing about REST than they have spent trying to learn it.
> > But then again, in my understanding of Roy's thesis, the constraints > are to be applied to the architecture, not to the communication or > the protocol. That the communication and thus the protocol is part of > that architecture, that is true. That some protocols are are more > "natural" fits for those constraints, that is also true. But people > here focus too much in "protocol" and less in "user-agents" and > "connectors", that also are part of the architecture. > REST is defined as a set of constraints -- you don't apply constraints to an architecture, an architecture is defined by the constraints it imposes on a system. You apply REST's constraints to the communication between network connectors, independent of protocol. The hypertext constraint is applied to the message entity, and this has absolutely nothing to do with what network protocol is involved. Again, if you want to make every thread a discussion of the collective shortcomings of this group as a whole, please start rest-discuss-discuss. > > I understand that there are some superiors minds in this list that > aren't interested in these minor questions and doubts, but then again > this is rest-discuss, not rest-philosophical-discuss... > Enough already with all the boring ad-hominem *whining* in your posts. Please. REST is an architectural style, so of *course* discussions here are going to be philosophical in nature. Again, plenty of people on this list have managed to get REST sorted out over the years, with less help than is available to those just getting started now. But you seem to have a chip on your shoulder against anyone who *has* learned, and you seem like a troll for constantly demanding special treatment that none of the rest of us needed. The rest of us had to learn the hard way, find the REST wiki by reading the archive of this list or simply lurking here for awhile (instead of being told of it, but only *after* griping that no such thing existed, etc.)... what makes you so special? 
REST is hard; there are no shortcuts. You simply need to invest the time (most often measured in years) that it takes to learn any complex subject matter.

If I want to learn the Theory of Relativity, and actually use it in professional life, then there's no way around reading Einstein's work, and no way around reading and studying Physics before even getting to that point. It doesn't take an Einstein to grasp relativity theory (to come up with it in the first place, sure), nor is Einstein the only person capable of answering questions about it. Stephen Hawking can explain it to you, but only after you've taken the effort to learn Physics.

Yet somehow, it's a failure of the REST community for not providing authoritative "Cliff's Notes" which offer some easier alternative than reading Roy's thesis, for people who haven't studied software architecture. Your trouble learning REST is nobody's problem but your own, so stop trying to assign blame to others, or demanding that others accept "your interpretation" of terms as valid before trying to answer your questions, etc.

Learn the terminology, and REST won't seem nearly so ambiguous as it must seem in order for you to think that it's somehow open for interpretation. It isn't, but you can only know that *after* studying your ass off.

-Eric
It is easy to understand a HATEOAS mechanism where a resource contains a set of links to its related resources... but how do I represent a URI where a resource can be created? Like: I have a URI that accepts POST methods to create a new resource... How do I inform the client of that URI, and especially: how do I inform it of the parameter types supported by the POST method? If that is not applicable, what is the alternative? Or is HATEOAS just for reading data from the service?
Felipe, the server needs to send the client hypermedia in which the client finds all the information you are asking for below. For this to work, the client must understand the media type (aka implement it). AtomPub service documents are a perfect match to your needs, BTW. Jan On Oct 12, 2009, at 3:12 PM, Felipe Gaúcho wrote: > It is easy to understand a hateoas mechanism where a resource contains > a set of links for its related resources... > > but how I represent a URI where a resource can be created ? > > Like: > > I have a URI that accepts POST methods to create a new resource... > > How do I inform that URI to the client, and specially: how do I inform > the parameters types supported by the POST method ? > > if it is not applicable, what is the alternative ? > > or hateoas is just for reading data from the service? > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Eric: Yep, curl is a top-notch HTTP client. Personally, I'd like to see this part of your description: "... I can make any request, and decipher the returned representation to determine the choice of state transitions." more explicit in a client. IOW, I'd like to be able to see a more general way to inform machine clients of the nature of the state transitions available in order to make it easier for machine-clients to make these decisions w/o the need for human interaction. The easiest way to accomplish this would be to provide only one transition option for a machine client (until some end goal is reached resulting in no further available transitions). It seems the next easiest path is to decorate all available state-transitions with "rel" values already understood by the HTTP client thus allowing the machine client to "seek" a goal by using the available state-transitions. This is the area that interests me most right now. mca http://amundsen.com/blog/ On Mon, Oct 12, 2009 at 00:34, Eric J. Bowman <eric@...> wrote: > mike amundsen wrote: > >> >> - what would it take build a "better browser"; one that supports PUT >> and DELETE, understands a wider range of link relations? >> > > An XForms 1.1 extension (a few are available) to a browser supports PUT, > DELETE, or any other request method you need. Browsers may also be > extended to support new link relations. > > We already have a generic HTTP client, though, don't we? It's called > curl. If an application really is RESTful, then I can make any > request, and decipher the returned representation to determine the > choice of state transitions. Then I can use curl to formulate whatever > request I need to achieve the next application state. The libcurl > library is, to my thinking, a full-featured generic HTTP client. (Or > REST client, since you can specify FTP URIs, etc.) > > -Eric >
You put things in my mouth, intentionally or not, that I never said. And in doing so, you contradict yourself several times and you're turning this discussion into a series of personal insults. That you don't want to discuss things I said is up to you; you don't have to answer them anyway.

Bottom line on this thread, my point is that a REST client does not have to be an HTTP client. So a thread that started as "Generic REST client" should have been named "Generic HTTP client for REST". What I did was to point that out.

I could comment on some of your points - with which in some cases I agree, as I said in the earlier post - and in some cases clarify what I said or what you say I said (which is not the same), but it's visible that you're not interested in coming down from your high horse and you just dismiss as "trolling" anything that doesn't fit your way of thinking.

Note that you started your *first* response to a post of mine by saying "BTW, if you re-check this group's charter, you might find that rest-discuss is dedicated to discussions about the REST architectural style as described in Roy's thesis. While you may try to redefine REST as a reference to the community, everyone else here thinks REST specifically refers to Roy's thesis. So either accept that and stop making snarky comments about the thesis and criticizing answers that try to explain said thesis, or go start rest-discuss-discuss. Please read and try to understand the thesis, instead of cherry-picking from it to support your arguments without any understanding of context.)"

Now, if you read my previous posts on this thread, where did I try to redefine REST, make snarky comments, criticize answers, or cherry-pick anything? If what you wrote at the beginning of your answer isn't trolling, what is it? Who's trolling whom?

Eric J. 
Bowman wrote:

>> Now what you can argue is that my interpretation is different from yours, and you can even argue that yours is closer to the intentions of Mr Fielding, and that's OK, maybe you're right. But even then, you should say "my understanding of Roy's thesis is different from yours" instead of "you're misunderstanding Roy's thesis"... Unless you're Roy...

> So REST dies with Roy, because it's utterly incomprehensible to anyone else? Please.
>
> There are plenty of examples on this list of people who have taken the time to struggle with this subject and, believe it or not, learned what REST is. Since we all use the same definitions of the same terms, we can even have conversations about REST without resorting to "agreeing to disagree" because, believe it or not, there are right and wrong answers -- due to the clarity with which Roy's thesis describes his subject matter to *anyone* who makes the effort to understand the Computer Science basis of REST, even if that means reading one or more of the footnoted reference materials.
>
> The only definition of REST is Roy's thesis, and it is *not* open for interpretation. There's plenty of room to flesh out the things that aren't in the dissertation, but there is no room to re-define REST to fit your own definition of whatever you want it to be -- REST is not so abstract a notion that anything goes. If I'm wrong about something, I'm quite certain Roy will correct me if nobody else here (who knows what they're talking about) does first. REST isn't so theoretical that the only person capable of giving an answer here is Roy himself -- if that were the case, I seriously doubt anyone else would be much interested in REST.
>
> You, however, have not studied your ass off for years learning this material -- so stop reducing everything to a semantic debate. 
> You have lots more listening and reading to do before you're in a position to correct or even criticize the posts of others. You'd be a lot farther along with REST if you spent half as much time studying REST as you do criticizing and arguing with people on rest-discuss.

>> Nevertheless, my interpretation actually is similar to the bigger part of what you said, but what you're describing at the end of your post is what is referred to as "tunnelling", and that's not what we're doing. We are not tunnelling HTTP over other protocols or other protocols over HTTP. We could have done that, and it would have been easier for us, since we're partially based on Jersey and Jersey kind of supports that. Instead, I've rewritten significant parts of what Jersey does with HTTP in order to decouple the HTTP part from the REST part, in order to use the latter with other connectors that work over other protocols. Basically, we're applying the same RESTful constraints that are "natural" in HTTP to other protocols. Of course, in order to do so we lose something, but that's not the point.

> What I described has nothing to do with tunneling. Modeling a system on the Uniform Interface means that you're using methods which are as well defined as methods come. If you have some other method you need, then you redevelop your system to make it work with the Uniform Interface instead -- what you *don't* do is resort to tunneling custom methods over POST.
>
> How on Earth can you read what I wrote, and conclude that I'm talking about tunneling? Oh yeah, you spend more time rebutting what others say than you do actually reading what they've written, let alone trying to understand it. As a result, you simply don't make any sense when you talk about applying the constraints of REST to a protocol like JMS -- a result of failing to take the time to learn what REST's constraints actually are. 
> That or you're a troll -- I don't quite know what else to make of someone who has spent exponentially more time arguing about REST than they have spent trying to learn it.

>> But then again, in my understanding of Roy's thesis, the constraints are to be applied to the architecture, not to the communication or the protocol. That the communication and thus the protocol is part of that architecture, that is true. That some protocols are more "natural" fits for those constraints, that is also true. But people here focus too much on "protocol" and less on "user-agents" and "connectors", which are also part of the architecture.

> REST is defined as a set of constraints -- you don't apply constraints to an architecture, an architecture is defined by the constraints it imposes on a system. You apply REST's constraints to the communication between network connectors, independent of protocol. The hypertext constraint is applied to the message entity, and this has absolutely nothing to do with what network protocol is involved. Again, if you want to make every thread a discussion of the collective shortcomings of this group as a whole, please start rest-discuss-discuss.

>> I understand that there are some superior minds on this list that aren't interested in these minor questions and doubts, but then again this is rest-discuss, not rest-philosophical-discuss...

> Enough already with all the boring ad-hominem *whining* in your posts. Please. REST is an architectural style, so of *course* discussions here are going to be philosophical in nature. Again, plenty of people on this list have managed to get REST sorted out over the years, with less help than is available to those just getting started now. But you seem to have a chip on your shoulder against anyone who *has* learned, and you seem like a troll for constantly demanding special treatment that none of the rest of us needed. 
> > The rest of us had to learn the hard way, find the REST wiki by reading > the archive of this list or simply lurking here for awhile (instead of > being told of it, but only *after* griping that no such thing existed, > etc.)... what makes you so special? REST is hard, there are no > shortcuts, you simply need to invest the time (most often measured in > years) that it takes to learn any complex subject matter. > > If I want to learn the Theory of Relativity, and actually use it in > professional life, then there's no way around reading Einstein's work, > and no way around reading and studying Physics before even getting to > that point. It doesn't take an Einstein to grasp relativity theory (to > come up with it in the first place, sure), nor is Einstein the only > person capable of answering questions about it. Stephen Hawking can > explain it to you, but only after you've taken the effort to learn > Physics. > > Yet somehow, it's a failure of the REST community for not providing > authoritative "Cliff's Notes" which offer some easier alternative than > reading Roy's thesis, for people who haven't studied software > architecture. Your trouble learning REST is nobody's problem but your > own, so stop trying to assign blame to others, or demanding that others > accept "your interpretation" of terms as valid before trying to answer > your questions, etc. > > Learn the terminology, and REST won't seem nearly so ambiguous as it > must seem, in order for you to think that it's somehow open for > interpretation. It isn't, but you can only know that *after* studying > your ass off. > > -Eric >
Felipe,

The simplest example is HTML itself. HTML is hypermedia, and the browser knows how to read it. It will try to get all the assets needed to render the page (when it sees an IMG, it fetches the image with an additional GET; the same goes for scripts and such).

Now, HTML has a way to describe a POST, using forms. A form is a structure with the fields (with types and all) and a URL to post the data to. See, not all links are for GET only.

So, in a similar way, though not necessarily using HTML, you can manage the interaction using a hypermedia format.

Hope this clarifies.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> Felipe,
>
> the server needs to send the client hypermedia in which the client
> finds all the information you are asking for below. For this to work,
> the client must understand the media type (aka implement it).
>
> AtomPub service documents are a perfect match to your needs, BTW.
>
> Jan
>
> On Oct 12, 2009, at 3:12 PM, Felipe Gaúcho wrote:
>
> > It is easy to understand a hateoas mechanism where a resource contains
> > a set of links for its related resources...
> >
> > but how I represent a URI where a resource can be created ?
> >
> > Like:
> >
> > I have a URI that accepts POST methods to create a new resource...
> >
> > How do I inform that URI to the client, and specially: how do I inform
> > the parameters types supported by the POST method ?
> >
> > if it is not applicable, what is the alternative ?
> >
> > or hateoas is just for reading data from the service?
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
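William's point about HTML forms can be sketched in code. Below is a minimal Python illustration of a client discovering a POST target and its parameter names purely from the hypermedia it received; the form, its action URI, and its field names are invented for the example and are not taken from any real service:

```python
from html.parser import HTMLParser

# Hypothetical form a server might return for "create person";
# the action URI and field names are illustrative only.
PAGE = """
<form action="/rest/data/person" method="post">
  <input type="text" name="firstName"/>
  <input type="text" name="lastName"/>
  <input type="submit" value="Create"/>
</form>
"""

class FormScanner(HTMLParser):
    """Collect each form's action, method, and input field names."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"action": a.get("action"),
                               "method": a.get("method", "get").upper(),
                               "fields": []})
        elif tag == "input" and self.forms and a.get("type") != "submit":
            # Each non-submit input is a parameter the client may send.
            self.forms[-1]["fields"].append(a.get("name"))

scanner = FormScanner()
scanner.feed(PAGE)
print(scanner.forms)
```

The client learns the URI, the method, and the parameter names from the representation alone; what the parameters *mean* still has to come from the media-type spec, as Jan says later in the thread.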
ok,

I am using "application/x-www-form-urlencoded" as my media type when I create resources on the server.. it works, nice and easy...

The question is: how does the client know what parameters it should submit to the server to create a new resource?

Is it "previous knowledge"?

In the case of AtomPub, there is a generic type used to create resources.. it is always the same type navigating between the client and the server.. but in case I want to use different types, how do I inform the client about the form contents?

* I am on this learning curve here, so please understand and help if I am just getting off the curve :)
Well, as far as I know, you don't simply "create" a resource, you have to follow the hypermedia the server sent you in the first place.

So suppose you enter an application for "creating things", using a well-known URI. That URI returns a page (the representation of that resource at that stage of the application) with several links pointing to other resources, like "create A" with a link to a resource that will create objects of type A, and the same for B and so on.

When you follow, say, the linkToA, the server will send you back a representation with the fields that are needed for A, as well as the URI where to post it.

That way, you don't have to have "previous knowledge", as each time the server drives you to what is needed.

That should be driven as well by the media types, I think, but I'm not sure how that should be done, whether it should be defined in the same representation the server sent you or the client has to know already what media type it is dealing with.

Hope it helps.

Felipe Gaúcho wrote:
>
> ok,
>
> I am using "application/x-www-form-urlencoded" as my media type when I
> create resources on the server.. it works, nice and easy...
>
> The question is: how does the client know what parameters it should submit
> to the server to create a new resource?
>
> Is it "previous knowledge"?
>
> In the case of AtomPub, there is a generic type used to create
> resources.. it is always the same type navigating between the client
> and the server.. but in case I want to use different types, how do I
> inform the client about the form contents?
>
> * I am on this learning curve here, so please understand and help if I
> am just getting off the curve :)
> When you follow, say, the linkToA, the server will send you back a
> representation with the fields that are needed for A, as well as the URI
> where to post it.

humm.. so for non-existent resources I should return by default the information about how to create them?

and, in this response, should I also include the types and ranges for the fields?

for example:

To create a resource A I need a form request containing a field called "name", a String with a maximum of 20 characters.

Will this information be found at the "create information" URI?

> That way, you don't have to have "previous knowledge", as each time the
> server drives you to what is needed.
>
> That should be driven as well by the media types, I think, but I'm not sure
> how that should be done, whether it should be defined in the same
> representation the server sent you or the client has to know already what
> media type it is dealing with.
>
> Hope it helps.

--
Looking for a client application for this service:
http://fgaucho.dyndns.org:8080/arena-http/wadl
On Oct 12, 2009, at 6:21 PM, Felipe Gaúcho wrote:

> ok,
>
> I am using "application/x-www-form-urlencoded" as my media type when
> I create resources on the server.. it works, nice and easy...

Assuming the type is not good. There should be hypermedia telling you which type to send (e.g. a form).

> The question is: how does the client know what parameters it should
> submit to the server to create a new resource?

The allowed parameters should be in the hypermedia spec (that is: in prose). Just stating them in a form (as you do with HTML) is not enough, since the client implementation has to 'know' their meaning. That means the parameters must be known at implementation time, and that means they must be in a spec.

> is it "previous knowledge"?

In the sense of what I say above - yes.

> In the case of AtomPub, there is a generic type used to create
> resources.. it is always the same type navigating between the client
> and the server..

Which type?

> but in case I want to use different types, how do I inform the client
> about the form contents?

AtomPub provides the <accept> element for this purpose.

> * I am on this learning curve here, so please understand and help if
> I am just getting off the curve :)

Keep reading and posting to verify your understanding. REST is actually very simple - it just takes some time to have that sudden shift in your brain. The good thing is that you really notice it when you indeed make that shift.

I find it helpful to think about how you would implement a client for Amazon. Think about what the client would need to understand from the hypertext received from Amazon, and what that implies for the media types that would have to be defined to make Amazon machine-consumable. And avoid lots of fixed URIs and semantic URIs (those that lead to hardcoding the URI construction process in the client). 
HTH, Jan > > > -------------------------------------- Jan Algermissen Mail: algermissen@acm.org Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
> Which type?

"entry", like in this example: http://tools.ietf.org/html/rfc5023#page-21

Is there a pre-assumed type "entry"?

<accept>application/atom+xml;type=entry</accept>
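As a sketch of how a client could act on the <accept> element, here is a small Python fragment that parses an AtomPub service document (loosely adapted from the RFC 5023 examples; the href is illustrative) and lists, for each collection, where to POST and which media types it accepts:

```python
import xml.etree.ElementTree as ET

# A minimal AtomPub service document; the href is illustrative.
DOC = """<?xml version="1.0" encoding="utf-8"?>
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Main Site</atom:title>
    <collection href="http://example.org/blog/main">
      <atom:title>My Blog Entries</atom:title>
      <accept>application/atom+xml;type=entry</accept>
    </collection>
  </workspace>
</service>"""

APP = "{http://www.w3.org/2007/app}"  # AtomPub namespace, ElementTree-style
root = ET.fromstring(DOC)

# For each collection: where to POST, and which media types it accepts.
collections = [(c.get("href"), [a.text for a in c.findall(APP + "accept")])
               for c in root.iter(APP + "collection")]
print(collections)
```

The client never hard-codes the collection URI; it discovers both the URI and the acceptable media type from the service document it fetched.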
On Sun, Oct 11, 2009 at 7:46 AM, William Martinez Pomares <wmartinez@...> wrote:
>
> Hello.
> Actually, I would give a +1.
>
> 1. Generic may not be a client per se, but a framework that allows
> one to create a client by some sort of configuration (description language?
> may not be adopted since it recalls WSDL, which is hated nowadays
> and misunderstood) or training where it learns the semantics.
> 2. A standardized (or at least agreed upon) set of ways to define
> links to represent those semantics. (WSDL-like again?)
> 3. The actual particular code, AKA the plugin part you will create.

But, in itself, isn't any "good" REST payload implicitly self-describing? It may not be a COMPLETE documentation of the API or the service, but in and of itself, in its own little world view, it should be a relatively complete description, at least at a rough level.

If you happen to be sending an XML payload, XML Schema can fill in many of the details.

Assuming you can interpret the link references, possibly combined with OPTIONS, then a client should be able to make available all of the state transitions, as well as be able to create properly formatted, though not necessarily populated, payloads.

I don't know how any document can communicate semantics to a machine, frankly. A Javadoc, for example, auto-generated from a class definition gives a nice, worthless summary of a class. But it certainly has the information necessary to handle the physical communication with that class.

> Just a word about WSDL. The biggest mistake was the code generation.
> If WSDL was to be interpreted on the fly, and the interactions dynamically
> followed, I guess people would not non-sensically hate it as they do now.
> So, that framework IS NOT A CODE GENERATION thing. It should read
> and process the config on the fly, and the config is to be provided by the
> first URL you hit in the system. See?

Code generation exists because of a limitation of static implementation languages. 
Dynamic languages don't need code generation precisely because they're dynamic languages.

People can, and have, written dynamic interfaces to WSDL web services, even for static languages. They're just a pain to use from the coder's point of view, and offer none of the value of a static language.

In either case, whether it's code-generated or a client, I don't understand where you get "dynamically followed". If a schema changed and eliminated previous fields that your code was populating, that's an error. It could in theory respond to new, optional but defaulted fields, but that's quite the edge case in general IMHO.

So, I guess I'm not clear on how folks think a system can interact "dynamically" with another system when there's, in the end, some code somewhere driving the transaction.

Regards,

Will Hartung
(willh@...)
Will: While I don't think it reasonable to replace the "human" in a machine-to-machine interaction, this part of your post is the one I am most interested in pursuing: "Assuming you can interpret the link references, possibly combined with OPTIONS, then a client should be able to make available all of the state transitions, as well as be able to create properly formatted, though not necessarily populated, payloads." I think it's quite possible to create a client that can seek a simple goal if you use a limited collection of rel values to decorate the available state-transitions and teach the client how to make decisions amongst the limited collection of rel values. It would also be important for the client to understand any input options/requirements, too. I think some useful bots or appliance-type applications can be built in this way, esp. since the REST constraints greatly simplify the details of machine-to-machine interaction. mca http://amundsen.com/blog/ On Mon, Oct 12, 2009 at 13:45, Will Hartung <willh@...> wrote: > On Sun, Oct 11, 2009 at 7:46 AM, William Martinez Pomares > <wmartinez@...> wrote: >> >> >> >> Hello. >> Actually, I would give a +1. >> >> 1. Generic may not be a client per se, but a framework that allows >> to create a client by some sort of configuration (description language? >> may not be adopted since it recalls WSDL which is hated nowadays >> and misunderstood) or training where it learns the semantics. >> 2. A standardized (or at least agreed upon) set of ways to define >> links to represent those semantics. (WSDL like again?) >> 3. The actual particular code, AKA the plugin part you will create. > > But, in itself, isn't any "good" REST payload implicitly > self-describing? It may not be a COMPLETE documentation of the API or > the service, but in and of itself and it's own little world view, it > should be a relatively complete description, at least at rough level. 
> > If you happen to be sending XML payload, XML Schema can fill in much > of the details. > > Assuming you can interpret the link references, possibly combined wit > OPTIONS, then a client should be able to make available all of the > state transitions, as well as be able to create properly formatted, > though not necessarily populated, payloads. > > I don't know how any document can communicate semantics to a machine, > frankly. A Java doc, for example, auto generated from a class > definition gives a nice, worthless summary of a class. But it > certainly has the information necessary to handle the physical > communication with that class. > >> Just a word about WSDL. The biggest mistake was the code generation. >> If WSDL was to be interpreted on the fly, and the interactions dynamically >> followed, I guess people would not non-sensically hate it as they do now. >> So, that framework IS NOT A CODE GENERATION thing. It should read >> and process the config on the fly, and the config is to be provided by the >> first URL you hit in the system. See? > > Code generation exists because of a limitation of the static > implementation languages. Dynamic languages don't need code generation > specifically because they're dynamic languages. > > People can, and have, written dynamic interfaces to WSDL web services, > even for static languages. They're just a pain to use from the coders > point of view, and offer none of the value of a static language. > > In either case, whether it's code generated or a client, I don't > understand where you get "dynamically" followed. > > If a schema changed, and eliminated previous fields that your code was > populating, that's an error. It could in theory respond to new, > optional but defaulted fields, but that's quite the edge case in > general IMHO. > > So, I guess I'm not clear on how folks think a system can interact > "dynamically" with another system when there's, in the end, some code > somewhere driving the transaction. 
> Regards,
>
> Will Hartung
> (willh@...)
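The "goal-seeking" client mca describes above can be sketched as a toy: a client that knows a small vocabulary of rel values and follows links until no understood transition remains. The documents, URIs, and rel names below are all invented for illustration; no real media type or service is implied:

```python
# In-memory stand-in for GET responses; each representation carries
# only links, which is all this toy client needs.
RESPONSES = {
    "/orders/1": {"links": [{"rel": "payment", "href": "/orders/1/payment"}]},
    "/orders/1/payment": {"links": [{"rel": "receipt", "href": "/orders/1/receipt"}]},
    "/orders/1/receipt": {"links": []},  # end state: nothing left to follow
}

KNOWN_RELS = ["payment", "receipt"]  # the client's entire rel vocabulary

def seek(start):
    """Follow the first understood rel at each step until none remain."""
    path, uri = [start], start
    while True:
        links = RESPONSES[uri]["links"]
        nxt = next((l["href"] for l in links if l["rel"] in KNOWN_RELS), None)
        if nxt is None:
            return path  # goal reached: no further transitions offered
        path.append(nxt)
        uri = nxt

print(seek("/orders/1"))
```

This matches mca's "next easiest path": the server decorates transitions with rel values the client already understands, and the client seeks its goal with no human in the loop.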
Hello list,

I am currently working on a web application, trying to apply REST principles as far as possible. We leverage Spring MVC facilities to design the server side to answer DELETE requests for resources.

Now, as we serve HTML in the default case, it's a requirement to present a web page to explicitly confirm the deletion of the resource. Wearing the REST glasses, I wonder how to handle this requirement. I see two options:

1. Expose the confirmation request as a separate resource reachable via GET, which returns an HTML representation including the form to trigger the actually intended DELETE request via HTML form quirks.

2. Append a request parameter (e.g. ?confirm=true) to the DELETE request to indicate there are "options" on the request method (in my case, true would return the HTML representation of the confirmation page).

From a simple like/dislike standpoint I'd prefer the second option, as I do not have to introduce a somewhat artificial additional resource. Nevertheless, I do not feel well with 2, as it clearly weakens the contract that comes with DELETE.

Any ideas, opinions, thoughts?

Regards,
Ollie
2009/10/12 Felipe Gaúcho <fgaucho@...>
>
> > When you follow, say, the linkToA, the server will send you back a
> > representation with the fields that are needed for A, as well as the URI
> > where to post it.
>
> humm.. so for non-existent resources I should return by default the
> information about how to create them?
>
> and, in this response, should I also include the types and ranges for
> the fields?
>
> for example:
>
> To create a resource A I need a form request containing a field called
> "name", a String with a maximum of 20 characters.
>
> Will this information be found at the "create information" URI?

I would say that information should be defined in the media type. For example, if you have application/vnd.something+xml with a Schema associated with that XML that both client and server are aware of, then the client has previous knowledge of how to handle the type application/vnd.something+xml. I don't know if it is possible to do that without that prior knowledge.
On Oct 12, 2009, at 11:13 AM, António Mota wrote: > I will say that information should be defined in the media-type, for > example, if you have application/vnd.something+xml with a Schema > associated to that XML, that both client and server are aware of, > i.e., the client has previous knowledge of how to handle type of > application/vnd.something+xml. I don't know if it is possible to use > that without that prior knowledge. In case the server is not using such media types, it is better to describe such details against the link rel. Subbu
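As a hedged illustration of Subbu's suggestion to describe such details against the link rel, a custom XML representation might embed the field descriptions inside the link itself. Everything here is invented for illustration — the rel URI, element names, and attributes are not from any real media type:

```python
import xml.etree.ElementTree as ET

# Hypothetical representation: a "create" transition described
# against its link rel, with field metadata embedded in the link.
DOC = """
<people>
  <link rel="http://example.org/rels/create-person"
        href="http://example.org/people">
    <field name="firstName" type="string" maxLength="20"/>
    <field name="lastName"  type="string" maxLength="20"/>
  </link>
</people>
"""

root = ET.fromstring(DOC)
link = root.find("link")
# A client coded against the rel's spec knows what these fields mean;
# here we just show that the metadata is machine-readable.
fields = [(f.get("name"), f.get("type"), f.get("maxLength"))
          for f in link.findall("field")]
print(link.get("rel"), link.get("href"), fields)
```

The semantics of the rel value still have to be documented in prose, as Jan notes; the representation only tells the client which fields exist and where to send them.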
Oliver:

Not sure if this is what you want/need, but one way to approach this is to grant users a "delete confirmation ticket" to allow them to delete an existing record. This can create a record of the confirmation (with any pertinent metadata such as the user asking for the delete, the record to delete, the date/time, etc.). It can also return a valid URI that includes a token proving the confirmation was requested and granted. This token might be the only way someone can successfully perform a DELETE operation on an existing record.

*** REQUEST
POST /delete-confirmations/
user-name=mike&record-id=123

*** RESPONSE
200 OK
Location: http://www.example.org/delete-confirmations/abc

*** REQUEST
GET /delete-confirmations/abc

*** RESPONSE
200 OK
<delete-confirmation>
  <link rel="delete" href="http://www.example.org/records/123?token=q1w2e3r4t5y6" />
</delete-confirmation>

*** REQUEST
DELETE /records/123?token=q1w2e3r4t5y6

*** RESPONSE
200 OK

mca
http://amundsen.com/blog/

On Mon, Oct 12, 2009 at 09:59, oliver.gierke <oliver.gierke@...> wrote:
> Hello list,
>
> I am currently working on a web application, trying to apply REST principles as far as possible. We leverage Spring MVC facilities to design the server side to answer DELETE requests for resources.
>
> Now, as we serve HTML in the default case, it's a requirement to present a web page to explicitly confirm the deletion of the resource. Wearing the REST glasses, I wonder how to handle this requirement. I see two options:
>
> 1. Expose the confirmation request as a separate resource reachable via GET, which returns an HTML representation including the form to trigger the actually intended DELETE request via HTML form quirks.
>
> 2. Append a request parameter (e.g. ?confirm=true) to the DELETE request to indicate there are "options" on the request method (in my case, true would return the HTML representation of the confirmation page). 
> > From a simple like/dislike standpoint I'd prefer the second option as I do not have to introduce a somewhat artificial additional resource. Nevertheless I do not feel well with 2, as it clearly weakens the contract that comes with DELETE. > > Any ideas, opinions, thoughts? > > Regards, > Ollie > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
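Mike's "delete confirmation ticket" flow above can be sketched on the server side in a few lines. This is only an illustration of the idea, not code from the thread: the function names (`create_confirmation`, `delete_record`) and the in-memory stores are invented; the thread only specifies the HTTP exchanges.

```python
# Hypothetical server-side sketch of the delete-confirmation ticket flow.
import secrets

records = {"123": {"owner": "mike"}}   # resources that may be deleted
confirmations = {}                     # issued token -> record id

def create_confirmation(user, record_id):
    """POST /delete-confirmations/ -- grant a one-time delete token
    and return the tokenized URI the client must use for DELETE."""
    token = secrets.token_hex(8)
    confirmations[token] = record_id
    return f"/records/{record_id}?token={token}"

def delete_record(record_id, token):
    """DELETE /records/{id}?token=... -- honoured only when a matching
    confirmation was granted; the token is consumed either way."""
    if confirmations.pop(token, None) != record_id:
        return 403  # no confirmation was granted for this record
    del records[record_id]
    return 200
```

As Will notes later in the thread, nothing stops a client from requesting the ticket and immediately using it, so this tracks confirmations without forcing a human "Are you sure?" step.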
Can you provide a short example of a description using rel? That's what I am focusing on now. I am just looking for good examples (please don't point to AtomPub).
On Mon, Oct 12, 2009 at 6:59 AM, oliver.gierke <oliver.gierke@...> wrote: > > > > From a simple like/dislike standpoint I'd prefer the second option as I do > not have to introduce a somewhat artificial additional resource. > Nevertheless I do not feel well with 2, as it clearly weakens the contract > that comes with DELETE. Why is it an artificial resource? You either need to track the confirmation or you don't. For many systems, the fact that the resource is deleted implies confirmation. What it sounds like to me is that you're working on some user interface aspect (i.e. the "are you sure" dialog), but you really don't care whether they're sure or not. That's why this is "an artificial resource". That's why a lot of systems do that confirmation solely in the client; it's not an aspect of the "application" API at all. But you can look at the confirmation problem in the same light as a security problem. The system needs confirmation that you are allowed to visit a resource. Typically this is done by granting the user a credential that it can present to the system to demonstrate the user has the appropriate privilege. By the same notion, you can make the users jump through the hoop of getting a delete credential, however short-lived it may be. But, consider, there's nothing stopping a client from performing the proper actions to get that credential, and then forwarding that credential to the system in order to delete a resource, and during that operation an actual user never needs to be prompted with "Hit OK to continue". So, you can make the DELETE confirmation a first-class concept in the underlying API, so you can track those confirmations, or you can simply add a confirmation step into the interactive workflow, a step that the DELETE operation is completely ignorant of. Regards, Will Hartung (willh@...)
Hello Duncan. --- In rest-discuss@yahoogroups.com, "duncan_b_cragg" <rest-discuss@...> wrote: > > Hi William! Thanks for the reply. > > > First, why the resource is "at" one server. It should be location-less, or in IT words "shared". Second, a resource may have all the URIs you want, or that it needs, that is not a restriction. > > Sorry - I don't understand this! =0( > Ok. In your blog you mention that you have one resource in one server. The location-less part in this case is that a resource cannot be bound to one server, since that hurts scalability. So, the resources are usually "shared" by various servers. Now I understand your post, I think. You describe a situation where two different applications are used, so each one has its own resource, which happens to be the same (semantically, at least). Well, not quite. Here you actually have two resources meaning different things, but a similar thing for the client. In this case, the client must work toward integration, by requesting from one app (A) and sending to the other (B) what this B one requires. Another way is to have B able to talk directly to A. Which I get from your post. > > Third, I fully support that resources are locatable and accessible by any node in a network, and REST is for networking systems, so I don't see why there is a restriction for servers to be clients. > > Good... I think... ! > > > Now, for FOREST to be a pattern and not a style means it is applicable to tactical design. That means it is local, and thus may not be applicable to the whole system. In fact, as you actually go into details of implementation, it IS a pattern. > > No - it's a whole-system Pattern. If Patterns have to be local, then I have to stop calling it a Pattern. Any ideas? 'Sub-style'? Ug. > Ok, I would say it is local, since it is a pattern for integration. You can read more about patterns, styles and idioms here: http://wmp-archi.blogspot.com/2009/10/styles-pattern-and-idioms.html. 
> > What I mean is that your REST applications may contain parts implemented using FOREST and others using other patterns. Thus, you need to: > > 1. Identify the context where your pattern can be used (as the title suggests, maybe cases where we need integration). > > Yah - Enterprise Mashups! =0) > > > 2. Identify the particular problem it is solving. > > OK - this is tricky, because I think it has a very wide applicability. It may even satisfy /all/ your Integration Needs! =0) > I would check the particular problem of integration you are solving. > > 3. You already defined the solution, even getting down to proposing an implementation. > > Wait till I write the FOREST prototype for Jetty... > > I'm going to call it "a type of FOREST that begins with 'J' - for Java": 'Jungle'! =0) > Good one! > > 4. List the consequences of applying it. > > Ka-ching! $$ Profit!! =0) > > Can't really say this until I've got some real-world experience of applying 'Jungle' and the FOREST Pattern/Sub-Style. > Actually, that is the other part of a Pattern that is required: a Pattern should be a proven solution that is normally used. So, I guess you are proposing a solution, and if everybody buys it, then you've got yourself a pattern! :D > > Will read it again, trying to understand each bit. > > Thanks - again, I really appreciate the feedback. If it resonates with you (or anyone else) I'd be really happy to explore the ideas with you, and to be challenged on the details. > > Duncan Cragg > Sure! It would be a pleasure! William Martinez Pomares
On Oct 12, 2009, at 8:56 PM, Felipe Gaúcho wrote: > can you provide me a short example of description using rel ? > > that's what I am focusing now.. I am just looking for good examples > (please don't point to atompub) > Well, AtomPub is probably the best example because it is somewhat the only one that stretches into the non-human web. :-) There is also - Opensearch (opensearch.org) - the IANA link relation registry (<http://www.iana.org/assignments/link-relations/link-relations.xhtml >) - Google's pubsub at http://code.google.com/p/pubsubhubbub There are more, but they don't come to mind right now. Actually, you can go a pretty long way with the above specs and some additional link relations. The hardest part is IMO to really figure out which interface issues are actually implementation details. Many aspects of an interface can be handled by e.g. a category on an AtomPub collection that tells the client a certain set of expectations to make about the collection. Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hola Felipe, Jan. The spec Jan mentions is actually the shared knowledge of the domain. When a human interacts with a form, the human knows what input is required just by looking at the name of the field. If it says password, it is obvious that a secret password is required. That is due to the general knowledge of users. As I said before, if in a banking app the server requests the account id, I would understand. Well, that glossary, that knowledge, is the minimum that is required, because if you need to send the actual definition of what an account is, then it would be almost impossible to construct a dynamic client. William Martinez. --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Oct 12, 2009, at 6:21 PM, Felipe Gaúcho wrote: > > > > > > > ok, > > > > I am using "application/x-www-form-urlencoded" as my media type when > > I create resources in the server.. it works, nice and easy... > > Assuming the type is not good. There should be hypermedia telling you > which type to send (e.g. a form). > > > > > > The question is: how does the client know what parameters it should > > submit to the server to create a new resource ? > > The allowed parameters should be in the hypermedia spec (that is: in > prose). Just stating them in a form (as you do with HTML) is not > enough, since the client implementation has to 'know' the meaning. > That means the parameters must be known at implementation time, and > that means they must be in a spec. > > > > > > > > is it "previous knowledge" ? > > In the sense of what I say above - yes. > > > > > In the case of AtomPub, there is a generic type used to create > > resources.. it is always the same type navigating between the client > > and the server.. > > Which type? > > > > but in case I want to use different types, how to inform the client > > about the form contents ? > > AtomPub provides the <accept> element for this purpose. 
> > > > > * I am in this learning curve here, so please understand and help if > > I am just getting out of the curve :) > > Keep reading and posting to verify your understanding. REST is > actually very simple - it just takes so much time to have that sudden > shift in your brain. The good thing is that you really notice it when > you indeed make such a twist. > > I find it helpful to think about how you would implement a client for > Amazon. Think what the client would need to understand from the > hypertext received from Amazon and what that implies for the media > types that would have to be defined to make Amazon non-human > consumable. And avoid lots of fixed URIs and semantic URIs (those that > lead to hardcoding the URI construction process in the client). > > > HTH, > > Jan > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- >
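On the mechanics of Felipe's chosen media type: producing an application/x-www-form-urlencoded body is trivial with the standard library. The parameter names here (`title`, `summary`) are made up; per Jan's point, the allowed names must come from the media-type spec or a hypermedia form, not from client guesswork.

```python
# Encoding and decoding a form-urlencoded entity body (hypothetical fields).
from urllib.parse import urlencode, parse_qs

# What a client would send as the body of a POST creating a resource.
body = urlencode({"title": "My entry", "summary": "First post"})

# What the server would recover from that body.
params = parse_qs(body)
```

The encoding is mechanical; the hard part, as the thread stresses, is that both sides must already agree on what "title" means.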
Hello Will. --- In rest-discuss@yahoogroups.com, Will Hartung <willh@...> wrote: (...) > > But, in itself, isn't any "good" REST payload implicitly > self-describing? It may not be a COMPLETE documentation of the API or > the service, but in and of itself and its own little world view, it > should be a relatively complete description, at least at a rough level. > > If you happen to be sending an XML payload, XML Schema can fill in much > of the details. > > Assuming you can interpret the link references, possibly combined with > OPTIONS, then a client should be able to make available all of the > state transitions, as well as be able to create properly formatted, > though not necessarily populated, payloads. > > I don't know how any document can communicate semantics to a machine, > frankly. A Javadoc, for example, auto-generated from a class > definition gives a nice, worthless summary of a class. But it > certainly has the information necessary to handle the physical > communication with that class. > Totally agree with you. I think you misunderstood. I was not talking about documentation from the server in a normal interaction. I was talking about a framework to create clients, at design time. The config or training data is to be given to the client when it is being constructed, so it is prepared to handle the payload from the server. There is, and we cannot dismiss it, a shared knowledge between client and server. The one that tells the client that the field named account contains account data. That is not transferred in the actual interaction. It should be there before. And the idea is to have a base construct for the client, that can read a config file (say, a mapping from names to fields in a database) so the client can "understand" the server requests. It's either that, or hard-code the data field names. See what I mean? > >(...) > Code generation exists because of a limitation of the static > implementation languages. 
Dynamic languages don't need code generation > specifically because they're dynamic languages. > Not sure I agree. Code generation is not actually a product of static languages, but of the static code-writing styles of developers. > People can, and have, written dynamic interfaces to WSDL web services, > even for static languages. They're just a pain to use from the coder's > point of view, and offer none of the value of a static language. > Exact same thing I'm saying above: the developer's fault. > In either case, whether it's code generated or a client, I don't > understand where you get "dynamically" followed. > > If a schema changed, and eliminated previous fields that your code was > populating, that's an error. It could in theory respond to new, > optional but defaulted fields, but that's quite the edge case in > general IMHO. > I actually do, and that is something I'm working on. You have not only dynamic interaction, but also dynamic payload creation. Those are two different problems. We have tools like XSD and semantic dictionaries to work on the second one, plus server versioning, even code on demand. For the first one we have other tools like OPTIONS and an out-of-band business definition. If correctly done, a server may add a new field and the client should be able to handle that without error. > So, I guess I'm not clear on how folks think a system can interact > "dynamically" with another system when there's, in the end, some code > somewhere driving the transaction. > Exactly: simply follow rules with no hard-coded decisions, and all you need to change are the actions and/or the events. > Regards, > > Will Hartung > (willh@...) > Cheers! William Martinez Pomares
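William's design-time config idea can be sketched very simply: the client reads a mapping from server field names to its local names, and ignores anything it was not configured for, so a server that adds a new field does not break it. The mapping and payload below are hypothetical, not from the thread.

```python
# Sketch of a client configured at design time with a field mapping.
# FIELD_MAP is the "config file" William describes: the shared
# knowledge, given to the client before any interaction happens.
FIELD_MAP = {"account": "account_id", "firstName": "first_name"}

def interpret(payload):
    """Translate known server fields to local names; silently skip
    fields the client was never configured for (e.g. new ones)."""
    return {FIELD_MAP[k]: v for k, v in payload.items() if k in FIELD_MAP}
```

This captures both of William's points: the semantics live in out-of-band config rather than hard-coded names, and an added server field is tolerated without error.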
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > Will: > > While I don't think it reasonable to replace the "human" in a > machine-to-machine interaction, this part of your post is the one I am > most interested in pursuing: > > "Assuming you can interpret the link references, possibly combined > with OPTIONS, then a client should be able to make available all of > the state transitions, as well as be able to create properly > formatted, though not necessarily populated, payloads." > > I think it's quite possible to create a client that can seek a simple > goal if you use a limited collection of rel values to decorate the > available state-transitions and teach the client how to make decisions > amongst the limited collection of rel values. It would also be > important for the client to understand any input options/requirements, > too. > > I think some useful bots or appliance-type applications can be built > in this way, esp. since the REST constraints greatly simplify the > details of machine-to-machine interaction. > > mca > http://amundsen.com/blog/ Another approach that I really think merits investigation is CCXML's. It is an XML format for a state machine that processes call control events (call being offered, answer call, call hung up, etc.). A CCXML hypermedia processor interacts with the underlying platform via asynchronous events. Events it receives advance the state machine. Event handlers processed on transitions send events back down to the platform, run local JavaScript, or cause page transitions via GET and POST. CCXML is completely machine driven -- no human guiding the processing. But the hypermedia document tells the CCXML engine how to map local events into network requests. Those requests yield other documents with another mapping. This model isn't that different from an HTML browser when you think about it. 
The browser receives keyboard and mouse events (or higher level events like "onchange") and sends "messages" down to an underlying platform to repaint the screen. Certain events cause a page transition to occur. My point is that you don't need to "replace the human" if you look at hypermedia document processing this way. You just need to map the information in the document to the client's domain model. That's what the "semantic info" is really doing -- but where people are having problems is that they only map the information one way (down to the platform). For example, microformats map the generic HTML tags to another domain model understood by certain clients. But they usually don't include a mapping from platform events to actions/event handlers in the markup. Filling in this gap facilitates much more sophisticated machine-driven processing. Regards, Andrew
Andrew: Yep, years ago, I did some work w/ calling services (before CCXML) and I can see how that space works really well for machine-to-machine work. I must admit that I bookmarked some CCXML links a while back when you brought it up here, but I've not been reading the links yet<g>. mca http://amundsen.com/blog/ On Mon, Oct 12, 2009 at 21:14, wahbedahbe <andrew.wahbe@...> wrote: > --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >> >> Will: >> >> While I don't think it reasonable to replace the "human" in a >> machine-to-machine interaction, this part of your post is the one I am >> most interested in pursuing: >> >> "Assuming you can interpret the link references, possibly combined >> with OPTIONS, then a client should be able to make available all of >> the state transitions, as well as be able to create properly >> formatted, though not necessarily populated, payloads." >> >> I think it's quite possible to create a client that can seek a simple >> goal if you use a limited collection of rel values to decorate the >> available state-transitions and teach the client how to make decisions >> amongst the limited collection of rel values. It would also be >> important for the client to understand any input options/requirements, >> too. >> >> I think some useful bots or appliance-type applications can be built >> in this way, esp. since the REST constraints greatly simplify the >> details of machine-to-machine interaction. >> >> mca >> http://amundsen.com/blog/ > > Another approach that I really think merits investigation is CCXML's. It is an XML format for a state machine that processes call control events (call being offered, answer call, call hung up, etc.). A CCXML hypermedia processor interacts with the underlying platform via asynchronous events. Events it receives advance the state machine. Event handlers processed on transitions send events back down to the platform, run local javascript, or cause page transitions via GET and POST. 
> > CCXML is completely machine driven -- no human guiding the processing. But the hypermedia document tells the CCXML engine how to map local events into network requests. Those requests yield other documents with another mapping. > > This model isn't that different from an HTML browser when you think about it. The browser receives keyboard and mouse events (or higher level events like "onchange") and sends "messages" down to an underlying platform to repaint the screen. Certain events cause a page transition to occur. > > My point is that you don't need to "replace the human" if you look at hypermedia document processing this way. You just need to map the information in the document to the client's domain model. That's what the "semantic info" is really doing -- but where people are having problems is that they only map the information one way (down to the platform). For example, microformats map the generic HTML tags to another domain model understood by certain clients. But they usually don't include a mapping from platform events to actions/event handlers in the markup. Filling in this gap facilitates much more sophisticated machine-driven processing. > > Regards, > > Andrew
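Andrew's CCXML point can be illustrated with a toy state machine whose transitions come entirely from a document, so processing is event-driven with no human in the loop. The states, event names, and `advance` function are invented for illustration; a real CCXML page would also attach event handlers and network requests to the transitions.

```python
# Toy document-driven state machine in the spirit of CCXML.
# DOC stands in for a fetched hypermedia document:
# state -> {platform event -> next state}.
DOC = {
    "idle":      {"call.offered": "ringing"},
    "ringing":   {"call.answered": "connected", "call.hangup": "idle"},
    "connected": {"call.hangup": "idle"},
}

def advance(state, event):
    """Advance the machine per the document; events the document does
    not mention in the current state leave the state unchanged."""
    return DOC.get(state, {}).get(event, state)
```

The engine's logic is generic; swapping in a different document yields a different application, which is the hypermedia-as-engine-of-state idea in miniature.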
Hi, I recently wrote an entry on how to do a photo printing service RESTfully. That is: how to give a service access to a subset of your photos securely, whilst sticking to web architecture. It seems to be a very easy thing to do - conceptually at least - requiring just one new relation. http://blogs.sun.com/bblfish/entry/sketch_of_a_restful_photo So could it be that I have missed something? Anyone find any flaws in this? Henry Social Web Architect Sun Microsystems Blog: http://blogs.sun.com/bblfish
António Mota wrote: > > You put things in my mouth, intentionally or not, that I never said. > And doing so, you contradict yourself several times and you're > turning this discussion into a series of personal insults. That you > don't want to discuss things I said is up to you, you don't have to > answer them anyway. > Where have I contradicted myself? What I'm doing is pointing out to you that your attitude needs adjustment, unless your intention is to come across as a troll. This is an observation, not an insult, since obviously English is your second language. > > Bottom line on this thread, my point is that a REST Client does not > have to be an HTTP Client. So a thread that started as "Generic REST > client" should have been named "Generic HTTP client for REST". What I > did was to point that out. > But that isn't right, as I pointed out in response. For example, curl is not a generic HTTP client, since if you enter an FTP URI it will work. A generic REST client can't be limited to being a generic HTTP client, since some URIs in a REST system may very well use different protocols. > > I could comment on some of your points - with which in some cases I agree, > as I said in the earlier post - and in some cases clarify what I > said or what you say I said (which is not the same), but it's visible > that you're not interested in coming down from your high horse and > you just dismiss as "trolling" anything that doesn't fit your way of > thinking. > Yes, and this has been your response to my posts since you joined this list -- that I must have something wrong with me, because I attempt to correct your mistakes. Your obstinacy toward responses from myself and others is hardly an example of me being on any sort of horse. > > Note that you started your *first* response to a post of mine by > saying > I've been responding to your posts for months. 
You get huffy with me every time I do, so I checked your posts in threads I didn't participate in, and found the same attitude towards those helping you that I myself experienced, off and on over the past several months. > > When I refer to REST in my post I was referring to the REST community, > from which the members of this list are a subset, and as such I was > referring the necessity of having a community-driven resource, such as > a wiki or such, where the community could agree on those "formal" > definitions (even if those definitions are just a copy&paste from > Fielding dissertation excerpts), and could be used as reference by > everybody, specially by new-comers. > Or did you forget saying that? REST always and only refers to the architectural style described by Dr. Fielding's dissertation. Notice how you complain about the entire community on a regular basis? Did you forget complaining that REST needs a wiki or something, despite such a wiki already existing? That response of yours from a little while back, plus your recurring complaint about responses steeped in references to said thesis, is why I prefaced my last response with a note about what this group's purpose is. > > However, regrettably, judging by several conversations on this list, > it seems that the REST community likes to cultivate some sort of > obscurantism at the level of concepts, and like to avoid all the > practical questions of developing software based on the REST style. > There you went again. You didn't immediately understand REST, which must be a failure of the group, because we deliberately cultivate "obscurantism" instead of answering questions. No, we assume you've read the thesis, and searched the list archives for answers, before asking questions. As I've said before, REST is not easy to learn. This doesn't mean the answers you're given are deliberately meant to confuse you. 
> > Eric, don't take it wrong but I read your post and sincerely I think > "what does this have to do with what?" Take this for example: > > > In a RESTful system, once a user has completed their series of > > application interactions, a history of the steady-states is > > contained in the browser history. > > What browser (and what user)? We have a RESTful (almost) > infrastructure that we use in order to put our different software > modules communicating with each other, sometimes using HTTP, other times JMS > or IMAP... > If you can't extrapolate "browser" into "client" then I don't know how to help you. If one uses curl, then the history is right there in the shell history -- scroll up. The nature of a REST application is that each request and response may be logged. What client is used to run that REST application is simply not relevant. > > And the rest of your post is similar to this quote, and I don't > understand what it has to do with "application" and "application > state" in the realm of a RESTful-based system (not an HTTP-based > system). > Yes, your failure to understand my post is obviously because I don't know what I'm talking about, or am deliberately trying to confuse you. I really don't understand how you fail to see that your responses may be considered rude. The particular client is irrelevant, and does not change the definition of "REST application" as "what the user intends to accomplish," and it is also irrelevant whether that user is a person or a machine -- application means the same thing. > > There are things that we can extrapolate from HTTP to a more general > level in order to fit other protocols, like we did with some HTTP > headers that we use generically, but not concepts like "browser > history". > If you're using a generic interface, then the protocol doesn't matter, and it's simple to extrapolate "browser history" into "a log of the requests made and the responses received" regardless of client. 
Since your response came as fast as you could type it, pardon me for assuming that you didn't make any effort to understand what I wrote, first. That's exactly what trolls do, which is why I used that word. If you don't want to be accused of trolling, then please, take the time to read and re-read peoples' responses until you do understand, instead of immediately finding fault and rushing to make rude posts in response. Or, ask a question or two at a time, instead of a wholesale disassembly of an entire response for the purpose of pointing out its shortcomings as relates to your particular needs. > > So maybe I'm contradicting myself regarding the conceptual/practical > dichotomy I referred to in another post, but concepts like "application" > and "application state" have to be formally defined at the most > abstract level possible, so they can be applied on the ground. > But they are formally defined, unless you don't consider a thesis which describes these terms as "formal". In your next reply to that thread, you seemed surprised to find that I was right, "application" and "application state" and "transitional state" are all defined in black and white in the thesis. Believe it or not, these terms are a regular topic of discussion here. In fact, "What does application state mean?" is a FAQ here, because it is central to understanding REST, where 'S' stands for 'State'. If Roy's thesis isn't clear enough, then search the archives of this list, or check the wiki, before complaining that terms like "transitional state" were just coined recently, for the purpose of confusing newbies. They're discussed all the time, that was just the first thread about it since you joined. Again, it seemed you were more interested in criticizing my answer than understanding it, since you immediately responded with criticism. Which, after several occurrences, some not involving me at all, led me to ask if you are a troll. 
> > Now, if you read my previous posts on this thread, where did I try > to redefine REST, make snarky comments, criticize answers, > cherry-pick, whatever? > Does this response jog your memory? I could post plenty of examples of everything I have accused you of, including from threads I didn't participate in. I haven't flamed anybody anywhere for over a year, it's hardly something I do on a whim, or without reason. If you don't like how you come across to people, then change the nature of your posts. People here are genuinely trying to help you understand REST, and your response is to continually criticize the help you're given, and whine about how the entire group is out to get you. Maybe that's why many of your questions are ignored? I'm just suggesting that perhaps that's *your* problem, not everyone else's failing. Because it seems like you're the one on a high horse. -Eric
Gee, am I supposed to answer this? I think now you've got me, because if I do, I'm trolling because I don't accept what others tell me. If I don't, I'm just a nut-case that lacks arguments to maintain a discussion, or you've just convinced me how low I am and I don't have the guts to admit it. However the truth is, I'm working 10 hours a day including weekends, and part of that work is even building a RESTful infrastructure on our system. That, imagine that, has connectors that are not HTTP. I know that is against your bible, but as the other saying goes, "it works in practice, but will it work in theory"? I don't want to waste time in a flame war that we both know will lead nowhere. If I wanted a flame war, which I don't, I'd have to say that your position until now is basically "I've studied REST for years now, you haven't, so you have to accept what I tell you without questioning it". I think that makes you some sort of high priest, perhaps? But now, even worse, you assume not only the role of high priest but also the role of rest-discuss police, telling me what I can do and what I can't. More like a political police, actually, since you took the trouble to read all my posts in order to quote (out of context) some of them here. How's that for cherry-picking? Gee, I hope you won't scour the entire net looking for my posts, you may even find some saying that VisualBasic was a good thing... In this post again you interpret things I say in a wrong way. I even assume that's probably my fault because, as you said, English is not my main language; actually I learned English after I learned Cobol. But that doesn't give you the right to extrapolate from what I said to what you say I've said. When I question answers, and say "why that and not this", it's not criticizing, it's not trying to redefine, it's not trolling, it's the way I have to learn things, to try to understand things, instead of just accepting them as fact "just because it's like that". 
And I've been learning things this way, especially in IT, for a long time. And if sometimes I appear to be offensive, as I said in another post I sometimes use irony and imagery as figures of speech to emphasize some point. It is not an uncommon thing to do, be it in literature or in newspapers or blogs or other media. There is nothing personal in that, I assure you... If you want to have a proper discussion on the subjects you mention below it will be my pleasure to do so: why I think you contradict yourself a lot, why I question your opinion that a text has to be taken literally and should not be a subject of interpretation, and a lot more things you say that for me are completely, how should I say it in English, nonsense? And also, of course, why on some other points I agree with you, and on others I changed my mind because you pointed me to the right point, like others on this list have. If not, I don't really care, but let me assure you also that I'm not intimidated by your name-calling, by your tone of superiority and disdain, or by your fascist practice of trying to find in all my old posts something that can, somehow, incriminate me in the eyes of the list participants. Eric J. Bowman wrote: > António Mota wrote: > >> You put things in my mouth, intentionally or not, that I never said. >> And doing so, you contradict yourself several times and you're >> turning this discussion into a series of personal insults. That you >> don't want to discuss things I said is up to you, you don't have to >> answer them anyway. >> >> > > Where have I contradicted myself? What I'm doing is pointing out to > you that your attitude needs adjustment, unless your intention is to > come across as a troll. This is an observation, not an insult, since > obviously English is your second language. > > >> Bottom line on this thread, my point is that a REST Client does not >> have to be an HTTP Client. 
So a thread that start as "Generic REST >> client" should had been named "Generic HTTP client for REST". What I >> did was to point that out. >> >> > > But that isn't right, as I pointed out in response. For example, curl > is not a generic HTTP client, since if you enter an FTP URI it will > work. A generic REST client can't be limited to being a generic HTTP > client, since some URIs in a REST system may very well use different > protocols. > > >> I could comment some of your points - to which in some cases I agree >> as I said in the earlier post - and in some cases clarify what I >> said or what you say I said (which is not the same), but it's visible >> that you're not interested in coming down from your high-horse and >> you just dismiss as "trolling" anything that doesn't fit your way of >> thinking. >> >> > > Yes, and this has been your response to my posts since you joined this > list -- that I must have something wrong with me, because I attempt to > correct your mistakes. Your obstinance to responses from myself and > others is hardly an example of me being on any sort of horse. > > >> Note that you started your *first* response to a post of mine by >> saying >> >> > > I've been responding to your posts for months. You get huffy with me > every time I do, so I checked your posts in threads I didn't > participate in, and found the same attitude towards those helping you > that I myself experienced, off and on over the past several months. > > >> When I refer to REST in my post I was referring to the REST community, >> from which the members of this list are a subset, and as such I was >> referring the necessity of having a community-driven resource, such as >> a wiki or such, where the community could agree on those "formal" >> definitions (even if those definitions are just a copy&paste from >> Fielding dissertation excerpts), and could be used as reference by >> everybody, specially by new-comers. >> >> > > Or did you forget saying that? 
REST always and only refers to the > architectural style described by Dr. Fielding's dissertation. Notice > how you complain about the entire community on a regular basis? Did you > forget complaining that REST needs a wiki or something, despite such a > wiki already existing? That response of yours from a little while back, > plus your recurring complaint about responses steeped in references to > said thesis, is why I prefaced my last response with a note about what > this group's purpose is. > > >> However, regrettably, judging by several conversations on this list, >> it seems that the REST community likes to cultivate some sort of >> obscurantism at the level of concepts, and like to avoid all the >> practical questions of developing software based on the REST style. >> >> > > There you went again. You didn't immediately understand REST, which > must be a failure of the group, because we deliberately cultivate > "obscurantism" instead of answering questions. No, we assume you've > read the thesis, and searched the list archives for answers, before > asking questions. As I've said before, REST is not easy to learn. > This doesn't mean the answers you're given are deliberately meant to > confuse you. > > >> Eric, don't take it wrong but I read your post and sincerely I think >> "what does this has to do with what?" Take this for example: >> >> >>> In a RESTful system, once a user has completed their series of >>> application interactions, a history of the steady-states is >>> contained in the browser history. >>> >> What browser (and what user)? We have a Restfull (almost) >> infrastructure that we use in order to put our different software >> modules communicating with each other, sometimes using HTTP, other JMS >> or IMAP... >> >> > > If you can't extrapolate "browser" into "client" then I don't know how > to help you. If one uses curl, then the history is right there in the > shell history -- scroll up. 
The nature of a REST application is that > each request and response may be logged. What client is used to run > that REST application is simply not relevant. > > >> And the rest of your post is similar to this quote, and I don't >> understand what it has to do with "application" and "application >> state" in the realm of a RESTfull based system (not a HTTP based >> system). >> >> > > Yes, your failure to understand my post is obviously because I don't > know what I'm talking about, or am deliberately trying to confuse you. > I really don't understand how you fail to see that your responses may > be considered rude. The particular client is irrelevant, and does not > change the definition of "REST application" as "what the user intends > to accomplish," and it is also irrelevant whether that user is a person > or a machine -- application means the same thing. > > >> There are things that we can extrapolate from HTTP to a more general >> level in order to fit other protocols, like we did with some HTTP >> headers that we use generically, but not concepts like "browser >> history". >> >> > > If you're using a generic interface, then the protocol doesn't matter, > and it's simple to extrapolate "browser history" into "a log of the > requests made and the responses received" regardless of client. Since > your response came as fast as you could type it, pardon me for assuming > that you didn't make any effort to understand what I wrote, first. > That's exactly what trolls do, which is why I used that word. > > If you don't want to be accused of trolling, then please, take the time > to read and re-read peoples' responses until you do understand, instead > of immediately finding fault and rushing to make rude posts in > response. Or, ask a question or two at a time, instead of a wholesale > disassembly of an entire response for the purpose of pointing out its > shortcomings as relates to your particular needs. 
> > >> So maybe I'm contradicting myself regarding the conceptual/practical >> dichotomy I referred in other post, but concepts like "application" >> and "application state" have to be formally defined at the most >> abstract level as possible, so they can be applied on the ground. >> >> > > But they are formally defined, unless you don't consider a thesis which > describes these terms as "formal". In your next reply to that thread, > you seemed surprised to find that I was right, "application" and > "application state" and "transitional state" are all defined in black > and white in the thesis. Believe it or not, these terms are a regular > topic of discussion here. > > In fact, "What does application state mean?" is a FAQ here, because it > is central to understanding REST, where 'S' stands for 'State'. If > Roy's thesis isn't clear enough, then search the archives of this list, > or check the wiki, before complaining that terms like "transitional > state" were just coined recently, for the purpose of confusing newbies. > They're discussed all the time, that was just the first thread about it > since you joined. > > Again, it seemed you were more interested in criticizing my answer than > understanding it, since you immediately responded with criticism. > Which, after several occurences, some not involving me at all, led me to > ask if you are a troll. > > >> Now, if you read my previous posts on this thread, where did I tried >> to redefine REST, made snarky comments, criticizing answers, >> cherry-picking whatever? >> >> > > Does this response jog your memory? I could post plenty of examples of > everything I have accused you of, including from threads I didn't > participate in. I haven't flamed anybody anywhere for over a year, it's > hardly something I do on a whim, or without reason. If you don't like > how you come across to people, then change the nature of your posts. 
> People here are genuinely trying to help you understand REST, and your > response is to continually criticize the help you're given, and whine > about how the entire group is out to get you. > > Maybe that's why many of your questions are ignored? I'm just > suggesting that perhaps that's *your* problem, not everyone else's > failing. Because it seems like you're the one on a high horse. > > -Eric >
On Oct 13, 2009, at 5:57 PM, António Mota wrote: > That, imagine that, has connectors that are not HTTP. António, which connectors are you using? Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
At the moment we are using an HTTP, an IMAP, a JMS and an IntraVM connector. I think at some point we had more (at least a JCR one, I think) but we dropped them along the way. We'll also drop the IntraVM one in favour of the JMS one, using its VM protocol. This is a work in progress, of course, so it probably has its flaws and some things could probably be improved from a REST point of view, but the connectors are stable and working, meaning we have the same uniform interface accessing the same resources, using the same URIs and receiving the same responses, on all 4 connectors. Cheers. Jan Algermissen wrote: > > On Oct 13, 2009, at 5:57 PM, António Mota wrote: > >> That, imagine that, has connectors that are not HTTP. > > António, > which connectors are you using? > > Jan > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > >
On Oct 14, 2009, at 10:52 AM, António Mota wrote: > At the moment we are using a HTTP, a IMAP, a JMS and a IntraVM > connector. I think at some point we had more (at least a JCR one I > think) but we drop it along the way. We'll also drop the IntraVM one > in favour of the JMS one by using his VM protocol. > > This is a work in progress, of course, so I think probably it has > it's flaws and probably some things could be improved from a REST > point of view, but they are stable and working, meaning we have the > same uniform interface accessing the same resources using the same > URI's receiving the same responses in all the 4 connectors. > Out of curiosity: can you provide an example? And, why are you not doing everything via HTTP? Thanks, Jan > Cheers. > > Jan Algermissen wrote: >> >> On Oct 13, 2009, at 5:57 PM, António Mota wrote: >> >>> That, imagine that, has connectors that are not HTTP. >> >> António, >> which connectors are you using? >> >> Jan >> >> -------------------------------------- >> Jan Algermissen >> >> Mail: algermissen@... >> Blog: http://algermissen.blogspot.com/ >> Home: http://www.jalgermissen.com >> -------------------------------------- >> >> >> > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Regarding the 2nd question, and without going into specific details (as usual there are NDAs around this), I can say that the reasons are clearly business ones, required by some of the external entities with whom we relate.

Regarding an example, what kind of example do you want? A description of a use case? A description of the components and their interaction? A code example? Again mindful of the NDAs, I'll try to give an overview.

We have internal and external uses for the connectors. Our architecture is built on very modular modules (what else?), akin to OSGi (in spirit, not in implementation) except that they are not hot-deployable. Those modules are just one (or several) normally written Java classes, with their public methods exposed in an interface. No REST up to here.

The classes and public methods that are to be exposed as resources/URIs are then annotated with Jersey and injected into a Resource class (our class, not Jersey's, although it depends a lot on Jersey stuff). This Resource receives the request coming from one of several connectors and dispatches it to the specified injected method. Content negotiation happens here. In the first version we also used Spring Integration, which we had to drop at some point but plan to integrate again in the future; very easy to do, just a question of configuration. A good word to the guys of Spring Integration, with whom I had very interesting discussions in their forum, and thank God for not being easily flammable. And a good word for the guys at Jersey also, who are restless in answering questions. Did I say restless? I meant tireless...

The connectors are responsible for detaching the message from the protocol, dealing with the parameters and headers, finding the right resource and forwarding the message to it, then waiting for the response and sending it back if that's the case. All the connectors have exactly the same interface and behave the same. 
Actually, most of the functionality is in an AbstractConnector that the others extend.

We also have "user-agent" classes, in whose development I wasn't involved; there is one per connector, and I think they should be called "user-agent connectors" instead of just user-agents. They are used in our internal, inter-module communication. For the external ones, our starting point is the connectors I described.

I guess that's it as an overview; I hope it's what you wanted. I don't think I can describe much more than this, especially not the business use cases. Cheers. Jan Algermissen wrote: > > On Oct 14, 2009, at 10:52 AM, António Mota wrote: > >> At the moment we are using a HTTP, a IMAP, a JMS and a IntraVM >> connector. I think at some point we had more (at least a JCR one I >> think) but we drop it along the way. We'll also drop the IntraVM one >> in favour of the JMS one by using his VM protocol. >> >> This is a work in progress, of course, so I think probably it has >> it's flaws and probably some things could be improved from a REST >> point of view, but they are stable and working, meaning we have the >> same uniform interface accessing the same resources using the same >> URI's receiving the same responses in all the 4 connectors. >> > > Out of curiosity: can you provide an example? And, why are you not > doing everything via HTTP? > > Thanks, > Jan > > >> Cheers. >> >> Jan Algermissen wrote: >>> >>> On Oct 13, 2009, at 5:57 PM, António Mota wrote: >>> >>>> That, imagine that, has connectors that are not HTTP. >>> >>> António, >>> which connectors are you using? >>> >>> Jan >>> >>> -------------------------------------- >>> Jan Algermissen >>> >>> Mail: algermissen@... >>> Blog: http://algermissen.blogspot.com/ >>> Home: http://www.jalgermissen.com >>> -------------------------------------- >>> >>> >>> >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... 
> Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > >
On Oct 14, 2009, at 1:55 PM, António Mota wrote: > Regarding the 2nd question, and not entering specific details as as > usual there are nda's around this, I can say that they are clearly > business reasons as required by some of the external entities with > whom > we relate. > > Regarding a example, what kind of example do you want? A description > of > a use case? A description of the components and their interaction? A > code example? Well, again concerned about nda's I'll try to give a > overview. > > We have internal uses and external uses for the connectors. Our > architecture is build on very modular modules (what else?) akind of > OSGi > (in spirit, not in implementation) except they are not hot-deployable. > Those modules are just a (or several) normally written Java class, > with > it's public methods exposed in a interface. No REST up to here. > > The classes and public methods that are to be exposed as resources/ > uri's > are then annotated with Jersey and injected into a Resource class Hmmm - I do not understand where you are using non-HTTP connectors? So far you only describe that you use an HTTP connector (Servlet Container +Jersey). What are you using as IMAP connectors for example? Jan > (our > class, not Jersey's, although it depends a lot on Jersey stuff). This > resource receives the request coming from one of several connectors > and > dispatch the request to the specified injected method. Content > negotiation happens here. In the first version of it we also used > Spring > Integration, that we had to drop at some point but we're planning to > integrate again in the future. Very easy to do that, just a question > of > configuration. A good word to the guys of Spring Integration with > whom I > had very interesting discussions in their forum and thank God for not > being easily flammable. And a good word for the guys at Jersey also > who > are restless answering questions. Did I say restless? I meant > tireless... 
> > The connectors are responsible for detaching the message from the > protocol, deal with the parameters and headers, find the right > resource > and forward the message to it. And wait for the response and send it > back if it's the case. All the connectors have exactly the same > interface and behave the same. Actually, most of the functionality > is in > a AbstractConnector that the others extend. > > We also have "user-agents" classes, in the development of which I > wasn't > involved, but there is one per connector and I think they should be > called "user-agent connectors" instead of just user-agents. They are > used in our internal, inter-module communication. For the external > ones, > our starting point is the connectors I described. > > I guess this is it as a overview. I hope is what you wanted. I think I > cannot describe much more than this, specially not the business use > cases. > > Cheers. > > > > > > Jan Algermissen wrote: >> >> On Oct 14, 2009, at 10:52 AM, António Mota wrote: >> >>> At the moment we are using a HTTP, a IMAP, a JMS and a IntraVM >>> connector. I think at some point we had more (at least a JCR one I >>> think) but we drop it along the way. We'll also drop the IntraVM one >>> in favour of the JMS one by using his VM protocol. >>> >>> This is a work in progress, of course, so I think probably it has >>> it's flaws and probably some things could be improved from a REST >>> point of view, but they are stable and working, meaning we have the >>> same uniform interface accessing the same resources using the same >>> URI's receiving the same responses in all the 4 connectors. >>> >> >> Out of curiosity: can you provide an example? And, why are you not >> doing everything via HTTP? >> >> Thanks, >> Jan >> >> >>> Cheers. >>> >>> Jan Algermissen wrote: >>>> >>>> On Oct 13, 2009, at 5:57 PM, António Mota wrote: >>>> >>>>> That, imagine that, has connectors that are not HTTP. >>>> >>>> António, >>>> which connectors are you using? 
>>>> >>>> Jan >>>> >>>> -------------------------------------- >>>> Jan Algermissen >>>> >>>> Mail: algermissen@acm.org >>>> Blog: http://algermissen.blogspot.com/ >>>> Home: http://www.jalgermissen.com >>>> -------------------------------------- >>>> >>>> >>>> >>> >> >> -------------------------------------- >> Jan Algermissen >> >> Mail: algermissen@... >> Blog: http://algermissen.blogspot.com/ >> Home: http://www.jalgermissen.com >> -------------------------------------- >> >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Jan Algermissen wrote: > Hmmm - I do not understand where you are using non-HTTP connectors? So > far you only describe that you use an HTTP connector (Servlet > Container+Jersey). > > What are you using as IMAP connectors for example? > No, I described the use of ALL our connectors. I said nothing about a servlet container in my post. When I said Jersey I didn't mean Servlet Container+Jersey; I only referred to Jersey, specifically their annotations, and especially > (our class, not Jersey's, although it depends a lot on Jersey stuff) That means I use bits and pieces of Jersey, but *not* their HTTP-specific stuff, like the injected interfaces and contexts and the like. We do use Servlet stuff (the spring.web stuff), but *only* in the HTTP connector. Both the AbstractConnector and the Resource use bits and pieces of Jersey and JAX-RS but, again, *not* the HTTP stuff. And again, our Resource has nothing to do with the Jersey Resource* interfaces and classes; it doesn't implement any of the Jersey or JAX-RS interfaces nor extend any of their classes. It uses some of the Jersey and JAX-RS classes that have nothing to do with HTTP. The IMAP connector is just an IMAP server listener, like the JMS connector is just a JMS listener, that detaches the message/parameters/headers from the protocol-specific messages. These are then treated by the AbstractConnector code, which is protocol-independent, then sent to the Resource, which is protocol-independent, and so on as I described in my other post. So the HTTP connector is at the same level as the others and has the same importance. It is very easy to create other connectors this way; only one method has to be implemented from the AbstractConnector. Of course, the protocol under the connector has to be constrained by the... constraints. If it can't be, you cannot implement that connector. I hope I explained it correctly.
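For what it's worth, the shape of such a setup can be sketched in a few lines of Java. This is only an illustration of the idea described above; all the names (UniformRequest, ResourceDispatcher, etc.) are invented here, not taken from the actual (NDA'd) code:

```java
import java.util.Map;

// Hypothetical names throughout -- a sketch of "one protocol-specific
// method per connector, everything else shared", not the real code.

// A protocol-neutral request/response pair: the connector's job is to
// produce one of these from an HTTP, IMAP, JMS, ... message.
record UniformRequest(String method, String uri,
                      Map<String, String> headers, String body) {}

record UniformResponse(int status, Map<String, String> headers, String body) {}

// The protocol-independent Resource dispatch described in the post.
interface ResourceDispatcher {
    UniformResponse dispatch(UniformRequest request);
}

// Shared connector behaviour lives here; subclasses (HttpConnector,
// ImapConnector, JmsConnector, ...) only implement listen().
abstract class AbstractConnector {
    private final ResourceDispatcher dispatcher;

    AbstractConnector(ResourceDispatcher dispatcher) {
        this.dispatcher = dispatcher;
    }

    // Protocol-independent: hand the detached message to the dispatcher.
    public UniformResponse handle(UniformRequest request) {
        return dispatcher.dispatch(request);
    }

    // The one protocol-specific method: listen for messages, translate
    // them into UniformRequest, call handle(), and send the reply back.
    abstract void listen();
}
```

The point of the sketch is only that the protocol boundary is confined to listen(); everything from there on sees the same uniform interface.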
Hi,
I'm trying to think about a RESTful interface to an order system. Orders can be created, updated, paid, completed, cancelled etc.
Creating an order seems straightforward: POST a description of the order to some Orders collection, and perhaps PUT some further details at the URL of the new order. But now suppose we want to update the order - to confirm that it has been paid, and that the ordered items should now be shipped.
Coming from an OOP perspective, it feels as if we would like to send a "complete" message to the Order object. But "complete" is not a bit of order state - it's an instruction to some system to change the state of an order and make sure a number of side-effects are taken care of into the bargain.
I've seen one purportedly "RESTful" interface, that has operations like this:
POST /Orders/Ord001/Complete
{
shipping: "express"
}
- but this seems wrong ("complete" is a verb, not a noun). Another option might be this:
POST /Orders/Ord001
{
cmd: "complete",
shipping: "express"
}
- which seems reasonable, but is still a bit more like RMI than feels quite right.
The third option I've considered is to say that an Order has associated with it a Completion form, which one can fill in and submit by POSTing to it. Here, a GET to the Order might return a completion-uri (amongst other order details), and we might POST our shipping options to that URI to get the Order completed. But now we are updating the state of one resource (the Completion form) in order to set in progress a chain of side-effects that will include the updating of another resource (the Order itself). It seems a little forced, compared to the RMI-ish example.
Is there any really RESTful alternative I haven't considered?
Best wishes,
Dominic Fox
On Wed, Oct 14, 2009 at 11:55 AM, domfox <dominic.fox@...> wrote: > Creating an order seems straightforward: POST a description of the order to some Orders collection, and perhaps PUT some further details at the URL of the new order. But now suppose we want to update the order - to confirm that it has been paid, and that the ordered items should now be shipped. > [...] > Is there any really RESTful alternative I haven't considered? A payment is usually (and should be) a resource. Whatever economic event "completes" an order will usually be a resource. When it is POSTed, the server can legitimately update the order resource and signal that the ordered items should be shipped. In other words, the server can do whatever it wants in responding to a request.
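To make the "payment is a resource" idea concrete, here is a minimal in-memory sketch (all class names are hypothetical and the wire protocol is elided): the client POSTs a payment to a payments collection, and the server reacts by moving the order along and initiating shipment.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a payment is a resource in its own right, and
// creating it is what drives the order's state forward on the server.
enum OrderStatus { OPEN, PAID, SHIPPED }

class Order {
    final String id;
    OrderStatus status = OrderStatus.OPEN;
    Order(String id) { this.id = id; }
}

// Stands in for the server-side handler behind POST /payments.
class Payments {
    private final Map<String, Order> orders;
    private int nextPaymentId = 1;

    Payments(Map<String, Order> orders) { this.orders = orders; }

    // "POST a payment": returns the URI of the new payment resource.
    // Updating the order and shipping it are server-side side effects.
    String post(String orderId, String shippingOption) {
        Order order = orders.get(orderId);
        order.status = OrderStatus.PAID;
        ship(order, shippingOption);
        return "/payments/" + nextPaymentId++;
    }

    private void ship(Order order, String shippingOption) {
        order.status = OrderStatus.SHIPPED;
    }
}
```

The key point survives the simplification: the client only ever creates a noun (a payment), and the verb-like behaviour ("complete the order") is the server's reaction to it.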
On Oct 14, 2009, at 6:55 PM, domfox wrote:
> Hi,
>
> I'm trying to think about a RESTful interface to an order system.
> Orders can be created, updated, paid, completed, cancelled etc.
>
> Creating an order seems straightforward: POST a description of the
> order to some Orders collection, and perhaps PUT some further
> details at the URL of the new order. But now suppose we want to
> update the order - to confirm that it has been paid, and that the
> ordered items should now be shipped.
Why would the client want to confirm that it has paid? Usually the
seller would initiate shipment once the payment has been made. The
client would just wait for the goods to arrive at her doorstep.
In general, for these kinds of business transactions I think that
POSTing business documents is the way to go. Look at UBL (http://www.oasis-open.org/committees/ubl/
) for examples. Even order changes I think should be done by sending
an OrderChange document to an appropriate resource (e.g. the one that
the server told the client would be the target for order change
requests).
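As a rough illustration of the document-POSTing style (the element names below are simplified and merely UBL-flavoured, not copied from the actual UBL schemas), an order change might look something like:

```xml
<!-- Illustrative only: a simplified, UBL-flavoured order-change document.
     The real UBL OrderChange schema defines its own element names. -->
<OrderChange>
  <ID>Ord001-change-1</ID>
  <IssueDate>2009-10-14</IssueDate>
  <OrderReference>http://example.org/orders/Ord001</OrderReference>
  <Note>Switch shipping to express</Note>
</OrderChange>
```

POSTed to whatever resource the server advertised as the target for order-change requests.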
Jan
>
> Coming from an OOP perspective, it feels as if we would like to send
> a "complete" message to the Order object. But "complete" is not a
> bit of order state - it's an instruction to some system to change
> the state of an order and make sure a number of side-effects are
> taken care of into the bargain.
>
> I've seen one purportedly "RESTful" interface, that has operations
> like this:
>
> POST > /Orders/Ord001/Complete
>
> {
> shipping: "express"
> }
>
> - but this seems wrong ("complete" is a verb, not a noun). Another
> option might be this:
>
> POST > /Orders/Ord001
>
> {
> cmd: "complete",
> shipping: "express"
> }
>
> - which seems reasonable, but is still a bit more like RMI than
> feels quite right.
>
> The third option I've considered is to say that an Order has
> associated with it a Completion form, which one can fill in and
> submit by POSTing to it. Here, a GET to the Order might return a
> completion-uri (amongst other order details), and we might POST our
> shipping options to that URI to get the Order completed. But now we
> are updating the state of one resource (the Completion form) in
> order to set in progress a chain of side-effects that will include
> the updating of another resource (the Order itself). It seems a
> little forced, compared to the RMI-ish example.
>
> Is there any really RESTful alternative I haven't considered?
>
> Best wishes,
> Dominic Fox
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Does the service that accepts orders also support shipping? I would
imagine
POST /Fulfillment
Entity: {
order: http://.../Orders/Ord001
}
with a response:
201 Created
Location: /Fulfillment/Request01
GET /Fulfillment/Request01
200 Ok
Entity: {
order: http://.../Orders/Ord001
status: <in progress>|<completed>
}
-Noah
On Wed, Oct 14, 2009 at 9:55 AM, domfox <dominic.fox@...> wrote:
> Hi,
>
> I'm trying to think about a RESTful interface to an order system. Orders
> can be created, updated, paid, completed, cancelled etc.
>
> Creating an order seems straightforward: POST a description of the order to
> some Orders collection, and perhaps PUT some further details at the URL of
> the new order. But now suppose we want to update the order - to confirm that
> it has been paid, and that the ordered items should now be shipped.
>
> Coming from an OOP perspective, it feels as if we would like to send a
> "complete" message to the Order object. But "complete" is not a bit of order
> state - it's an instruction to some system to change the state of an order
> and make sure a number of side-effects are taken care of into the bargain.
>
> I've seen one purportedly "RESTful" interface, that has operations like
> this:
>
> POST > /Orders/Ord001/Complete
>
> {
> shipping: "express"
> }
>
> - but this seems wrong ("complete" is a verb, not a noun). Another option
> might be this:
>
> POST > /Orders/Ord001
>
> {
> cmd: "complete",
> shipping: "express"
> }
>
> - which seems reasonable, but is still a bit more like RMI than feels quite
> right.
>
> The third option I've considered is to say that an Order has associated
> with it a Completion form, which one can fill in and submit by POSTing to
> it. Here, a GET to the Order might return a completion-uri (amongst other
> order details), and we might POST our shipping options to that URI to get
> the Order completed. But now we are updating the state of one resource (the
> Completion form) in order to set in progress a chain of side-effects that
> will include the updating of another resource (the Order itself). It seems a
> little forced, compared to the RMI-ish example.
>
> Is there any really RESTful alternative I haven't considered?
>
> Best wishes,
> Dominic Fox
>
>
>
Hello. As Jan suggests, there are several ways of modeling this. If you map resources to data entities, you will find your client needing to update individual fields, such as the state of the order: too much detail and exposure. If instead your resource is an order-processing service, you can just post business documents, and the database complexities stay hidden. This possibility is very often (if not always) overlooked. BTW, the OASIS business docs are good in that they are standards, though sometimes they are too complex for little things. But useful, yes. Hope this helps. William. --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Oct 14, 2009, at 6:55 PM, domfox wrote: > > > Hi, > > > > I'm trying to think about a RESTful interface to an order system. > > Orders can be created, updated, paid, completed, cancelled etc. > > > > Creating an order seems straightforward: POST a description of the > > order to some Orders collection, and perhaps PUT some further > > details at the URL of the new order. But now suppose we want to > > update the order - to confirm that it has been paid, and that the > > ordered items should now be shipped. > > Why would the client want to confirm that it has paid? Usually the > seller would initiate shipment once the payment has been made. The > client would just wait for the goods to arive at her doorstep. > > In general, for these kinds of business transactions I think that > POSTing business documents is the way to go. Look at UBL (http://www.oasis-open.org/committees/ubl/ > ) for examples. Even order changes I think should be done by sending > an OrderChange document to an appropriate resource (e.g. the one that > the server told the client would be the target for order change > requests). > > Jan > > ... > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... 
> Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- >
I started to redesign my service towards HATEOAS. The goal is to allow
a robot to navigate through the application without previous knowledge
of the mapping between the URIs and the states of the application. Not
easy, but I will try...
So, please check these links and tell me if I am going in the right direction:
------- Loads all institutions:
http://fgaucho.dyndns.org:8080/arena-http/institution
* notice the "next" link inside the response
* ok, perhaps here I need to inform the supported types of
institutions (sponsors, owners, schools, etc.) in the OPTIONS method..
------- Loads only the competition of type "competition owner":
http://fgaucho.dyndns.org:8080/arena-http/institution?role=PUJ_OWNER
* it is an example, this link is embedded in the previous link...
------- Loads the competitions by "institution owner":
http://fgaucho.dyndns.org:8080/arena-http/competition?institution=CEJUG
* it is an example, this link is embedded in the previous link...
--------
I am also including only relative paths instead of complete URIs..
perhaps that is enough for an alpha release..
* of course there is much more coming up, but if you can give me
some thoughts on the sample URIs above, I can avoid spreading
mistakes all over the upcoming code :)
* the goal is to make it as HATEOAS-compatible as possible :)
it remains in the proof-of-concept phase ... and is now using EclipseLink
and Java EE 6 on GlassFish v3 :)
thanks for your contribution,
Felipe Gaúcho
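A robot client in the spirit described above might navigate purely by link relations. The sketch below assumes a hypothetical XML representation with `<link>` elements and a made-up "by-role" rel; the actual arena-http format may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical response body; the real arena-http representation may
# differ. The point is only that the client reads rels, never URIs.
RESPONSE = """\
<institutions>
  <institution name="CEJUG"/>
  <link rel="next" href="/arena-http/institution?page=2"/>
  <link rel="by-role" href="/arena-http/institution?role=PUJ_OWNER"/>
</institutions>
"""

def links_by_rel(xml_text):
    """Index every <link> element by its rel value so a robot client
    can navigate without hard-coding URI structure."""
    root = ET.fromstring(xml_text)
    return {link.get("rel"): link.get("href") for link in root.iter("link")}

links = links_by_rel(RESPONSE)
# Per the thread, 'next' means 'subsequent page', not drill-down, so
# the client should follow links["next"] only for pagination.
```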
On Oct 15, 2009, at 3:51 PM, Felipe Gaúcho wrote:

> I started to redesign my service towards HATEOAS. The goal is to allow
> a robot to navigate through the application without previous knowledge
> of the mapping between the URIs and the states of the application. Not
> easy, but I will try...
>
> So, please check these links and tell me if I am going in the right
> direction:
>
> ------- Loads all institutions:
> http://fgaucho.dyndns.org:8080/arena-http/institution
>
> * notice the "next" link inside the response

The semantics of 'next' are 'subsequent page', not 'drill down', which is how you seem to be using it.

> * ok, perhaps here I need to advertise the supported types of
> institutions (sponsors, owners, schools, etc.) via the OPTIONS method..

No, not necessarily. But a hint is always good.

> ------- Loads only the competition of type "competition owner":
> http://fgaucho.dyndns.org:8080/arena-http/institution?role=PUJ_OWNER

This is a problem (see 'next' above) since the specific intention of 'Loads only the competition of type "competition owner"' is not communicated by the link rel.

> * this is just an example; the link itself is embedded in the previous response...
>
> ------- Loads the competitions by "institution owner":
> http://fgaucho.dyndns.org:8080/arena-http/competition?institution=CEJUG
>
> * this is just an example; the link itself is embedded in the previous response...
>
> --------
>
> I am also including only relative paths instead of complete URIs..
> perhaps that is enough for an alpha release..

Depends on the media type that defines the link element. Usually it is ok.

> * of course there is much more coming up, but if you can give me
> some thoughts on the sample URIs above, I can avoid spreading
> mistakes all over the upcoming code :)

Instead of application/xml you need to use a media type that allows the client to understand what is really meant by the message.
If you use application/xml then all the client can assume is 'process this as xml' and that does not get you very far :-) HTH, Jan > * the goal is to makes it the most HATEOAS compatible as possible :) > > > it remains in the proof of concept phase ... and now using EclipseLink > and Java EE 6 on Glassfish V3 :) > > thanks for your contribution, > > Felipe Gaúcho > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
> Instead of application/xml you need to use a media type that allows the > client to understand what is really meant by the message. If you use > application/xml then all the client can assume is 'process this as xml' and > that does not get you very far :-) humm.. good hint.. any easy-to-understand example out there? I got your point, and agree, I just need to find out how to implement it..
The following docs should give you some guidance on current "known" link relation values and an agreed process for creating and using custom rel values when you need them: - IANA Link Relations [http://www.iana.org/assignments/link-relations/link-relations.xhtml] - Link Relations and HTTP Header Linking ID [http://tools.ietf.org/html/draft-nottingham-http-link-header-03] mca http://amundsen.com/blog/ 2009/10/15 Felipe Gaúcho <fgaucho@gmail.com>: >> Instead of application/xml you need to use a media type that allows the >> client to understand what is really meant by the message. If you use >> application/xml then all the client can assume is 'process this as xml' and >> that does not get you very far :-) > > humm.. good hint.. any easy-to-understand example out there? > > I got your point, and agree, I just need to find out how to implement it.. > > > ------------------------------------ > > Yahoo! Groups Links > > > >
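A minimal sketch of consuming rel values from an HTTP Link header of the kind the linked draft describes. It ignores q-values, extension parameters, multiple rels per link, and commas inside URIs, so treat it as illustrative only.

```python
import re

def parse_link_header(value):
    """Parse an HTTP Link header (per the http-link-header draft /
    RFC 5988 style) into a {rel: target} map. Minimal sketch only."""
    links = {}
    for part in value.split(","):
        # Each link-value looks like: <URI-Reference>; rel="relation"
        match = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if match:
            target, rel = match.groups()
            links[rel] = target
    return links

header = '</institution?page=2>; rel="next", </institution>; rel="up"'
```

A client built this way keys its behavior off registered (or documented custom) rel values, never off the shape of the URI.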
On Oct 15, 2009, at 4:17 PM, Felipe Gaúcho wrote:

>> Instead of application/xml you need to use a media type that allows
>> the client to understand what is really meant by the message. If you
>> use application/xml then all the client can assume is 'process this
>> as xml' and that does not get you very far :-)
>
> humm.. good hint.. any easy-to-understand example out there?
>
> I got your point, and agree, I just need to find out how to
> implement it..

Sometimes you just need to create a new media type and/or new link relations[1]. If there is 'public relevance' in your use case, your new type might even become a standard type. IOW, don't be afraid to create a new type if you have to (but think long and hard about whether you really have to :-)

Jan

[1] If you can use link relations - they are cheaper than media types.

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
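One concrete payoff of a specific media type over bare application/xml is that the client can dispatch on it. A sketch, with a hypothetical vnd.* type name invented for this example:

```python
# Registry mapping media types to handler functions.
HANDLERS = {}

def handles(media_type):
    """Register a handler function for one specific media type."""
    def register(fn):
        HANDLERS[media_type] = fn
        return fn
    return register

@handles("application/vnd.arena.institution+xml")
def handle_institution(body):
    # A real handler would parse the institution representation here.
    return ("institution", body)

def dispatch(content_type, body):
    """Route a response to a handler keyed on its Content-Type; an
    unknown type fails loudly instead of being guessed at."""
    media_type = content_type.split(";")[0].strip()
    if media_type not in HANDLERS:
        raise ValueError("no handler for " + media_type)
    return HANDLERS[media_type](body)
```

With `application/xml` everything would fall into one generic bucket; the specific type is what lets the client pick the right processing.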
> * ok, perhaps here I need to inform the supported types of > institutions (sponsors, owners, schools, etc.) in the OPTIONS method.. > I would avoid OPTIONS at runtime. It lacks enough interoperable runtime/dev-time information about resources. Moreover, its response is not cacheable. Subbu
+1 on the link type. Also consider changing your media type to application/xhtml+xml or text/html, the latter being tolerant of invalid XML as long as it is valid HTML. Then you can put context around the links that is human readable, but not necessarily machine readable, and look for link tags carrying your rel value.

2009/10/15 Jan Algermissen <algermissen1971@...>
>
> On Oct 15, 2009, at 4:17 PM, Felipe Gaúcho wrote:
>
> >> Instead of application/xml you need to use a media type that allows
> >> the client to understand what is really meant by the message. If you
> >> use application/xml then all the client can assume is 'process this
> >> as xml' and that does not get you very far :-)
> >
> > humm.. good hint.. any easy-to-understand example out there?
> >
> > I got your point, and agree, I just need to find out how to
> > implement it..
>
> Sometimes you just need to create a new media type and/or new link
> relations[1]. If there is 'public relevance' in your use case, your
> new type might even become a standard type. IOW, don't be afraid to
> create a new type if you have to (but think long and hard if you really
> have to :-)
>
> Jan
>
> [1] If you can use link relations - they are cheaper than media types.
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
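The "look for link tags with your rel value" approach can be sketched with Python's stdlib HTML parser. The rel value "institution-by-role" is a made-up custom relation for illustration; the thread recommends documenting or registering such values.

```python
from html.parser import HTMLParser

class RelLinkFinder(HTMLParser):
    """Collect href targets of <a>/<link> elements carrying a given
    rel value, as suggested for text/html representations."""

    def __init__(self, rel):
        super().__init__()
        self.rel = rel
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            a = dict(attrs)
            # rel can hold several space-separated tokens.
            if self.rel in (a.get("rel") or "").split():
                self.hrefs.append(a.get("href"))

# Human-readable context around a machine-findable link:
HTML = ('<p>Institutions owned by a PUJ: '
        '<a rel="institution-by-role" '
        'href="/institution?role=PUJ_OWNER">list</a></p>')
finder = RelLinkFinder("institution-by-role")
finder.feed(HTML)
```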
2009/10/15 Jan Algermissen <algermissen1971@...>
> Sometimes you just need to create a new media type and/or new link
> relations[1]. If there is 'public relevance' in your use case, your
> new type might even become a standard type. IOW, don't be afraid to
> create a new type if you have to (but think long and hard if you really
> have to :-)

Does it make any sense to qualify the link with the verb and the data type?

A client doesn't need to know the content type of a GET, as it can ask for what it wants. But do you think for a PUT or POST, the data type can (should?) be referenced in the link? The Atom link can look like:

<link href="http://example.com/resource" rel="create" type="xml/ubl-invoice"/>

I guess the media types for both POST and PUT should ideally be the same, as they're typically closely related. I was just thinking that if they were different, how would you best communicate that to a client. And it's up to the standard interface to know what you can do with that media type at that link.

I don't see how you could simply have "application/xml" as a media type for most cases, especially for creating or changing resources. It's just too generic. I suppose it's up to the client to "know" what schema "xml/ubl-invoice" is, through some internal mapping. Or, the service can post a resource that provides the mapping.

Regards,

Will Hartung
(willh@...)
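Will's idea of letting the link's type attribute drive the request body can be sketched like this. The link metadata, the supported types, and the `<invoice>` element are all assumptions for the example, not a real UBL mapping.

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical serializers keyed by media type; the XML variant uses a
# made-up <invoice> element, not a real UBL schema.
SERIALIZERS = {
    "application/json": lambda data: json.dumps(data).encode("utf-8"),
    "application/xml": lambda data: ET.tostring(
        ET.Element("invoice", {k: str(v) for k, v in data.items()})),
}

def body_for_link(link, data):
    """Encode the request body in the format the link advertises via
    its type attribute, so the client never hard-codes the format."""
    return SERIALIZERS[link["type"]](data)

# Link metadata mirroring the Atom-style example in the message above.
link = {"href": "http://example.com/resource", "rel": "create",
        "type": "application/json"}
```

If POST and PUT accepted different types, the server could simply expose two links with different type attributes, which answers the "how would you communicate that" question without any out-of-band agreement.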
Wouldn't a prior GET return the material to construct a POST or PUT request? In HTML, a form is returned with semantics the server will understand. The same should likely apply to the other media types.

With application/xml, the semantics are specific to the client and server, basically increasing coupling and reducing adoptability. I might be so bold as to say that application/xml is like "tunneling HATEOAS" in the same way SOAP is tunneling through HTTP.

-Noah

On Thu, Oct 15, 2009 at 10:02 AM, Will Hartung <willh@...> wrote:
> 2009/10/15 Jan Algermissen <algermissen1971@...>
> > Sometimes you just need to create a new media type and/or new link
> > relations[1]. If there is 'public relevance' in your use case, your
> > new type might even become a standard type. IOW, don't be afraid to
> > create a new type if you have to (but think long and hard if you really
> > have to :-)
>
> Does it make any sense to qualify the link with the verb and the data type?
>
> A client doesn't need to know the content type of a GET, as it can ask
> for what it wants.
>
> But do you think for a PUT or POST, the data type can (should?) be
> referenced in the link?
>
> The Atom link can look like:
> <link href="http://example.com/resource" rel="create"
> type="xml/ubl-invoice"/>
>
> I guess the media types for both POST and PUT should ideally be the
> same, as they're typically closely related. I was just thinking if
> they were different, how would you best communicate that to a client.
>
> And it's up to the standard interface to know what you can do with
> that media type at that link.
>
> I don't see how you could simply have "application/xml" as a media
> type for most cases, especially for creating or changing resources.
> It's just too generic.
>
> I suppose it's up to the client to "know" what schema
> "xml/ubl-invoice" is, through some internal mapping. Or, the service
> can post a resource that provides the mapping.
>
> Regards,
>
> Will Hartung
> (willh@...)
On Thu, Oct 15, 2009 at 11:17 AM, Noah Campbell <noahcampbell@...> wrote:
> Wouldn't a prior GET return the material to construct a POST or PUT
> request? In HTML, a form is returned with semantics the server will
> understand. The same should likely apply to the other media types.

For automata it is often not particularly helpful for the server to provide a template for request representations at run-time. The primary purpose of forms/templates is to allow the server to modify its demands on the client over time. However, non-user-agent clients tend to be unable to handle such changing demands gracefully. In my experience, automata work better when the media type of the document that contains a link defines the semantics of the link and what representations are expected.

> With application/xml is specific to client and server, basically
> increasing coupling and reducing adoptability. I might be so bold to
> say that application/xml is like "tunneling HATEOAS" in the same way
> SOAP is tunneling through HTTP.

Are you suggesting that proprietary/non-standard formats should have a specific MIME type, rather than using `application/xml`, because it raises the visibility of the application semantics of the representations? Or that one should avoid creating new formats altogether?

Peter
Will Hartung wrote:
> 2009/10/15 Jan Algermissen <algermissen1971@...>
>
> A client doesn't need to know the content type of a GET, as it can ask
> for what it wants.

Is this always the case? There might be an application context in which a hyperlink should drive a client towards a specific representation.
On Thu, Oct 15, 2009 at 11:29 AM, Peter Williams <pezra@...> wrote:
> Are you suggesting that proprietary/non-standard formats should have a
> specific MIME type, rather than using `application/xml`, because it
> raises the visibility of the application semantics of the
> representations? Or that one should avoid creating new formats
> altogether?

As much as using standard formats is desirable, I think there will always be a need for custom formats. And I think they should have a specific mime type. At a minimum, using a mime type allows a client to detect if something has changed dramatically underneath their feet.

Obviously, a mime type can be silently versioned (i.e. its actual representation has changed, but not its mime type), which can cause issues. But at least the mime type gives some reasonable self-documentation of what is expected. Mapping the type to a representation is up to the client.

Regards,

Will Hartung
(willh@...)
Peter, My suggestion was inline with "raises the visibility of the application semantics of the representations." I'd also suggest that new mime-types not be introduced except when necessary. -Noah On Thu, Oct 15, 2009 at 11:29 AM, Peter Williams <pezra@...>wrote: > On Thu, Oct 15, 2009 at 11:17 AM, Noah Campbell <noahcampbell@...> > wrote: > > > Wouldn't a prior GET return the material to construct a POST or PUT > > request? In HTML, a form is returned with semantics the server will > > understand. The same should likely apply to the other media types. > > For automata it is often not particularly helpful for the server to > provide a template for request representations at run-time. The > primary purpose for forms/templates is to allow the server to modify > it's demands on the client over time. However, non-user-agent clients > tend to be unable to handle such changing demands gracefully. In my > experience, automata work better when the media type of the document > that contains a link defines the semantics of the link and what > representations are expected. > > > With application/xml is specific to client and server, basically > > increasing coupling and reducing adoptability. I might be so bold to > > say that application/xml is like "tunneling HATEOAS" in the same way > > SOAP is tunneling through HTTP. > > Are you suggesting that proprietary/non-standard formats should have a > specific MIME type, rather than using `application/xml`, because it > raises the visibility of the application semantics of the > representations? Or that one should avoid creating new formats > altogether? > > > Peter >
On Thu, Oct 15, 2009 at 11:41 AM, Mike Kelly <mike@...> wrote: > Will Hartung wrote: >> >> 2009/10/15 Jan Algermissen <algermissen1971@mac.com> >> A client doesn't need to know the content type of a GET, as it can ask >> for what it wants. >> > > Is this always the case? > > There might be an application context in which an hyperlink should drive a > client towards a specific representation Yes, of course. The crux, of course, is that any transaction could potentially trigger a round of content negotiation. But also, many systems tend to be more liberal with what they return than what they accept. Easy to see a system the returns both text/html as well as an application specific XML format for a GET (simple filter can simply send the XML in an HTML escaped format as a result, for example), but will out right refuse a text/html type on a POST/PUT. Regards, Will Hartung (willh@...)
Will Hartung wrote:
> On Thu, Oct 15, 2009 at 11:41 AM, Mike Kelly <mike@...> wrote:
>
>> Will Hartung wrote:
>>
>>> 2009/10/15 Jan Algermissen <algermissen1971@...>
>>> A client doesn't need to know the content type of a GET, as it can ask
>>> for what it wants.
>>
>> Is this always the case?
>>
>> There might be an application context in which a hyperlink should drive a
>> client towards a specific representation
>
> Yes, of course.
>
> The crux, of course, is that any transaction could potentially trigger
> a round of content negotiation. But also, many systems tend to be more
> liberal with what they return than what they accept.
>
> Easy to see a system that returns both text/html as well as an
> application-specific XML format for a GET (a simple filter can simply
> send the XML in an HTML-escaped format as a result, for example), but
> will outright refuse a text/html type on a POST/PUT.
>
> Regards,
>
> Will Hartung

Agree that servers are, in general, more liberal in terms of what they provide than what they consume.

I was suggesting that a hyperlink for a GET request could also have the ability to specify the appropriate conneg control data (i.e. Accept header) for a given hyperlink, in the context of a particular application.

If a client to an application supports multiple representations, it may have its own fixed preference; however, there may be situations within the flow of an application where the client preference should be overridden, e.g. linking to example.com/blog specifying Accept: application/atom+xml over text/html from a page for a web browser.
If this is an accepted practice, and without knowing what representations could be relevant to applications over the course of time; does it make sense to ever use negotiated representations for GET requests? Does the distinction between resource and representation have any practical value? Arguably not if this is the case. - Mike
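Mike's per-link conneg control data could look something like this on the client side. The "accept" and "accept-language" link attributes are hypothetical here (they are not standard HTML attributes); they override the client's default preference only for that one link.

```python
# The client's own fixed preference, used when a link says nothing.
CLIENT_DEFAULT_ACCEPT = "text/html"

def headers_for_link(link):
    """Build request headers for following a link: start from the
    client default and let the link's own conneg data override it."""
    headers = {"Accept": CLIENT_DEFAULT_ACCEPT}
    if "accept" in link:
        headers["Accept"] = link["accept"]
    if "accept-language" in link:
        headers["Accept-Language"] = link["accept-language"]
    return headers

# The example from the message: a link to a blog forcing the Atom feed
# even though the browser would normally prefer text/html.
feed_link = {"href": "http://example.com/blog",
             "accept": "application/atom+xml"}
```

This keeps one URI per resource: the application steers the client to a specific representation without minting representation-specific URIs.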
The XInclude spec [1] defines the accept and accept-language attributes for links. I use this quite often. mca http://amundsen.com/blog/ [1] http://www.w3.org/TR/xinclude/#include_element On Thu, Oct 15, 2009 at 16:53, Mike Kelly <mike@...> wrote: > Will Hartung wrote: >> On Thu, Oct 15, 2009 at 11:41 AM, Mike Kelly <mike@...> wrote: >> >>> Will Hartung wrote: >>> >>>> 2009/10/15 Jan Algermissen <algermissen1971@...> >>>> A client doesn't need to know the content type of a GET, as it can ask >>>> for what it wants. >>>> >>>> >>> Is this always the case? >>> >>> There might be an application context in which an hyperlink should drive a >>> client towards a specific representation >>> >> >> Yes, of course. >> >> The crux, of course, is that any transaction could potentially trigger >> a round of content negotiation. But also, many systems tend to be more >> liberal with what they return than what they accept. >> >> Easy to see a system the returns both text/html as well as an >> application specific XML format for a GET (simple filter can simply >> send the XML in an HTML escaped format as a result, for example), but >> will out right refuse a text/html type on a POST/PUT. >> >> Regards, >> >> Will Hartun > > Agree that servers are, in general, more liberal in terms of what they > provide, than what they consume. > > I was suggesting that a hyperlink for a GET request could also have the > same ability to specify the appropriate conneg control data (i.e. Accept > header) for a given hyperlink, in the context of a particular application. > > If a client to an application supports multiple representations; it may > have it's own fixed preference, however there may be situations within > the flow of an application where the client preference should be > over-ridden. > e.g. linking to example.com/blog specifying Accept: application/atom+xml > over text/html from a page for a web browser. 
> > Without this type of conneg related control mechanism for hyperlinks, we > are forced to treat specific representations that we want to link to > within an application as if they are resources - i.e. give them their > own URI. If this is an accepted practice, and without knowing what > representations could be relevant to applications over the course of > time; does it make sense to ever use negotiated representations for GET > requests? Does the distinction between resource and representation have > any practical value? Arguably not if this is the case. > > - Mike > > > ------------------------------------ > > Yahoo! Groups Links > > > >
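Reading the XInclude accept attributes Mike mentions into request headers might look like this. The accept and accept-language attributes are real parts of the XInclude spec; the document and processing below are a sketch.

```python
import xml.etree.ElementTree as ET

XI = "http://www.w3.org/2001/XInclude"

# Made-up example document using the real xi:include attributes.
DOC = """\
<doc xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="http://example.com/blog"
              accept="application/atom+xml"
              accept-language="en"/>
</doc>
"""

def include_requests(xml_text):
    """Map each xi:include element to the (href, headers) pair an
    XInclude processor would use when fetching the inclusion target."""
    requests = []
    for inc in ET.fromstring(xml_text).iter("{%s}include" % XI):
        headers = {}
        if inc.get("accept"):
            headers["Accept"] = inc.get("accept")
        if inc.get("accept-language"):
            headers["Accept-Language"] = inc.get("accept-language")
        requests.append((inc.get("href"), headers))
    return requests
```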
mike amundsen wrote: > The XInclude spec [1] defines the accept and accept-language > attributes for links. I use this quite often. > > mca > http://amundsen.com/blog/ > > [1] http://www.w3.org/TR/xinclude/#include_element > Ah, nice one - thanks Mike :) Unfortunately there is no plan to include such an attribute in HTML5 hyperlinks, so in practical terms I think there is still a question mark over this. Apparently conneg is a "failed mechanism that doesn't work in practice".. the fact that HTML and browsers would appear to be key contributors to this situation seems to be lost on them! :) - Mike
I raised a bug with the HTML5 spec with the hope of finding a relatively simple solution to this issue: http://www.w3.org/Bugs/Public/show_bug.cgi?id=7697 - Mike Mike Kelly wrote: > mike amundsen wrote: > >> The XInclude spec [1] defines the accept and accept-language >> attributes for links. I use this quite often. >> >> mca >> http://amundsen.com/blog/ >> >> [1] http://www.w3.org/TR/xinclude/#include_element >> >> > > Ah, nice one - thanks Mike :) > > Unfortunately there is no plan to include such an attribute in HTML5 > hyperlinks, so in practical terms I think there is still a question mark > over this. Apparently conneg is a "failed mechanism that doesn't work in > practice".. the fact that HTML and browsers would appear to be key > contributors to this situation seems to be lost on them! :) > > - Mike > >
Yep, HTML4 & 5 have "type" and "hreflang" which are the closest equivalents [1], but are marked as purely advisory. I've used these in my own clients and treated them the same as accept and accept-language, but that's only in my custom clients. mca http://amundsen.com/blog/ [1] http://www.w3.org/TR/html5/semantics.html#attr-link-type On Thu, Oct 15, 2009 at 17:15, Mike Kelly <mike@...> wrote: > mike amundsen wrote: >> >> The XInclude spec [1] defines the accept and accept-language >> attributes for links. I use this quite often. >> >> mca >> http://amundsen.com/blog/ >> >> [1] http://www.w3.org/TR/xinclude/#include_element >> > > Ah, nice one - thanks Mike :) > > Unfortunately there is no plan to include such an attribute in HTML5 > hyperlinks, so in practical terms I think there is still a question mark > over this. Apparently conneg is a "failed mechanism that doesn't work in > practice".. the fact that HTML and browsers would appear to be key > contributors to this situation seems to be lost on them! :) > > - Mike >
Perhaps then, a collective statement might cause more serious consideration to including these guidelines in the HTML5 spec, and/or encourage browsers to implement the change? - Mike mike amundsen wrote: > Yep, HTML4 & 5 have "type" and "hreflang" which are the closest > equivalents [1], but are marked as purely advisory. I've used these in > my own clients and treated them the same as accept and > accept-language, but that's only in my custom clients. > > mca > http://amundsen.com/blog/ > > [1] http://www.w3.org/TR/html5/semantics.html#attr-link-type > > > On Thu, Oct 15, 2009 at 17:15, Mike Kelly <mike@...> wrote: > >> mike amundsen wrote: >> >>> The XInclude spec [1] defines the accept and accept-language >>> attributes for links. I use this quite often. >>> >>> mca >>> http://amundsen.com/blog/ >>> >>> [1] http://www.w3.org/TR/xinclude/#include_element >>> >>> >> Ah, nice one - thanks Mike :) >> >> Unfortunately there is no plan to include such an attribute in HTML5 >> hyperlinks, so in practical terms I think there is still a question mark >> over this. Apparently conneg is a "failed mechanism that doesn't work in >> practice".. the fact that HTML and browsers would appear to be key >> contributors to this situation seems to be lost on them! :) >> >> - Mike >> >>
Mike: I don't have a bugzilla account, but I'd recommend keeping "type" and "hreflang" as they are and _adding_ optional attributes "accept" and "accept-language." The problem I see is that existing clients will all break if "accept" and "accept-language" are now added w/ a "MUST" constraint. That's just not going to fly, eh? So, the results of adding two more attributes will probably be a "MAY" constraint; which is how "type" and "hreflang" work today. mca http://amundsen.com/blog/ On Thu, Oct 15, 2009 at 17:36, Mike Kelly <mike@...> wrote: > Perhaps then, a collective statement might cause more serious consideration > to including these guidelines in the HTML5 spec, and/or encourage browsers > to implement the change? > > - Mike > > mike amundsen wrote: >> >> Yep, HTML4 & 5 have "type" and "hreflang" which are the closest >> equivalents [1], but are marked as purely advisory. I've used these in >> my own clients and treated them the same as accept and >> accept-language, but that's only in my custom clients. >> >> mca >> http://amundsen.com/blog/ >> >> [1] http://www.w3.org/TR/html5/semantics.html#attr-link-type >> >> >> On Thu, Oct 15, 2009 at 17:15, Mike Kelly <mike@...> wrote: >> >>> >>> mike amundsen wrote: >>> >>>> >>>> The XInclude spec [1] defines the accept and accept-language >>>> attributes for links. I use this quite often. >>>> >>>> mca >>>> http://amundsen.com/blog/ >>>> >>>> [1] http://www.w3.org/TR/xinclude/#include_element >>>> >>>> >>> >>> Ah, nice one - thanks Mike :) >>> >>> Unfortunately there is no plan to include such an attribute in HTML5 >>> hyperlinks, so in practical terms I think there is still a question mark >>> over this. Apparently conneg is a "failed mechanism that doesn't work in >>> practice".. the fact that HTML and browsers would appear to be key >>> contributors to this situation seems to be lost on them! :) >>> >>> - Mike >>> >>> > >
On Thu, Oct 15, 2009 at 1:53 PM, Mike Kelly <mike@...> wrote: > Agree that servers are, in general, more liberal in terms of what they > provide, than what they consume. > > I was suggesting that a hyperlink for a GET request could also have the same > ability to specify the appropriate conneg control data (i.e. Accept header) > for a given hyperlink, in the context of a particular application. > > If a client to an application supports multiple representations; it may have > it's own fixed preference, however there may be situations within the flow > of an application where the client preference should be over-ridden. > e.g. linking to example.com/blog specifying Accept: application/atom+xml > over text/html from a page for a web browser. > > Without this type of conneg related control mechanism for hyperlinks, we are > forced to treat specific representations that we want to link to within an > application as if they are resources - i.e. give them their own URI. If this > is an accepted practice, and without knowing what representations could be > relevant to applications over the course of time; does it make sense to ever > use negotiated representations for GET requests? Does the distinction > between resource and representation have any practical value? Arguably not > if this is the case. I would argue that there is no distinction between resource and representation. If a resource has multiple representations, and they are not equal, then there's a problem. If they're not the same, the options shouldn't be offered up at all. There can certainly be different "views" of a resource, in different formats, but I do not think any of these views should be considered canonical for the resource. The format of the resource is really up to the client to choose. Asking an application to process a JSON formatted request when it can only handle a XML formatted request isn't really useful. 
If a server is going to serve up multiple representations, it will ideally also accept them back. But you can see where this can cause problems, notably with lossy conversions. Consider:

http://example.com/image.png
http://example.com/image.jpg
http://example.com/image.tiff

Some could consider these the same resource, but clearly the JPEG is not the same as the PNG, or, potentially, the TIFF.

I would like to think that GET and PUT should be a commutative operation. For example, you would hope that this is true, when practical:

A = GET http://example.com/resource Accept: application/xml
B = GET http://example.com/resource Accept: application/json
PUT http://example.com/resource Content-Type: application/json $B
C = GET http://example.com/resource Accept: application/xml
A.equals(C) == true

Clearly, though, that won't be the case if you GET and PUT a JPEG; the PNG will likely change due to the fact that JPEG is lossy. But it's hard to argue that the JPEG is not a "representation" of the PNG; it's just not the canonical representation.

It seems to me that it would be better to somehow make it clear whether a returned representation is the actual resource, rather than a projected view of the resource that is, essentially, "read only". For example:

GET http://example.com/resource Accept: image/png — works fine.
GET http://example.com/resource Accept: image/jpeg — fails.
GET http://example.com/resource?view Accept: image/jpeg — works,

because we define the ?view attribute as a concept used to get non-authoritative representations of a resource, and the fact that the client used the ?view attribute "acknowledges" that is what they're getting. Clearly, this can get messy, but that's what I have buzzing around in my head.

Regards,

Will Hartung
(willh@...)
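Will's commutativity property can be checked mechanically for a lossless format. The in-memory sketch below (no real server; the resource is just a dict) uses JSON, where the GET/PUT round trip preserves the canonical state; a lossy format like JPEG would fail the final comparison.

```python
import json

# In-memory stand-in for a server-side resource.
resource = {"name": "order-101", "status": "paid"}

def get_as_json():
    """GET: render the resource state as a JSON representation."""
    return json.dumps(resource, sort_keys=True)

def put_from_json(body):
    """PUT: replace the resource state from a JSON representation."""
    resource.clear()
    resource.update(json.loads(body))

a = get_as_json()   # A = GET ... Accept: application/json
put_from_json(a)    # PUT ... Content-Type: application/json $A
c = get_as_json()   # C = GET ... Accept: application/json
assert a == c       # the round trip changed nothing: JSON is lossless
```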
As it stands @type implies nothing about the outgoing request and simply advises the client on what content-type should be expected from a response - so this kind of proposal would change how they work today. I originally suggested new attributes but apparently adding them to the spec is a costly process. A "SHOULD" constraint might be more appropriate in this instance - Mike mike amundsen wrote: > Mike: > > I don't have a bugzilla account, but I'd recommend keeping "type" and > "hreflang" as they are and _adding_ optional attributes "accept" and > "accept-language." > > The problem I see is that existing clients will all break if "accept" > and "accept-language" are now added w/ a "MUST" constraint. That's > just not going to fly, eh? So, the results of adding two more > attributes will probably be a "MAY" constraint; which is how "type" > and "hreflang" work today. > > mca > http://amundsen.com/blog/ > > > > > On Thu, Oct 15, 2009 at 17:36, Mike Kelly <mike@...> wrote: > >> Perhaps then, a collective statement might cause more serious consideration >> to including these guidelines in the HTML5 spec, and/or encourage browsers >> to implement the change? >> >> - Mike >> >> mike amundsen wrote: >> >>> Yep, HTML4 & 5 have "type" and "hreflang" which are the closest >>> equivalents [1], but are marked as purely advisory. I've used these in >>> my own clients and treated them the same as accept and >>> accept-language, but that's only in my custom clients. >>> >>> mca >>> http://amundsen.com/blog/ >>> >>> [1] http://www.w3.org/TR/html5/semantics.html#attr-link-type >>> >>> >>> On Thu, Oct 15, 2009 at 17:15, Mike Kelly <mike@...> wrote: >>> >>> >>>> mike amundsen wrote: >>>> >>>> >>>>> The XInclude spec [1] defines the accept and accept-language >>>>> attributes for links. I use this quite often. 
>>>>> >>>>> mca >>>>> http://amundsen.com/blog/ >>>>> >>>>> [1] http://www.w3.org/TR/xinclude/#include_element >>>>> >>>>> >>>>> >>>> Ah, nice one - thanks Mike :) >>>> >>>> Unfortunately there is no plan to include such an attribute in HTML5 >>>> hyperlinks, so in practical terms I think there is still a question mark >>>> over this. Apparently conneg is a "failed mechanism that doesn't work in >>>> practice".. the fact that HTML and browsers would appear to be key >>>> contributors to this situation seems to be lost on them! :) >>>> >>>> - Mike >>>> >>>> >>>> >>
I would certainly be in favor of something like this. I think I used to
be skeptical of this, but I have since developed multiple applications
where we needed to provide a link/button to download an alternate
representation of a resource (for example, so that a user can download a
CSV file instead of getting JSON) and have had to use query parameters
where it would have been far easier and nicer to be able to specify an
Accept header.
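A link element along the lines being proposed might look something like this (the accept attribute here is hypothetical and not part of any published HTML spec; URIs are illustrative):

```html
<!-- hypothetical "accept" attribute, analogous to XInclude's -->
<a href="/reports/42" accept="text/csv">Download as CSV</a>
<a href="/reports/42" accept="application/json">Download as JSON</a>
```

A client honouring the attribute would send Accept: text/csv when following the first link, avoiding query-parameter workarounds like ?format=csv.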
Kris
mike amundsen wrote:
>
>
> Mike:
>
> I don't have a bugzilla account, but I'd recommend keeping "type" and
> "hreflang" as they are and _adding_ optional attributes "accept" and
> "accept-language."
>
> The problem I see is that existing clients will all break if "accept"
> and "accept-language" are now added w/ a "MUST" constraint. That's
> just not going to fly, eh? So, the results of adding two more
> attributes will probably be a "MAY" constraint; which is how "type"
> and "hreflang" work today.
>
> mca
> http://amundsen.com/blog/
>
> On Thu, Oct 15, 2009 at 17:36, Mike Kelly <mike@...> wrote:
> > Perhaps then, a collective statement might cause more serious
> consideration
> > to including these guidelines in the HTML5 spec, and/or encourage
> browsers
> > to implement the change?
> >
> > - Mike
> >
> > mike amundsen wrote:
> >>
> >> Yep, HTML4 & 5 have "type" and "hreflang" which are the closest
> >> equivalents [1], but are marked as purely advisory. I've used
> these in
> >> my own clients and treated them the same as accept and
> >> accept-language, but that's only in my custom clients.
> >>
> >> mca
> >> http://amundsen.com/blog/
> >>
> >> [1] http://www.w3.org/TR/html5/semantics.html#attr-link-type
> >>
> >>
> >> On Thu, Oct 15, 2009 at 17:15, Mike Kelly <mike@...> wrote:
> >>
> >>>
> >>> mike amundsen wrote:
> >>>
> >>>>
> >>>> The XInclude spec [1] defines the accept and accept-language
> >>>> attributes for links. I use this quite often.
> >>>>
> >>>> mca
> >>>> http://amundsen.com/blog/
> >>>>
> >>>> [1] http://www.w3.org/TR/xinclude/#include_element
> >>>>
> >>>>
> >>>
> >>> Ah, nice one - thanks Mike :)
> >>>
> >>> Unfortunately there is no plan to include such an attribute in HTML5
> >>> hyperlinks, so in practical terms I think there is still a
> question mark
> >>> over this. Apparently conneg is a "failed mechanism that doesn't
> work in
> >>> practice".. the fact that HTML and browsers would appear to be key
> >>> contributors to this situation seems to be lost on them! :)
> >>>
> >>> - Mike
> >>>
> >>>
> >
> >
>
>
--
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
Will Hartung wrote: > I would argue that there is no distinction between resource and > representation. OK, I disagree with that. There are, obviously, cases where concepts might seem intuitively like they should be a collection of representations when they are actually resources. The example with lossy images; your system will treat them mechanically as completely separate resources, and therefore it makes sense to respect that. That does not speak for every case, though. - Mike
> I would argue that there is no distinction between resource and > representation. If a resource has multiple representations, and they > are not equal, then there's a problem. I think this is an inherent distinction, because there can be a "one-to-many" relationship between a resource and its representations. But maybe that's just semantics. More importantly, how is equality determined?
On Thu, Oct 15, 2009 at 3:01 PM, Mike Kelly <mike@...> wrote: > Will Hartung wrote: >> >> I would argue that there is no distinction between resource and >> representation. > > OK, I disagree with that. > > There are, obviously, cases where concepts might seem intuitively like they > should be a collection of representations when they are actually resources. > > The example with lossy images; your system will treat them mechanically as > completely separate resources, and therefore it makes sense to respect that. > That does not speak for every case, though. Just to be clear. If there is a single representation, then this is certainly not an issue. It's when there are multiple representations that the conflict appears. On a generic web server, image.png and image.jpg are completely separate resources. They can easily be completely different images. Arguably, they have completely different names, so this is OK. But with: http://example.com/image Accept: image/png http://example.com/image Accept: image/jpeg They're arguably the same image, and for many purposes they're identical, but I think it's fair to say that the PNG is "more equal" than the JPEG (assuming it properly represents the source, it could just be a PNG of the JPEG). The PNG is a "better" representation. If they were always read only, then this likely isn't a conflict either. Just the reality of the world rearing its head here. But I hope you can appreciate why I think the commutative property should exist for a read/write resource that supports multiple representations for reading and writing. Regards, Will Hartung (willh@...)
We have similar challenges modeling virtual machines - it doesn't make sense to set "state" to "running" when you don't even know if the thing's going to boot, nor does it make sense to update the status to "shipped" if you don't know the address is parseable. Accordingly "start" and "ship" verbs make much more sense, but how do you advertise and trigger them? IMO the best way to do this is to advertise the possible actions as "links" and have the clients POST to them. Often these will be empty posts, but if you need to parametrise the verb (for example, "resize" for a storage device doesn't make sense without a new "size" parameter) then you can just submit it as a web form. That way you're not inventing some new REST microformat (for want of a better term). Sam On Thu, Oct 15, 2009 at 5:29 AM, William Martinez Pomares <wmartinez@...> wrote: > Hello. > As Jan suggests, there are several ways of modeling this. > If you think of resources mapped to data entities, then you will find your > client needing to update fields and such like the state of the order. Too > much detail and exposure. > > If instead, your resource is an order processing service, you can just post > business documents and the database complexities are hidden. This > possibility is very often (if not always) overlooked. > BTW, OASIS bizz docs are good in the way of they being standards, still > sometimes they are too complex for little things. But useful, yes. > > Hope this helps. > > William. > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> > wrote: > > > > > > On Oct 14, 2009, at 6:55 PM, domfox wrote: > > > > > Hi, > > > > > > I'm trying to think about a RESTful interface to an order system. > > > Orders can be created, updated, paid, completed, cancelled etc. > > > > > > Creating an order seems straightforward: POST a description of the > > > order to some Orders collection, and perhaps PUT some further > > > details at the URL of the new order. 
But now suppose we want to > > > update the order - to confirm that it has been paid, and that the > > > ordered items should now be shipped. > > > > Why would the client want to confirm that it has paid? Usually the > > seller would initiate shipment once the payment has been made. The > > client would just wait for the goods to arrive at her doorstep. > > > > In general, for these kinds of business transactions I think that > > POSTing business documents is the way to go. Look at UBL ( > http://www.oasis-open.org/committees/ubl/ > > ) for examples. Even order changes I think should be done by sending > > an OrderChange document to an appropriate resource (e.g. the one that > > the server told the client would be the target for order change > > requests). > > > > Jan > > > > > ... > > -------------------------------------- > > Jan Algermissen > > > > Mail: algermissen@... > > Blog: http://algermissen.blogspot.com/ > > Home: http://www.jalgermissen.com > > --------------------------------------
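Sam's "advertise actions as links" approach might be sketched like this (URIs and element names are illustrative only, not from any spec):

```
GET /vms/42

200 OK
<vm href="/vms/42">
  <state>stopped</state>
  <link rel="start" href="/vms/42/start"/>
  <link rel="resize" href="/vms/42/resize"/>
</vm>

POST /vms/42/start          <- empty body triggers the verb

POST /vms/42/resize         <- parametrised verb, submitted as a form
Content-Type: application/x-www-form-urlencoded

size=20GB
```

The client never constructs these URIs itself; it learns which actions are currently possible from the links in the representation, which is the hypertext-driven part.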
On Oct 15, 2009, at 11:48 PM, Will Hartung wrote:
> Clearly, this can get messy, but that's what I have buzzing around
> in my head.
The best way to approach this is to give every distinct physical
'thing' it's own resource and use a negotiated resource that
- for GET sends the negotiated representation and Content-Location
- for PUT redirects the client based on the Accept header to the
appropriate 'type specific' resource.
Jan
On Oct 16, 2009, at 1:58 AM, Jan Algermissen wrote:
>
> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote:
>
>> Clearly, this can get messy, but that's what I have buzzing around
>> in my head.
>
> The best way to approach this is to give every distinct physical
> 'thing' it's own resource and use a negotiated resource that
>
> - for GET sends the negotiated representation and Content-Location
> - for PUT redirects the client based on the Accept header to the
> appropriate 'type specific' resource.
Doh - this should of course have been
> - for PUT redirects the client to the appropriate 'type specific'
> resource
based on the request body's media type.
Sorry,
Jan
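Putting Jan's corrected rules together, the exchanges would look roughly like this (URIs illustrative):

```
GET /resource
Accept: application/json

200 OK
Content-Type: application/json
Content-Location: /resource.json


PUT /resource
Content-Type: application/xml
<...>

307 Temporary Redirect
Location: /resource.xml
```

GET negotiates and reveals the type-specific URI via Content-Location; PUT is redirected to the type-specific resource matching the request body's media type.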
>
> Jan
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Thu, Oct 15, 2009 at 4:58 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: > >> Clearly, this can get messy, but that's what I have buzzing around in my >> head. > > The best way to approach this is to give every distinct physical 'thing' > it's own resource and use a negotiated resource that > > - for GET sends the negotiated representation and Content-Location > - for PUT redirects the client based on the Accept header to the > appropriate 'type specific' resource. That's an interesting way to look at it I guess. That implies that you rely on names for typing, yes? So: GET /resource.xml Accept: application/json Gets a Redirect to /resource.json? Do you find any difference between GET /resource Accept: application/xml and GET /resource.xml ? Regards, Will Hartung (willh@...)
Jan Algermissen wrote: > On Oct 16, 2009, at 1:58 AM, Jan Algermissen wrote: > >> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: >> >>> Clearly, this can get messy, but that's what I have buzzing around >>> in my head. >>> >> The best way to approach this is to give every distinct physical >> 'thing' it's own resource and use a negotiated resource that >> >> - for GET sends the negotiated representation and Content-Location >> - for PUT redirects the client based on the Accept header to the >> appropriate 'type specific' resource. >> > > Doh - this should of course have been > > - for PUT redirects the client to the appropriate 'type specific' > resource based on the request body's media type. > > > Sorry, > Jan The solution you've described effectively forces representations to pretend they are resources. This practice creates sets of resources that share state - but with no visible indication of this fact; which may be of little consequence to a client or server, but to intermediaries this will present significant problems and must be accounted for by introducing bespoke rules (e.g. cache control). Negotiating and serving all representations from one URI would allow for automatic cache invalidation for all representations upon a successful PUT request - this is a far simpler mechanism because it respects the distinction between resource and representation and increases visibility. - Mike
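The single-URI alternative Mike describes leans on standard HTTP caching machinery rather than bespoke rules; a sketch (URIs illustrative):

```
GET /resource
Accept: application/json

200 OK
Content-Type: application/json
Vary: Accept


PUT /resource
Content-Type: application/xml
<...>

200 OK
```

Because every variant is served from /resource (distinguished only via Vary: Accept), a cache that sees the successful PUT can invalidate all cached variants at once; nothing has to be told separately that /resource.json and /resource.atom share state.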
On Oct 16, 2009, at 2:24 AM, Will Hartung wrote: > On Thu, Oct 15, 2009 at 4:58 PM, Jan Algermissen > <algermissen1971@...> wrote: >> >> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: >> >>> Clearly, this can get messy, but that's what I have buzzing around >>> in my >>> head. >> >> The best way to approach this is to give every distinct physical >> 'thing' >> it's own resource and use a negotiated resource that >> >> - for GET sends the negotiated representation and Content-Location >> - for PUT redirects the client based on the Accept header to the >> appropriate 'type specific' resource. > > That's an interesting way to look at it I guess. > > That implies that you rely on names for typing, yes? Sorry, I do not understand what you ask. Can you re-phrase? > > So: > > GET /resource.xml > Accept: application/json > > Gets a Redirect to /resource.json? make the first GET /resource > > Do you find any difference between > > GET /resource > Accept: application/xml > > and > > GET /resource.xml ? > Sure. resource.xml implies (to us as readers) that the URI identifies an XML document. The negotiated resource should not have a suffix. Jan > Regards, > > Will Hartung > (willh@...) > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Oct 16, 2009, at 10:57 AM, Mike Kelly wrote: > Jan Algermissen wrote: >> On Oct 16, 2009, at 1:58 AM, Jan Algermissen wrote: >> >>> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: >>> >>>> Clearly, this can get messy, but that's what I have buzzing around >>>> in my head. >>>> >>> The best way to approach this is to give every distinct physical >>> 'thing' it's own resource and use a negotiated resource that >>> >>> - for GET sends the negotiated representation and Content-Location >>> - for PUT redirects the client based on the Accept header to the >>> appropriate 'type specific' resource. >>> >> >> Doh - this should of course have been >> >> - for PUT redirects the client to the appropriate 'type specific' >> resource based on the request body's media type. >> >> >> Sorry, >> Jan > > The solution you've described effectively forces representations to > pretend they are resources. What does that mean? > This practice creates sets of resources that > share state - but with no visible indication of this fact; which may > be > of little consequence to a client or server, but to intermediaries > this > will present significant problems and must be accounted for by > introducing bespoke rules (e.g. cache control). I do not see a problem. Can you explain? > > Negotiating and serving all representations from one URI would allow > for > automatic cache invalidation for all representations upon a successful > PUT request - this is a far simpler mechanism because it respects the > distinction between resource and representation and increases > visibility. I do not see how providing different resources for different entities and informing the client (and intermediaries) about them using the Content-Location header decreases visibility. Actually it provides more visibility since the client (and the intermediaries) receive more information about the URI space. Jan > > - Mike > > > > ------------------------------------ > > Yahoo! 
Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Jan Algermissen wrote: > > On Oct 16, 2009, at 10:57 AM, Mike Kelly wrote: > >> Jan Algermissen wrote: >>> On Oct 16, 2009, at 1:58 AM, Jan Algermissen wrote: >>> >>>> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: >>>> >>>>> Clearly, this can get messy, but that's what I have buzzing around >>>>> in my head. >>>>> >>>> The best way to approach this is to give every distinct physical >>>> 'thing' it's own resource and use a negotiated resource that >>>> >>>> - for GET sends the negotiated representation and Content-Location >>>> - for PUT redirects the client based on the Accept header to the >>>> appropriate 'type specific' resource. >>>> >>> >>> Doh - this should of course have been >>> >>> - for PUT redirects the client to the appropriate 'type specific' >>> resource based on the request body's media type. >>> >>> >>> Sorry, >>> Jan >> >> The solution you've described effectively forces representations to >> pretend they are resources. > > What does that mean? > You are treating representations as if they are resources. > >> This practice creates sets of resources that >> share state - but with no visible indication of this fact; which may be >> of little consequence to a client or server, but to intermediaries this >> will present significant problems and must be accounted for by >> introducing bespoke rules (e.g. cache control). > > I do not see a problem. Can you explain? e.g.: PUT /resource.xml what does that mean for /resource.json /resource.atom .. How would intermediaries know that the state of the json and atom resources (that are really just representations of the same resource) have also changed? > >> >> Negotiating and serving all representations from one URI would allow for >> automatic cache invalidation for all representations upon a successful >> PUT request - this is a far simpler mechanism because it respects the >> distinction between resource and representation and increases >> visibility. 
> > I do not see how providing different resources for different entities and > informing the client (and intermediaries) about them using the > Content-Location > header decreases visibility. Actually it provides more visibility since > the client > (and the intermediaries) receive more information about the URI space. How does this work for the above PUT example? I don't see how implying that your resource space is larger than it actually is, by separating out interdependent representations as isolated resources in their own right, increases visibility - quite the opposite, as per the above example. - Mike
On Oct 16, 2009, at 12:19 PM, Mike Kelly wrote: > Jan Algermissen wrote: >> >> On Oct 16, 2009, at 10:57 AM, Mike Kelly wrote: >> >>> Jan Algermissen wrote: >>>> On Oct 16, 2009, at 1:58 AM, Jan Algermissen wrote: >>>> >>>>> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: >>>>> >>>>>> Clearly, this can get messy, but that's what I have buzzing >>>>>> around >>>>>> in my head. >>>>>> >>>>> The best way to approach this is to give every distinct physical >>>>> 'thing' it's own resource and use a negotiated resource that >>>>> >>>>> - for GET sends the negotiated representation and Content-Location >>>>> - for PUT redirects the client based on the Accept header to the >>>>> appropriate 'type specific' resource. >>>>> >>>> >>>> Doh - this should of course have been >>>> >>>> - for PUT redirects the client to the appropriate 'type specific' >>>> resource based on the request body's media type. >>>> >>>> >>>> Sorry, >>>> Jan >>> >>> The solution you've described effectively forces representations to >>> pretend they are resources. >> >> What does that mean? >> > > You are treating representations as if they are resources. > >> >>> This practice creates sets of resources that >>> share state - but with no visible indication of this fact; which >>> may be >>> of little consequence to a client or server, but to intermediaries >>> this >>> will present significant problems and must be accounted for by >>> introducing bespoke rules (e.g. cache control). >> >> I do not see a problem. Can you explain? > > e.g.: > > PUT /resource.xml > > what does that mean for > > /resource.json > /resource.atom > > .. How would intermediaries know that the state of the json and atom > resources (that are really just representations of the same > resource) have also changed? Well, if you PUT some XML to a resource and it responds with 2xx then there will not be any json or atom anymore because you told the server to explicitly replace whatever state the resource has with the XML. 
The server should tell the client that the edit-uri of a resource is the negotiated one (/resource) and then should redirect the PUT based on media type: PUT /resource Content-Type: application/xhtml+xml [...] 307 Location /resource.html Jan > >> >>> >>> Negotiating and serving all representations from one URI would >>> allow for >>> automatic cache invalidation for all representations upon a >>> successful >>> PUT request - this is a far simpler mechanism because it respects >>> the >>> distinction between resource and representation and increases >>> visibility. >> >> I do not see how providing different resources for different >> entities and >> informing the client (and intermediaries) about them using the >> Content-Location >> header decreases visibility. Actually it provides more visibility >> since the client >> (and the intermediaries) receive more information about the URI >> space. > > How does this work for the above PUT example? > > I don't see how implying that your resource space is larger than it > actually is, by separating out interdependent representations as > isolated resources in their own right, increases visibility - quite > the opposite, as per the above example. > > - Mike > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Jan Algermissen wrote: > On Oct 16, 2009, at 12:19 PM, Mike Kelly wrote: > > >> Jan Algermissen wrote: >> >>> On Oct 16, 2009, at 10:57 AM, Mike Kelly wrote: >>> >>> >>>> Jan Algermissen wrote: >>>> >>>>> On Oct 16, 2009, at 1:58 AM, Jan Algermissen wrote: >>>>> >>>>> >>>>>> On Oct 15, 2009, at 11:48 PM, Will Hartung wrote: >>>>>> >>>>>> >>>>>>> Clearly, this can get messy, but that's what I have buzzing >>>>>>> around >>>>>>> in my head. >>>>>>> >>>>>>> >>>>>> The best way to approach this is to give every distinct physical >>>>>> 'thing' it's own resource and use a negotiated resource that >>>>>> >>>>>> - for GET sends the negotiated representation and Content-Location >>>>>> - for PUT redirects the client based on the Accept header to the >>>>>> appropriate 'type specific' resource. >>>>>> >>>>>> >>>>> Doh - this should of course have been >>>>> >>>>> - for PUT redirects the client to the appropriate 'type specific' >>>>> resource based on the request body's media type. >>>>> >>>>> >>>>> Sorry, >>>>> Jan >>>>> >>>> The solution you've described effectively forces representations to >>>> pretend they are resources. >>>> >>> What does that mean? >>> >>> >> You are treating representations as if they are resources. >> >> >>>> This practice creates sets of resources that >>>> share state - but with no visible indication of this fact; which >>>> may be >>>> of little consequence to a client or server, but to intermediaries >>>> this >>>> will present significant problems and must be accounted for by >>>> introducing bespoke rules (e.g. cache control). >>>> >>> I do not see a problem. Can you explain? >>> >> e.g.: >> >> PUT /resource.xml >> >> what does that mean for >> >> /resource.json >> /resource.atom >> >> .. How would intermediaries know that the state of the json and atom >> resources (that are really just representations of the same >> resource) have also changed? 
>> > Well, if you PUT some XML to a resource and it responds with 2xx then > there will not be any json or atom anymore because you told the server > to explicitly replace whatever state the resource has with the XML. > I believe this to be a fundamental misinterpretation of what it means to PUT a representation. The significance of the request is to update the *resource* state by transferring an XML representation - this should not cause the other representations to cease to exist. That doesn't make sense to me if they are simply representations of the resource that has been updated - regardless of which specific representation caused the update. > The server should tell the client that the edit-uri of a resource is > the negotiated one (/resource) and then should redirect the PUT > based on media type: > > PUT /resource > Content-Type: application/xhtml+xml > > [...] > > 307 > Location /resource.html > I understand the mechanism you are describing but, again, I don't understand how an intermediary can establish whether the state of /resource.atom is also supposed to have changed as a result of this request - which it should, by definition, if it is a representation of the same resource. - Mike
On Fri, Oct 16, 2009 at 1:05 PM, Mike Kelly <mike@...> wrote: > > >> PUT /resource.xml > >> > >> what does that mean for > >> > >> /resource.json > >> /resource.atom > >> > >> .. How would intermediaries know that the state of the json and atom > >> resources (that are really just representations of the same > >> resource) have also changed? > > > > Well, if you PUT some XML to a resource and it responds with 2xx then > > there will not be any json or atom anymore because you told the server > > to explicitly replace whatever state the resource has with the XML. > > I believe this to be a fundamental misinterpretation of what it means to > PUT a representation. The significance of the request is to update the > *resource* state by transferring an XML representation - this should not > cause the other representations to cease to exist. That doesn't > make sense to me if they are simply representations of the resource that > has been updated - regardless of which specific representation caused > the update. > Exactly. As another example, if an OCCI <http://www.occi-wg.org/> implementation supports multiple formats for virtual machines (say, OVF, Xen and Hyper-V) then PUTting (or PATCHing <http://tools.ietf.org/html/draft-dusseault-http-patch>) any one of these formats will update the *resource* and with it all of its representations. If the support is export-only (that is, it is able to render the resource to that format but not parse and update the resource from it) then such requests should be rejected with e.g. 415 Unsupported Media Type. Sam
Thank you all for the huge thread. I will consider and read what I can out there. Just a last question: do you have an example of a HATEOAS API? Any public service compliant with HATEOAS I can use to inspire my own design here? On Fri, Oct 16, 2009 at 1:34 PM, Sam Johnston <samj@...> wrote: > > > On Fri, Oct 16, 2009 at 1:05 PM, Mike Kelly <mike@....uk> wrote: > >> >> >> PUT /resource.xml >> >> >> >> what does that mean for >> >> >> >> /resource.json >> >> /resource.atom >> >> >> >> .. How would intermediaries know that the state of the json and atom >> >> resources (that are really just representations of the same >> >> resource) have also changed? >> > >> > Well, if you PUT some XML to a resource and it responds with 2xx then >> > there will not be any json or atom anymore because you told the server >> > to explicitly replace whatever state the resource has with the XML. >> >> I believe this to be a fundamental misinterpretation of what it means to >> PUT a representation. The significance of the request is to update the >> *resource* state by transferring an XML representation - this should not >> cause the other representations to cease to exist. That doesn't >> make sense to me if they are simply representations of the resource that >> has been updated - regardless of which specific representation caused >> the update. >> > > Exactly. As another example, if an OCCI <http://www.occi-wg.org/> implementation supports multiple formats for virtual machines (say, OVF, Xen > and Hyper-V) then PUTting (or PATCHing <http://tools.ietf.org/html/draft-dusseault-http-patch>) > any one of these formats will update the *resource* and with it all of its > representations. > > If the support is export-only (that is, it is able to render the resource > to that format but not parse and update the resource from it) then such requests > should be rejected with e.g. 415 Unsupported Media Type. 
> > Sam > > > -- Looking for a client application for this service: http://fgaucho.dyndns.org:8080/arena-http/wadl
Hi,
I'm designing a small web service interface, trying to make it as RESTful as possible. The content-type served is XML, and I've got a few calls, for example:
GET /people/
GET /people/1/
GET /people/1/friends
etc.
I'm trying to use HATEOAS so I decided to return kind of an "index" of all supported services when the client GETs the root ("/")
This "root" resource is going to be something like this:
<api>
<network href="/capabilities" />
<profile href="/people" />
<friends href="/people/id/friends" />
</api>
My question is, in the "friends" tag, how do I state that the id is a parameter? This value should be generated by the client.
Also, any kind of advice would be really appreciated.
Thanks
On Fri, Oct 16, 2009 at 10:51 AM, pablo.fernandez@...
<fernandezpablo85@...> wrote:
> Hi,
>
> I'm designing a small web service interface, trying to make it as RESTful as possible.
"Do, or do not. There is no try." --yoda
Sorry, I really just couldn't resist.
> The content-type served is XML, and I've got a few calls, for example:
>
> GET /people/
> GET /people/1/
> GET /people/1/friends
>
> etc.
>
> I'm trying to use HATEOAS so I decided to return kind of an "index" of all supported services when the client GETs the root ("/")
>
> This "root" resource is going to be something like this:
>
> <api>
> <network href="/capabilities" />
> <profile href="/people" />
> <friends href="/people/id/friends" />
> </api>
>
> My question is, in the "friends" tag, how do I state that the id is a parameter? This value
> should be generated by the client.
You could use a URITemplate, but I'm wondering if you're really
hypertext driven. Where do they get the "id" in the first place? In
this case, maybe the interaction is:
GET /people/1
returns:
<a rel="friends" href="/people/1/friends">Friends</a>
GET /people/1/friends
In other words, I'm wondering if the "friends" is really a root
resource at all or a resource that's hyperlinked from some person? I
use *some* URITemplates when it's well-defined in the media type, but
so far it's never been an ID, so it causes me to wonder.
--tim
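If the URITemplate route is taken anyway, client-side expansion is trivial. A naive sketch (simple {name} substitution only, nowhere near the full URI Template rules; function name invented):

```python
def expand(template, **params):
    # naive URI Template expansion: replace each {name} with its value
    for name, value in params.items():
        template = template.replace("{" + name + "}", str(value))
    return template

print(expand("/people/{id}/friends", id=1))  # /people/1/friends
```

But as noted above, if the client already had to GET /people/1 to learn the id, the server might as well hand it the friends link directly.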
There are a couple of ways around the resource.xml == resource.json issue.

The first is to make them both synthetic resources on the server. Either they simply convert the state on the fly for the client, using the proper representation, or, as crude as it is, whenever the state changes the server simply creates both representations.

Another solution is that, rather than doing that, they both simply redirect to the root resource.

But that's an interesting issue. It's easy to see how a client can look at the extension, say .xml, and ensure that the Accept header conforms to the proper XML media type. But when it gets the redirect to simply /resource, the client needs to make sure that it keeps the original media type (xml), rather than falling back to some default because "it doesn't know" what "/resource" is sending.

You don't want "/resource.json" to redirect to "/resource" without the proper JSON Accept header; you may well get the wrong representation from the server.

But that's all up to the redirect logic.

Regards,

Will Hartung (willh@...)
Take a look at AtomPub [1] (not Atom) for some examples on how to kickstart
HATEOAS. You're dealing with collections at the root level and AtomPub does
nicely to get the ball rolling.
-Noah
[1] http://tools.ietf.org/html/rfc5023
On Fri, Oct 16, 2009 at 8:31 AM, Tim Williams <williamstw@...> wrote:
> On Fri, Oct 16, 2009 at 10:51 AM, pablo.fernandez@...
> <fernandezpablo85@...> wrote:
> > Hi,
> >
> > I'm designing a small web service interface, trying to make it as RESTful
> as possible.
>
> "Do, or do not. There is no try." --yoda
>
> Sorry, I really just couldn't resist.
>
> > The content-type served is XML, and I've got a few calls, for example:
> >
> > GET /people/
> > GET /people/1/
> > GET /people/1/friends
> >
> > etc.
> >
> > I'm trying to use HATEOAS so I decided to return kind of an "index" of
> all supported services when the client GETs the root ("/")
> >
> > This "root" resource is going to be something like this:
> >
> > <api>
> > <network href="/capabilities" />
> > <profile href="/people" />
> > <friends href="/people/id/friends" />
> > </api>
> >
> > My question is, in the "friends" tag, how do I state that the id is a
> parameter? This value
> > should be generated by the client.
>
> You could use a URITemplate, but I'm wondering if you're really
> hypertext driven. Where do they get the "id" in the first place? In
> this case, maybe the interaction is:
>
> GET /people/1
>
> returns:
> <a rel="friends" href="/people/1/friends">Friends</a>
>
> GET /people/1/friends
>
> In other words, I'm wondering if the "friends" is really a root
> resource at all or a resource that's hyperlinked from some person? I
> use *some* URITemplates when it's well-defined in the media type, but
> so far it's never been an ID, so it causes me to wonder.
>
> --tim
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
On Fri, Oct 16, 2009 at 7:51 AM, pablo.fernandez@...
<fernandezpablo85@...> wrote:
> GET /people/
> GET /people/1/
> GET /people/1/friends
...
> My question is, in the "friends" tag, how do I state that the id is a parameter? This value should be generated by the client.
Why isn't the URI the ID?
If you want the "IDs" of a person's friends, you follow their friends
link. And that gives you a list of their friends. Why do you need to
make a URI from an ID?
> GET /people/1/friends
And this returns
<friends-list>
<friends>
<friend>
<person>
<alias>Rocketman</alias>
<link rel="person" href="/people/123"
type="application/xml+person"/>
</person>
<rating>Best Buddy</rating>
<link rel="friend" href="/people/1/friends/1"
type="application/xml+friend"/>
</friend>
</friends>
<total>1</total>
<start>1</start>
<end>1</end>
<link rel="start" href="/people/1/friends?start=1&amp;size=20"
type="application/xml+friends-list"/>
<link rel="end" href="/people/1/friends?start=1&amp;size=20"
type="application/xml+friends-list"/>
<link rel="next" href="/people/1/friends?start=1&amp;size=20"
type="application/xml+friends-list"/>
<link rel="prev" href="/people/1/friends?start=1&amp;size=20"
type="application/xml+friends-list"/>
</friends-list>
The URI is handed right to you. Why would you need to build it?
Regards,
Will Hartung
(willh@...)
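A client consuming a friends-list like the one above never builds a URI; it just picks links out by rel. A sketch with the Python stdlib (document trimmed and adapted; the start=21 value is invented for illustration):

```python
import xml.etree.ElementTree as ET

doc = """<friends-list>
  <link rel="next" href="/people/1/friends?start=21&amp;size=20"
        type="application/xml+friends-list"/>
</friends-list>"""

def find_link(xml_text, rel):
    # return the href of the first <link> with the given rel, or None
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

print(find_link(doc, "next"))  # /people/1/friends?start=21&size=20
```

The client only ever dereferences what the server handed it; the URI structure stays opaque.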
I haven't tested this, but isn't the *.jpg, *.png, *.bmp all hints to the browser? The browser will actually inspect the headers of the image files and choose the appropriate rendering engine regardless of the indicated media type? Again...I haven't tested this, just remember reading it back in the day. -Noah On Fri, Oct 16, 2009 at 10:07 AM, Will Hartung <willh@...> wrote: > There's a couple ways around the: > > resource.xml == resource.json issue. > > The first, is you make them both synthetic resources on the server. > Either they simply convert the state on the fly to the client, using > the proper representation, or even as crude as whenever the state > changes, the server simply creates both representations. > > Another solution is that rather than doing that, they both simply > redirect to the root resource. > > But that's an interesting issue. > > Because it's easy to see how a client can look at the extension, say > .xml, and ensure that the Accept header conforms to the proper xml > media type. > > But when it gets the redirect to simply /resource, the client would > need to make sure that it keeps the original media type (xml), rather > than falling back to some default because "it doesn't know" what > "/resource" is sending. > > You don't want "/resource.json" to redirect to "/resource" without the > proper json Accept header, you may well get the wrong representation > from the server. > > But, that's all up to the redirect logic. > > Regards, > > Will Hartung > (willh@...) > > > ------------------------------------ > > Yahoo! Groups Links > > > >
No. That leads to MIME type sniffing which is extremely problematic on the web. There are numerous security bugs due to MIME sniffing in IE. None of the Content-xxx headers are hints. They convey metadata of the representation. Subbu On Oct 16, 2009, at 1:47 PM, Noah Campbell wrote: > > > I haven't tested this, but isn't the *.jpg, *.png, *.bmp all hints > to the browser? The browser will actually inspect the headers of > the image files and choose the appropriate rendering engine > regardless of the indicated media type? Again...I haven't tested > this, just remember reading it back in the day. > > -Noah > > On Fri, Oct 16, 2009 at 10:07 AM, Will Hartung <willh@...> > wrote: > There's a couple ways around the: > > resource.xml == resource.json issue. > > The first, is you make them both synthetic resources on the server. > Either they simply convert the state on the fly to the client, using > the proper representation, or even as crude as whenever the state > changes, the server simply creates both representations. > > Another solution is that rather than doing that, they both simply > redirect to the root resource. > > But that's an interesting issue. > > Because it's easy to see how a client can look at the extension, say > .xml, and ensure that the Accept header conforms to the proper xml > media type. > > But when it gets the redirect to simply /resource, the client would > need to make sure that it keeps the original media type (xml), rather > than falling back to some default because "it doesn't know" what > "/resource" is sending. > > You don't want "/resource.json" to redirect to "/resource" without the > proper json Accept header, you may well get the wrong representation > from the server. > > But, that's all up to the redirect logic. > > Regards, > > Will Hartung > (willh@...) > > > ------------------------------------ > > Yahoo! Groups Links > > > > > > >
Hola Pablo.
First, I wouldn't take the URI templates approach. If you base your app on URI juggling, you will lose much flexibility. The rule of thumb is that the client should find a resource without composing its URI piece by piece. /people/{id} should work the same as /12345, where 12345 could be a friends list or a single person.
Second, the IDs of resources are their URIs. That is why it sounds odd to add an ID to a URI. Maybe that ID is the database ID; in that case, remember another rule: try not to expose your whole database schema through URIs and such. Keep the client's view as close to the business as possible (hide the database IDs).
Third, try working from the client's view. That is, in a typical business case, what is your client trying to do? What does it want? What information does it have to work with?
For instance, before requesting friends, you may want to search for a profile. In that case, the URI from the root should point to a search resource (people itself) that, when you GET it, returns either a list of people or a list of forms to search by (name, last name, security number, etc.). You then send another request with the info that best matches your situation.
Once you have the profile URI you are looking for, you can ask for its friends. But note that the friends search is then an "operation" of the profile resource, not of the root.
In another example, if your client has the so-called ID and you want to search directly for its friends, you may have a friends-search resource that accepts that ID as a query parameter. Note this is sent as a query (in the payload if POST, or as a query string in the URI), not as part of the URI path.
Both paths are valid, and there may be more. Check which is the most business-oriented and makes the most sense from the client's point of view, then implement from there.
The "menu" of options you get at the root should contain only the options permitted in that state of the application. If you do not have the famous ID, then friends is not a valid link from there and should not be included in the list. Once you have the URI of a profile, you get a new "menu" that may drop some of the root options and add new ones more related to working with that specific profile. See?
Hope this helps.
William Martinez Pomares.
--- In rest-discuss@yahoogroups.com, "pablo.fernandez@..." <fernandezpablo85@...> wrote:
>
> Hi,
>
> I'm designing a small web service interface, trying to make it as RESTful as possible. The content-type served is XML, and I've got a few calls, for example:
>
> GET /people/
> GET /people/1/
> GET /people/1/friends
>
> etc.
>
> I'm trying to use HATEOAS so I decided to return kind of an "index" of all supported services when the client GETs the root ("/")
>
> This "root" resource is going to be something like this:
>
> <api>
> <network href="/capabilities" />
> <profile href="/people" />
> <friends href="/people/id/friends" />
> </api>
>
> My question is, in the "friends" tag, how do I state that the id is a parameter? This value should be generated by the client.
>
> Also kind of any advice would be really appreciated.
>
> Thanks
>
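The state-dependent "menu" idea in the reply above can be sketched like this (hypothetical helpers, names invented): the root advertises only links that are valid with no prior state, while a profile representation adds the friends link.

```python
def root_links():
    # at the root the client has no profile yet, so no friends link is offered
    return {"network": "/capabilities", "profile": "/people"}

def profile_links(profile_uri):
    # once a concrete profile is in hand, its menu grows
    return {"self": profile_uri, "friends": profile_uri + "/friends"}

print(sorted(root_links()))        # ['network', 'profile']
print(profile_links("/people/1"))  # {'self': '/people/1', 'friends': '/people/1/friends'}
```

The client never needs an id parameter: the friends URI simply appears in the menu once the application is in a state where following it makes sense.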
Hi All,
I am new to REST and trying to implement secure file upload; any suggestions are appreciated.
Thanks
Ala
I'm looking into applying the REST architectural style to a binary network protocol, and I am getting hung up on how to identify server resources in a manner that would be true to the style, probably because I'm used to looking at URIs. For example, would an address-port pair qualify as a resource identifier; assuming one resource per pair? In this case the resource isn't really identified in the *request* explicitly, but would be assumed by the service port number. Is that how it would work, or is there a better/other way? Thanks!
The identifier in ReST is a conceptual mapping between a concept and some entity. There would be nothing wrong with identifying everything with a GUID, provided you document the way a client would go about dereferencing such identifier at runtime. In the case of the web, http uris are to be resolved using DNS and http. In your case, your binary identifiers could identify a server by name, or a server / port combination or anything else, as long as the server is responsible for telling you if dereferencing of the opaque identifier was successful, a.k.a. as long as there is no assumption from the client that the ip / port combination is resolvable / exists. Seb From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of George Ryan Sent: 20 October 2009 00:37 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Applying the REST architecture outside of the web: resource identification I'm looking into applying the REST architectural style to a binary network protocol, and I am getting hung up on how to identify server resources in a manner that would be true to the style, probably because I'm used to looking at URIs. For example, would an address-port pair qualify as a resource identifier; assuming one resource per pair? In this case the resource isn't really identified in the *request* explicitly, but would be assumed by the service port number. Is that how it would work, or is there a better/other way? Thanks!
Goerge, I think your main challenge will end up being one in which you need to have the notion of a uniform interface in your solution. I'd be curious to know how you plan to map that to a binary protocol. The Uri mapping problem is the easy part. Like Sebastien said, as long as the server is responsible for telling you if dereferencing of the opaque identifier was successful, a.k.a. as long as there is no assumption from the client that the ip / port combination is resolvable / exists. ... you should be fine if you follow that tenet. Best, -Dilip On Tue, Oct 20, 2009 at 4:41 AM, Sebastien Lambla <seb@...> wrote: > > > The identifier in ReST is a conceptual mapping between a concept and some > entity. There would be nothing wrong with identifying everything with a > GUID, provided you document the way a client would go about dereferencing > such identifier at runtime. > > > > In the case of the web, http uris are to be resolved using DNS and http. In > your case, your binary identifiers could identify a server by name, or a > server / port combination or anything else, as long as the server is > responsible for telling you if dereferencing of the opaque identifier was > successful, a.k.a. as long as there is no assumption from the client that > the ip / port combination is resolvable / exists. > > > > Seb > > > > *From:* rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > *On Behalf Of *George Ryan > *Sent:* 20 October 2009 00:37 > *To:* rest-discuss@yahoogroups.com > *Subject:* [rest-discuss] Applying the REST architecture outside of the > web: resource identification > > > > > > > I'm looking into applying the REST architectural style to a binary > network protocol, and I am getting hung up on how to identify server > resources in a manner that would be true to the style, probably because I'm > used to looking at URIs. > > For example, would an address-port pair qualify as a resource identifier; > assuming one resource per pair? 
In this case the resource isn't really > identified in the *request* explicitly, but would be assumed by the service > port number. > > Is that how it would work, or is there a better/other way? > > Thanks! > > > > > > > >
Hi all, this is a conf about how to do Social Networks in a RESTful way. So I thought this would interest you. :-) --- There will be a Social Web Camp in Sun Offices in Santa Clara on Monday November 2. It is being hosted by SUN Microsystems and organized by Henry Story and Daniel Appelquist of Vodafone, co-chair of the W3C Social Web XG. Imagine a world where everybody could participate easily in a distributed yet secure social web. In such a world people could place their photos, music, or other content on their web site and give access to some of it to their friends, some to their family, the rest to their colleagues, and some even to the friends of their friends... How can one do this without requiring every participant to create one login for each of their friends' sites? How does one do this in a distributed and flexible manner compatible with web architecture? What issues would need to be solved to make this possible? All topics related to the Social Web will be covered, in a bar-camp style. It will be broad and open to all types of participants. See the subjects up for discussion on the wiki/registration page: http://barcamp.org/SocialWebCamp-Santa-Clara When: Monday, 2nd of November starting 9:00 am ( up to 5pm ) Where: The Auditorium at Sun's Campus, 4030 George Sellon Circle Santa Clara, 95054 California Please forward to interested parties, tweet, blog! Henry Story Social Web Architect Sun Microsystems Blog: http://blogs.sun.com/bblfish
Thanks to you both for replying. > I'd be curious to know how you plan to map that to a binary protocol. Me too! :-) I am sure I will have more questions. > > as long as the server is responsible for telling you if dereferencing of > the opaque identifier was successful, a.k.a. as long as there is no > assumption from the client that the ip / port combination is resolvable / > exists. > > Okay, let's see if I understand. In a (for lack of a better term) "average" client-server relationship, a client would explicitly know that a service runs on a specific IP on a specific port (e.g., ssh on port 22). So would I send an identifier (e.g. some ID) to the server's IP and port, and then the server would map that ID to some IP and port (possibly its own). Then it would respond to the client with the actual IP and port for the service (i.e., a redirect)? Essentially some kind of service discovery.
Assign a person to each of your stereotypes, let them and everyone else know. Still a good idea? Bill willmarpo wrote: > > > Hello. > Reading through all this good material about REST, I find some old time > discussions around. Someone suggested naming things is not so good, but > I love doing so somethings to know what am I referring to. > > So, having all of you as REST fans, I wanted to present a classification > I did two days ago while riding the bus to work. Silly? It may be, but I > guess it helps understand where are we standing in terms of REST > usability and knowledge. > > *API Makers*: I find them everywhere. They have a system, usually not > built thinking on REST, and they want an API created. They usually think > REST is an API making technique or recipe, for the web. > /Subcategories: > / - *URI Jugglers*. This are the ones that think REST is all about > creating URIs, and nothing more. So their discussions are solely focused > on URIs, and their presentations are about URIs definitions. > - *RPCers*. Bad group that think REST is a way to map RPC in disguise > using URIs in a web API. The most of them don't know they speak RPC at all. > - *Exposers*: This type is repeated below. Those are the guys that > think you need to expose things in REST using resources. So REST is an > API for exposing things on the web. > - *CRUDers*: Another repeated group. They think REST is a web api for > CRUD. Simple. > > *Mappers*: This other category may use the API idea, but they actually > thing REST is a representation type and the work to be done is to map > all that is know used to that new type. Interesting? > - *CRUDers*. Again, the idea is that CRUD can be mapped naturally to > HTTP operations, and that > makes it RESTful. > - *HTTPers*. They believe REST is HTTP. Deep enough. > - *Exposers*. Again too. They usually try to map all classes, data > entities, elements into resources, and then call their systems RESTful. 
> * > FAD followers?*: This is a group of the remainders of the types. > Usually, they tend to follow a lead. > - *Standard Haters*: Here you have all those that think Standards are > evil and that REST is an anarchy where you have the freedom to do > whatever you like, so they follow REST doing whatever they want. > - *KISS lovers*. These are the ones that like things to be simple. And > someone told them REST is easy, so they follow doing easy things with > URIs. There are lots of URI jugglers in this group. > - *Servicers*. They think Services is good, and someone told them REST > is a way to do services without SOAP. So they follow. > - *BuzzWorders*. This is a vast majority. They like buzz words, so they > follow REST just because it is cool and all people talk about it. There > are some Buzz creators too, with things like ROA and REST in WOA. No pun > intended on REST-* > > Is there someone I'm missing? Well, yes, probably the group that knows > REST as it actually is and understands it. That may be a one person > group (yes Roy). > > I may not be saying all those beliefs up there are wrong. I'm NOT > saying they are good, at all. > > What do you think? Do you find yourself in any of those groups? > > William Martinez Pomares. > >
Okay, let's see if I understand. In a (for lack of a better term) "average" client-server relationship, a client would explicitly know that a service runs on a specific IP on a specific port (e.g., ssh on port 22). Let's put it in "average" terms. I receive an envelope, and I happen to be distributing mail for my neighbours in my building. The letter got to our building because there was some location information that the postman and the sender agreed upon (the postcode). So all the people in my building get the postman delivering the message to our building. The identifier (the address) gives enough information to the postman (the postman :) ) for delivery to a building. The mail system is a protocol that defines a way to route, in network terms, the mail to buildings. So we got mail in the building. The postman has no idea if the actual name lives in the building. There may be people that moved out, people for which the name has changed (my neighbour John is now called Johanna). While the postman is right in delivering to my address, I may return the mail if it is for an unknown person (a 404). I may return it to Royal Mail notifying the postal service that the person it was addressed to has changed (a 307?). Or I may just put it in the bin because the routing was wrong and it was addressed to the neighbour (a 50x). The point is, while the identifier (the address) correctly pointed to where the server should receive the mail (the destination address in a building), nothing is proven until I (the dispatcher of mail) decide to give an answer on the validity of the destination name existing. Point is, you may have as part of your protocol a way to identify *where to send* the message. It doesn't mean you *know* that the addressee exists. You have to listen to me *the mail router* to tell you if your message was accepted or dispatched to anyone, or if one of the error conditions I described happened.
This is what late binding of resource identifiers is all about: just because you had an identifier doesn't mean it maps to an existing resource, nor does it mean this resource is in a state where it can be interacted with through state transfer. So would I send an identifier (e.g. some ID) to the server's IP and port, and then the server would map that ID to some IP and port (possibly its own). Then it would respond to the client with the actual IP and port for the service (i.e., a redirect)? Essentially some kind of service discovery. Yes, you could model it that way, if your destination is not the one handling the dereferencing of the URI, or if some other server owns it. The important case, IMO, is not so much to handle redirection, as it is to decide on how to handle your recovery model when I reply that the destination doesn't exist anymore (granny from first floor is dead), or you didn't format the message properly (I can only read people names in Egyptian hieroglyphic cartouches if they are actual kings). In all those error conditions, the capability of your client to recover from error conditions will be partly what may make it RESTful or not. S
Hello George. I see you got very good advice in the post. Here are my 2 cents. 1. Not a good idea, to me, to map address-port to a resource. It is too static. 2. Try not to think in terms of URI, but of URI ideas, if you are not using HTTP. You said you have your own protocol. That means the protocol must have a way to map resource IDs to the resources. In the URI standard, you have the domain (which helps HTTP to route the message to the host) plus other URI parts which help the host to find the resource. Same thing here. Your IP address may help your protocol to find the host, and some other ID sections will help that host to find the resource. 3. For example, take this: mp:10.0.0.1:8081:id123 That may be our "URI", where mp is the protocol identifier, then the ip address, the port, and the id of the resource. Your protocol handler will read the string, it knows the protocol to follow is mp, that protocol says to send a "check" command to the ip address and port, for the id123 resource. See how simple? Note that you also need to define the operations of the protocol, cache, gateways and all that other stuff that REST defines. William Martinez. --- In rest-discuss@yahoogroups.com, George Ryan <george.ryan@...> wrote: > > I'm looking into applying the REST architectural style to a binary network > protocol, and I am getting hung up on how to identify server resources in a > manner that would be true to the style, probably because I'm used to looking > at URIs. > > For example, would an address-port pair qualify as a resource identifier; > assuming one resource per pair? In this case the resource isn't really > identified in the *request* explicitly, but would be assumed by the service > port number. > > Is that how it would work, or is there a better/other way? > > Thanks! >
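A protocol handler could take the made-up mp:10.0.0.1:8081:id123 identifier from the message above apart like this (a sketch of that example only; nothing standardized, function name invented):

```python
def parse_mp(identifier):
    # mp:<host>:<port>:<resource-id> -- routing info plus an opaque resource id
    scheme, host, port, resource_id = identifier.split(":", 3)
    if scheme != "mp":
        raise ValueError("not an mp identifier: " + identifier)
    return host, int(port), resource_id

print(parse_mp("mp:10.0.0.1:8081:id123"))  # ('10.0.0.1', 8081, 'id123')
```

As the earlier replies stress, parsing the identifier only tells the client where to send the message; whether id123 actually denotes a live resource is decided by the server at dereference time.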
I want to start utilizing the framework presented in the first half of the thesis to communicate. When trying to describe it in terms of technology, as Roy did[0] I've found that preconceived notions get in the way, so I've tried to craft an analogy that communicates the essence (constraints, desired properties, styles, etc.) without getting bogged down by assumptions. I'd appreciate any feedback on the correctness and clarity. Thanks, --tim [0] - http://roy.gbiv.com/untangled/2008/on-software-architecture ************************** Suppose a customer comes to your "information business" and says, "I have a need for all the information on organic gardening." The customer travels a lot and needs the information available to him for reference when his customers ask gardening questions, but he's frequently in the fields and, so, isn't always technology enabled. Your organization is quite large and information comes from a variety of departments. To make matters worse, you're in the Solutions Architecture side of the business and so you have no real authority to dictate a precise solution - these are 'information engineers' after all, who need room to flex their creativity. You are, however, allowed to define the solution architecture by way of "constraints" on their solution. These "constraints" take the form of: 1) All the information must be together. 2) There must be a Table of Content. 3) All information must have a reference back to the original source. You have done this so often though, that you and these engineers have agreed that these constraints can be grouped together and referenced by a simple name, its architectural style name, instead of enumerating each one every time. This is beneficial because you know that certain constraints, when grouped together, evoke certain properties that are commonly desired by your customer. This allows you to quickly match up your customer's needs with some starting constraints. 
Now, you've previously agreed that constraints 1+2+3 above will be referred to as the Compilation Architectural Style. It turns out that the constraints of the Compilation Style are a good starting point but they don't evoke all the properties that your customer really wants. They want something that's lightweight because they travel, they want something that's easily readable, and they also want something that doesn't require electricity/technology. So you begin with an instance of the Compilation Architectural Style and add some concrete constraints to get you from "style" to a real architecture and evoke some properties specifically desired by your customer. Namely, you add the following: A) Compilation Architectural Style - evokes all properties known by the style B) Information must be on paper - evokes lack-of-technology property C) Information must be printed in Times New Roman - evokes readability property D) Must be in a thin plastic binder - evokes lightweight property So, you pass along the customer order and your solution architecture to the engineers. Because you've chosen to define your solution architecture in terms of "constraints that evoke properties," you're able to objectively reason about them. So, when the engineers come back and say that they'd prefer Helvetica because, being sans-serif, it would save on toner cost, you can reason about how changing this constraint might effective your overall solution architecture. In this case, that level of font-specificity was simply you trying to flex some control where you have none, so you acquiesce. Likewise, the engineers come back and ask that you change constraint D to heavy-weight paper since it'd be a bit lighter - you, again, agree that it still evokes your desired property. You deliver your solution, which makes your customer happy. But then you realize that you ought to capitalize on your latest back and forth with the information engineers. 
So, you go to the engineers and agree to call constraints B+C+D the Paperback Architectural Style. In future requests like the original, this allows you to simply refer to a hybrid (Compilation Architectural Style + Paperback Architectural Style) solution architecture and know that the desired properties will be evoked.
I'm writing a RESTful web service to update content on a mobile device. We are currently using the If-Modified-Since header along with 304 "Not Modified" response codes to ensure that the device does not download the file more often than is absolutely necessary, but I'd like to go a step further and only provide the changed records to the device (this is an XML file FWIW). After combing over the HTTP spec and not finding much on Google I think this might be a valid approach: Client sends a GET request with an If-Range header that specifies the last download date and a Range header that specifies the same. The server could then send back a response of 304 "Not Modified", 206 "Partial Resource" along with the deltas as the body, or a 200 with all of the records as the body. Sample Request Headers: If-Range: Sun, 18 Oct 2009 08:49:37 GMT Range: lastmod=Sun, 18 Oct 2009 08:49:37 GMT Is this approach correct? The HTTP spec suggests that I may define my own custom units for the Range header, but they may not be portable[0]. Is there a standard already in place that I'm missing? -mike [0] http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.12 -- ________________________________ Michael E. Crute http://mike.crute.org God put me on this earth to accomplish a certain number of things. Right now I am so far behind that I will never die. --Bill Watterson
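The plain If-Modified-Since/304 half of this is straightforward to sketch server-side (hypothetical helper, not from the thread; the Range-based delta is the open question):

```python
from email.utils import parsedate_to_datetime

def conditional_get(last_modified, if_modified_since=None):
    """Return 304 if the client's copy is still current, else 200.

    Both arguments are HTTP-date strings, e.g. 'Sun, 18 Oct 2009 08:49:37 GMT'.
    """
    if if_modified_since is not None:
        if parsedate_to_datetime(last_modified) <= parsedate_to_datetime(if_modified_since):
            return 304
    return 200

print(conditional_get("Sun, 18 Oct 2009 08:49:37 GMT",
                      "Sun, 18 Oct 2009 08:49:37 GMT"))  # 304
print(conditional_get("Mon, 19 Oct 2009 09:00:00 GMT",
                      "Sun, 18 Oct 2009 08:49:37 GMT"))  # 200
```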
Since Range headers were initially designed to refer to bytes and not characters, using Range in the way you suggest for XML can cause problems. Consider the different ways XML renders the space character: "Â " is not the same number of bytes as " ". While it's possible to create your own Range units, you'll still run into similar challenges, and there is a possibility that an intermediary (a caching proxy) might not handle the Range header the way your custom implementation expects.

For XML, I think it's much safer to use a "DIFF" representation. One solution would be to allow the client to make a GET request, along with the concurrency token (last-mod or etag), for the diff resource. Upon receiving the request, the server could compare the version associated with the client's token to the most current version on the server and, if needed, produce the DIFF for the client to process, or just return 304.

GET /my-content/changes
Last-Modified: XXXXXX
ETag: XXXXXX

If it's important to keep a history of the DIFFs that are produced, you could change the interaction to require the client to use POST, allow the server to produce a new addressable resource, and have the server redirect the client to the newly created resource:

# request
POST /my-content/changes;token=XXXXXX

# response
201 Created
Location: /my-content/changes/1

# request
GET /my-content/changes/1

In either case, I would include a link in the content document itself with a rel-tag that informs the client of the availability of the DIFF:

<content>
<link href="http://www.example.org/my-content/changes;token=XXXXX" rel="changes" />
...
</content>

This last item would need to be documented as part of the semantics of your <content> media-type so that clients will be able to support the DIFF operation.

mca
http://amundsen.com/blog/

On Wed, Oct 21, 2009 at 08:43, Michael Crute <mcrute@...> wrote:
> I'm writing a RESTful web service to update content on a mobile
> device.
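The link-discovery step suggested above can be sketched on the client side. The <content> and <link rel="changes"> element names come from the example in the reply; everything else (function names, the validator helper) is a hypothetical sketch, not a prescribed API.

```python
import xml.etree.ElementTree as ET

def find_changes_link(content_xml):
    """Locate the diff resource advertised in a <content> document
    via a <link rel="changes"> element, as described above."""
    root = ET.fromstring(content_xml)
    for link in root.iter("link"):
        if link.get("rel") == "changes":
            return link.get("href")
    return None

def conditional_headers(etag=None, last_modified=None):
    """Build validator headers for the follow-up GET on the changes
    resource, so the server can answer 304 when nothing changed."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers
```

Because the client only follows the advertised href, the server remains free to change how the changes resource is addressed.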
> [...]
On Wed, Oct 21, 2009 at 8:43 AM, Michael Crute <mcrute@...> wrote:
> I'm writing a RESTful web service to update content on a mobile
> device. [...]

Would it be bad to use an If-None-Match header to tell the server that the client has a particular version of a cached resource? Then the server can look at the entity-tag and decide what diffs need to be sent. Clients of this service would just need to know how to apply diffs to the cached resource. An interesting server-side consequence is that you'd want an efficient way to calculate the diffs.

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
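The entity-tag idea above can be sketched as a server-side lookup: given the client's If-None-Match value, decide between 304, a replayable list of diffs, or a full representation. The version store, the etag values, and the diff payloads here are all hypothetical placeholders.

```python
# Hypothetical ordered version history: (etag, diff-from-previous-version).
HISTORY = [
    ("v1", None),  # initial full representation, no diff
    ("v2", "<diff>add record 7</diff>"),
    ("v3", "<diff>remove record 3</diff>"),
]

def diffs_since(if_none_match):
    """Given the client's If-None-Match entity-tag, return
    (status, diffs): 304 when the client is current, 200 with the
    diffs to replay when the tag is known, or 200 with None meaning
    'send the full representation' when the tag is unrecognized."""
    etags = [etag for etag, _ in HISTORY]
    if if_none_match == etags[-1]:
        return 304, []
    if if_none_match in etags:
        start = etags.index(if_none_match) + 1
        return 200, [diff for _, diff in HISTORY[start:]]
    return 200, None  # unknown tag: fall back to the complete document
```

The efficiency concern David raises shows up here as the cost of keeping (or recomputing) the per-version diffs; storing them alongside each version trades space for request-time work.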
Hello Bill.

Not sure of your intention with the comment. It sounds like disapproval, due to the negative connotation of the word "stereotype". I'm not even sure what you make of my idea in writing the list. Let me state it again:

"I love doing so sometimes (Naming things) to know what am I referring to"
"I guess it helps understand where are we standing in terms of REST usability and knowledge."

I have been in hundreds of discussions. Most of the time, two people discuss different things that are named the same. So, my first action is to clarify what I mean by that name. Just as I did now with my idea in the lines above.

Ok, let's start again: I've created a classification of the common understandings of REST shown by people I've read and discussed with. The intention is to identify how the REST term is used and understood. It makes no sense to put a face on each class when we want common behavior for analysis.

Sorry if taken as a bad thing. Not my intention.

William.

--- In rest-discuss@yahoogroups.com, Bill de hOra <bill@...> wrote: > > Assign a person to each of your stereotypes, let them and everyone else > know. > > Still a good idea? > > Bill > > willmarpo wrote: > > > > > > Hello. > > Reading through all this good material about REST, I find some old time > > discussions around. Someone suggested naming things is not so good, but > > I love doing so somethings to know what am I referring to. > > > > So, having all of you as REST fans, I wanted to present a classification > > I did two days ago while riding the bus to work. Silly? It may be, but I > > guess it helps understand where are we standing in terms of REST > > usability and knowledge. > > > > *API Makers*: I find them everywhere. They have a system, usually not > > built thinking on REST, and they want an API created. They usually think > > REST is an API making technique or recipe, for the web. > > /Subcategories: > > / - *URI Jugglers*.
This are the ones that think REST is all about > > creating URIs, and nothing more. So their discussions are solely focused > > on URIs, and their presentations are about URIs definitions. > > - *RPCers*. Bad group that think REST is a way to map RPC in disguise > > using URIs in a web API. The most of them don't know they speak RPC at all. > > - *Exposers*: This type is repeated below. Those are the guys that > > think you need to expose things in REST using resources. So REST is an > > API for exposing things on the web. > > - *CRUDers*: Another repeated group. They think REST is a web api for > > CRUD. Simple. > > > > *Mappers*: This other category may use the API idea, but they actually > > thing REST is a representation type and the work to be done is to map > > all that is know used to that new type. Interesting? > > - *CRUDers*. Again, the idea is that CRUD can be mapped naturally to > > HTTP operations, and that > > makes it RESTful. > > - *HTTPers*. They believe REST is HTTP. Deep enough. > > - *Exposers*. Again too. They usually try to map all classes, data > > entities, elements into resources, and then call their systems RESTful. > > * > > FAD followers?*: This is a group of the reminders of the types. > > Usually, they tend to follow a lead. > > - *Standard Haters*: Here you have all those that think Standards are > > evil and that REST is an anarchy where you have the freedom to do > > whatever you like, so they follow REST doing whatever they want. > > - *KISS lovers*. This are the ones that like thinks to be simple. And > > someone told them REST is easy, so they follow doing easy things with > > URIs. There are lots of URI jugglers in this group. > > - *Servicers*. They think Services is good, and someone told them REST > > is a way to do services without SOAP. So they follow. > > - *BuzzWorders*. This is a vast majority. They like buzz words, so they > > follow REST just because it is cool and all people talk about it.
There > > are some Buzz creators too, with thinks like ROA and REST in WOA. No pun > > intended on REST-* > > > > Is there someone I'm missing? Well, yes, probably the group that knows > > REST as it actually is and understands it. That may be a one person > > group (yes Roy). > > > > I'm may not be saying all those believes up there are wrong. I'm NOT > > saying they are good, at all. > > > > What do you think? Do you find yourself in any of those groups? > > > > William Martinez Pomares. > > > > >
I like it. It's quite refreshing compared to the normal technical arguments about technology platform X or language Y. I wish more architects presented in this way. On Wed, Oct 21, 2009 at 6:17 AM, Tim Williams <williamstw@...> wrote: > I want to start utilizing the framework presented in the first half of > the thesis to communicate. When trying to describe it in terms of > technology, as Roy did[0] I've found that preconceived notions get in > the way, so I've tried to craft an analogy that communicates the > essence (constraints, desired properties, styles, etc.) without > getting bogged down by assumptions. I'd appreciate any feedback on > the correctness and clarity. > > Thanks, > --tim > > [0] - http://roy.gbiv.com/untangled/2008/on-software-architecture > > > ************************** > > Suppose a customer comes to your "information business" and says, "I > have a need for all the information on organic gardening." The > customer travels a lot and needs the information available to him for > reference when his customers ask gardening questions, but he's > frequently in the fields and, so, isn't always technology enabled. > > Your organization is quite large and information comes from a variety > of departments. To make matters worse, you're in the Solutions > Architecture side of the business and so you have no real authority to > dictate a precise solution - these are 'information engineers' after > all, who need room to flex their creativity. You are, however, > allowed to define the solution architecture by way of "constraints" > on their solution. These "constraints" take the form of: > > 1) All the information must be together. > 2) There must be a Table of Content. > 3) All information must have a reference back to the original source.
> > You have done this so often though, that you and these engineers have > agreed that these constraints can be grouped together and referenced > by a simple name, its architectural style name, instead of enumerating > each one every time. This is beneficial because you know that certain > constraints, when grouped together, evoke certain properties that are > commonly desired by your customer. This allows you to quickly match > up your customer's needs with some starting constraints. Now, you've > previously agreed that constraints 1+2+3 above will be referred to as > the Compilation Architectural Style. > > It turns out that the constraints of the Compilation Style are a good > starting point but they don't evoke all the properties that your > customer really wants. They want something that's lightweight because > they travel, they want something that's easily readable, and they also > want something that doesn't require electricity/technology. > > So you begin with an instance of the Compilation Architectural Style > and add some concrete constraints to get you from "style" to a real > architecture and evoke some properties specifically desired by your > customer. Namely, you add the following: > > A) Compilation Architectural Style > - evokes all properties known by the style > B) Information must be on paper > - evokes lack-of-technology property > C) Information must be printed in Times New Roman > - evokes readability property > D) Must be in a thin plastic binder > - evokes lightweight property > > So, you pass along the customer order and your solution architecture > to the engineers. Because you've chosen to define your solution > architecture in terms of "constraints that evoke properties," you're > able to objectively reason about them. 
> [...]
I thought your initial post was something like a "humorous" post, that the intention of it was to put a smile on everybody's faces. Because otherwise it tends to be seen as a little diminishing for everybody that falls into your categories.

"They like buzz words, so they follow REST just because it is cool and all people talk about it."

"someone told them REST is a way to do services without SOAP. So they follow..."

"someone told them REST is easy, so they follow doing easy things with URIs"

"so they follow REST doing whatever they want."

"They believe REST is HTTP. Deep enough."

"They think REST is a web api for CRUD. Simple."

Now I appreciate a good irony when it is said... ironically. But now you say that this is not a "humorous" post and that it should be taken literally, because "the intention is to identify how the REST term is used and understood." And we have to identify ourselves in one of the groups you mention, most of them "because it's cool", because "someone told them" and "so they follow", practically all of them described in terms that are either humorous or (xor) diminishing...

I myself think I can fall into more than one group, but I prefer to think that is because I'm at an early stage of working with REST and have lots of things yet to grasp and little time to do it, and not just because I found REST cool and all people talk about it, or because someone told me something and I just follow, or because I'm shallow enough to believe REST is HTTP, or naive enough to think it is simple CRUD... That would be to dismiss a person as of little intelligence, not to say other harsh words...

Finally, do you really believe there is only one person that knows REST as it actually is and understands it, and that that person is Roy, or is that part irony/humour? Because I don't belong to that particular group for sure, but saying that of some people on this list who wrote lots of good blog articles from which I learned a lot is, at least, unfair...
Not that I think your intention was that, of course.

William Martinez Pomares wrote:
> [...]
Is there a violation of REST constraints when a Web application makes use of URIs (controlled by itself) to enable clients to refer to something in a request (parameters or body)?

Example:

- a Web application has some item collections identified by URIs like http://example.org/collections/1, http://example.org/collections/2, ...

- the client knows about the URIs by discovering them in previous interactions

- the Web application exposes a search interface that lets the client use a certain parameter to apply a search to only some of the available collections, e.g. (unescaped)

GET /search?query=dog&limitSets=http://example.org/collections/2,http://example.org/collections/54

I have an uneasy feeling about this, but I cannot name the problem (if there is any). Thoughts?

Thanks,
Jan
As long as the URIs are opaque to the client, and any crafting of new URIs (such as using the querystring to generate key/value pairs) is driven from a media type definition containing the directions to build such querystrings (aka "when an input tag has the name x and the value y, append x=y to the querystring"), and as long as such x and y values are not themselves specified in the media type definition, then there's nothing wrong.

If the client has preconceptions as to how to build URIs, rather than following the construction guide provided by the server, then you introduce unnecessary coupling and make the solution non-general, hence unRESTful IMO.

Seb

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jan Algermissen
Sent: 23 October 2009 13:24
To: REST Discuss
Subject: [rest-discuss] URIs as parameter values?

[...]
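The form-driven construction Seb describes can be sketched as follows: the client assembles the querystring mechanically from names and values handed to it by the server, treating the collection URIs as opaque tokens it merely copies. The field names and the action path here come from Jan's example; the helper itself is hypothetical.

```python
from urllib.parse import urlencode

def build_search_uri(action, fields):
    """Assemble a search URI the way an HTML-form-driven client would:
    every name and value comes from the server's form description, so
    the client never invents or parses URI structure itself."""
    return action + "?" + urlencode(fields)

# Values discovered in earlier responses, never constructed by the client.
uri = build_search_uri("/search", {
    "query": "dog",
    "limitSets": "http://example.org/collections/2,http://example.org/collections/54",
})
```

Note that urlencode percent-escapes the embedded URIs, which also resolves the "unescaped" caveat in the original example.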
You know, Antonio, you may be right, still that was not the intention! You see, I was expecting an interesting discussion where people would say things like the ones you just said. I did not make the list to diminish people, but to point to uses or understandings of REST that I think are not totally correct. Since I'm not in the last group, it may be that one, several, or parts of the categories are right.

Reading it again, yes, it sounds humorous! Probably because I tend to write that way (look at some other posts I have). But I assure you that it was not my intention to make ridiculous observations of people to make fun of them. No sense. Let me take some of the phrases you pointed out:

"They like buzz words, so they follow REST just because it is cool and all people talk about it."

This is BuzzWorders. Let me tell you there is an antipattern called "Jumping on the Bandwagon". It refers to people and organizations that follow buzzwords just because they are buzzwords. Thousands of companies bought tools and platforms to have SOA in their systems. It is hard to find a tool that does not offer some kind of REST solution. And it is hard to find a company now that is not thinking of adding a REST API to its offerings. For some reason Roy exploded the last time someone was marketing their new "RESTful API". REST should be followed if it makes sense in your system. Is it suddenly the case that all systems in the world are networked systems on the web that need REST? See the point? Humorous, or diminishing of a particular person?

Now, "someone told them REST is a way to do services without SOAP. So they follow...". What can I mention about this one? Look at all the discussions on the web, thousands of them, where the two antagonistic sides are WS (SOAP) and REST. As if REST was actually a replacement for web services. And Web Services was another buzzword (I had clients asking to add WS where it didn't fit, simply because they wanted WS!).

"someone told them REST is easy, so they follow doing easy things with URIs", "so they follow REST doing whatever they want.". REST is not easy; if it is, then why are there so many questions in this forum? Hundreds of APIs were no more than RPC composed in URIs (did they read anything to keep them away from doing the most dreaded RPC? If so, why do they keep doing it?). Clients ask me to do it REST because it would be easier than web services.

"They believe REST is HTTP. Deep enough." Actually, deep enough. I had just read one post here trying REST without HTTP. I'm not sure if all the rest of the world knows that is possible. Please tell me what is humorous about this one.

"They think REST is a web api for CRUD. Simple.". Please search Google for CRUD and REST discussions. And these are very interesting indeed, for the mapping is quite close, and Roy may come and tell me that it is ok to think of a CRUDable REST.

So, can you see now that what you think of my intention may be correct, but not quite? At least I made you think about the option, think of some of them as not correct, analyze yourself to see how you fit, and maybe made you aware of what approaches to avoid. If it helps, good. If you think I mock people, then I'm sorry.

William Martinez Pomares.

--- In rest-discuss@yahoogroups.com, António Mota <amsmota@...> wrote:
> [...]
On Fri, Oct 23, 2009 at 8:24 AM, Jan Algermissen
<algermissen1971@...> wrote:
>
> Is there a violation of REST constraints when a Web application makes
> use of URIs (controlled by itself) to enable clients to refer to
> something in a request (parameters or body)?
>
> Example:
>
> - a Web application has some item collections identified by URIs like http://example.org/collections/1
> , http://example.org/collections/2,...
>
> - the client knows about the URIs by discovering them in previous
> interactions
>
> - the Web application exposes a search interface that lets the client
> use a certain parameter to
> apply a search to only some of the available collections, e.g.
> (unescaped)
>
> GET /search?query=dog&limitSets=http://example.org/collections/2,http://example.org/collections/54
>
I had a similar problem before - a huge number of possible states that
were essentially different permutations of a finite set of options.
It was suggested to me [and might work for you] to use HTML forms to
represent the interface. So, your collections would become:
<form method="GET" action="/search">
<input type="text" name="query"/>
<select name="limitSets" multiple="multiple">
<option value="http://example.org/collections/2">2</option>
<option value="http://example.org/collections/54">54</option>
</select>
</form>
or something like that anyway. The advantage is that the media type
definition provides the semantics of how to put things together, as
opposed to out-of-band knowledge...
--tim
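Tim's form-driven approach can be sketched end to end. The Python below is purely illustrative (the parser class and variable names are mine, not from the thread): a client learns the action URI and the permitted limitSets values solely from the server-supplied form, so the only shared knowledge is the HTML media type.

```python
# Hypothetical client sketch (names are mine): drive the search entirely
# from the server-supplied form, so the only out-of-band knowledge is the
# HTML media type itself.
from html.parser import HTMLParser
from urllib.parse import urlencode

FORM = """
<form method="GET" action="/search">
<input type="text" name="query"/>
<select name="limitSets" multiple="multiple">
<option value="http://example.org/collections/2">2</option>
<option value="http://example.org/collections/54">54</option>
</select>
</form>
"""

class FormReader(HTMLParser):
    """Collects the form action, free-text inputs, and select options."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.fields = {}      # field name -> None (free text) or option list
        self._select = None   # name of the <select> we are inside, if any

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.action = a.get("action")
        elif tag == "input":
            self.fields[a["name"]] = None
        elif tag == "select":
            self._select = a["name"]
            self.fields[self._select] = []
        elif tag == "option" and self._select is not None:
            self.fields[self._select].append(a["value"])

    def handle_endtag(self, tag):
        if tag == "select":
            self._select = None

reader = FormReader()
reader.feed(FORM)

# Fill in the free-text field; select every collection the server offered.
params = [("query", "dog")] + [("limitSets", v)
                               for v in reader.fields["limitSets"]]
url = reader.action + "?" + urlencode(params)
print(url)
```

Note that the client never composes collection URIs itself; it can only select among the option values the server put into the form.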
Sorry Antonio, I forgot two things. 1. That last part of Roy being the one was indeed humorous. I said May Be, which allows a set of good guys in this community to enter the group, but since I'm not there I cannot tell you who they are. 2. Irony is "incongruity between what might be expected and what actually occurs". In this case, the post is not ironical, because it tells you what is actually occurring without comparing against the expected. It is just a set of categories you can validate on internet. Cheers! William. --- In rest-discuss@yahoogroups.com, António Mota <amsmota@...> wrote: > > I though your initial post was something like a "humorous" post, that > the intention of it was to put a smile in everybody faces. Because > otherwise it tends to be seen a little diminishing for everybody that > falls in your categories. > > "They like buzz words, so they follow REST just because it is cool and > all people talk about it." > > "someone told them REST is a way to do services without SOAP. So they > follow..." > > "someone told them REST is easy, so they follow doing easy things with URIs" > > "so they follow REST doing whatever they want." > > "They believe REST is HTTP. Deep enough." > > "They think REST is a web api for CRUD. Simple." > > Now I appreciate a good irony when it is said... ironically. But now you > say that this is not a "humorous" post but it should be taken literally, > because "the intention is to identify how is the REST term used and > understood." And we have to identify ourselves in one of the groups you > mention, most of them "because it's cool", because "someone told then" > and "so they follow", practically all of them described in terms that > are either humorous or (xor) diminishing... 
> > I myself think I can fall in more than one group, but I prefer to think > that is because I'm in a early stage of working with REST and I have > lot's of things yet to grasp and little time to do it, and not just > because I found REST cool and all people talk about it., or because > someone told me something and I just follow, or because I'm shallow > enough to believe REST is HTTP, or naive enough to think is simple > CRUD... That would be to dismiss a person as of little intelligence, not > to say other harsh words... > > Finally, do you really believe there is only a person that knows REST as > it actually is and understands it, being that person Roy, or is that > part irony/humorous? Because I don't belong to that particular group for > sure, but to some people on this list that wrote lot's of good articles > in blogs from which I learned a lot that is, at least, unfair... > > Not that I think your intention was that, of course. > > William Martinez Pomares wrote: > > > > > > Hello Bill. > > Not sure of your intention with the comment. Sounds like > > disapprovement, due to the negative conception of the "stereotype" > > word. Not even sure of your idea of my idea by writing the list. > > > > Let me state it again: > > "I love doing so sometimes (Naming things) to know what am I referring to" > > "I guess it helps understand where are we standing in terms of REST > > usability and knowledge." > > > > I had been in hundreds of discussions. Most of the time, two people > > discuss about different things that are named the same. So, my first > > action is to clarify what I mean by that name. Just as I did now with > > my idea in the lines above. > > > > Ok, let's start again: I've created a classification of the common > > understandings people that I've read and discussed with, had shown > > related to REST. The intention is to identify how is the REST term > > used and understood. 
It makes no sense to put a face on each class, > > when we want common behavior for analysis. > > > > Sorry if taken as a bad thing. Not my intention. > > > > William. > > > > --- In rest-discuss@yahoogroups.com > > <mailto:rest-discuss%40yahoogroups.com>, Bill de hOra <bill@> wrote: > > > > > > Assign a person to each of your stereotypes, let them and everyone else > > > know. > > > > > > Still a good idea? > > > > > > Bill > > > > > > willmarpo wrote: > > > > > > > > > > > > Hello. > > > > Reading through all this good material about REST, I find some old > > time > > > > discussions around. Someone suggested naming things is not so > > good, but > > > > I love doing so somethings to know what am I referring to. > > > > > > > > So, having all of you as REST fans, I wanted to present a > > classification > > > > I did two days ago while riding the bus to work. Silly? It may be, > > but I > > > > guess it helps understand where are we standing in terms of REST > > > > usability and knowledge. > > > > > > > > *API Makers*: I find them everywhere. They have a system, usually not > > > > built thinking on REST, and they want an API created. They usually > > think > > > > REST is an API making technique or recipe, for the web. > > > > /Subcategories: > > > > / - *URI Jugglers*. This are the ones that think REST is all about > > > > creating URIs, and nothing more. So their discussions are solely > > focused > > > > on URIs, and their presentations are about URIs definitions. > > > > - *RPCers*. Bad group that think REST is a way to map RPC in disguise > > > > using URIs in a web API. The most of them don't know they speak > > RPC at all. > > > > - *Exposers*: This type is repeated below. Those are the guys that > > > > think you need to expose things in REST using resources. So REST > > is an > > > > API for exposing things on the web. > > > > - *CRUDers*: Another repeated group. They think REST is a web api for > > > > CRUD. Simple. 
> > > > > > > > *Mappers*: This other category may use the API idea, but they > > actually > > > > thing REST is a representation type and the work to be done is to map > > > > all that is know used to that new type. Interesting? > > > > - *CRUDers*. Again, the idea is that CRUD can be mapped naturally to > > > > HTTP operations, and that > > > > makes it RESTful. > > > > - *HTTPers*. They believe REST is HTTP. Deep enough. > > > > - *Exposers*. Again too. They usually try to map all classes, data > > > > entities, elements into resources, and then call their systems > > RESTful. > > > > * > > > > FAD followers?*: This is a group of t he reminders of the types. > > > > Usually, they tend to follow a lead. > > > > - *Standard Haters*: Here you have all those that think Standards are > > > > evil and that REST is an anarchy where you have the freedom to do > > > > whatever you like, so they follow REST doing whatever they want. > > > > - *KISS lovers*. This are the ones that like thinks to be simple. And > > > > someone told them REST is easy, so they follow doing easy things with > > > > URIs. There are lots of URI jugglers in this group. > > > > - *Servicers*. They think Services is good, and someone told them > > REST > > > > is a way to do services without SOAP. So they follow. > > > > - *BuzzWorders*. This is a vast majority. They like buzz words, so > > they > > > > follow REST just because it is cool and all people talk about it. > > There > > > > are some Buzz creators too, with thinks like ROA and REST in WOA. > > No pun > > > > intended on REST-* > > > > > > > > Is there some one I'm missing? Well, yes, probably the group that > > knows > > > > REST as it actually is and understan ds it. That may be a one person > > > > group (yes Roy). > > > > > > > > I'm may not be saying all those believes up there are wrong. I'm NOT > > > > saying they are good, at all. > > > > > > > > What do you think? Do you find yourself in any of those groups? 
> > > > > > > > William Martinez Pomares. > > > > > > > > > > > > > > > >
Tim, On Oct 23, 2009, at 3:06 PM, Tim Williams wrote: > On Fri, Oct 23, 2009 at 8:24 AM, Jan Algermissen > <algermissen1971@...> wrote: >> >> Is there a violation of REST constraints when a Web application makes >> use of URIs (controlled by itself) to enable clients to refer to >> something in a request (parameters or body)? >> >> Example: >> >> - a Web application has some item collections identified by URIs >> like http://example.org/collections/1 >> , http://example.org/collections/2,... >> >> - the client knows about the URIs by discovering them in previous >> interactions >> >> - the Web application exposes a search interface that lets the client >> use a certain parameter to >> apply a search to only some of the available collections, e.g. >> (unescaped) >> >> GET /search?query=dog&limitSets=http://example.org/collections/2,http://example.org/collections/54 >> > > I had a similar problem before - a huge number of possible states that > were essentially different permutations of a finite set of options. > It was suggested to me [and might work for you] to use HTML forms to > represent the interface. So, your collections would become: > > <form method="GET" action="/search"> > <input type="text" name="query"/> > <select name="limitSets" multiple="multiple"> > <option value="http://example.org/collections/2">2</option> > <option value="http://example.org/collections/54">54</option> > </select> > </form> yes, exactly. That is fine, because the client only reacts to what it has been given by the server. The problem with that for me is the number of possible identifiers in my case. > > or something like that anyway. The advantage is that the media type > definition provides the semantics of how to put things together, as > opposed to out-of-band knowledge... My concern is not so much regarding the out-of-band knowledge, I think, because the use of the URIs as parameter values can be specified in the parameter or link relation spec. 
I think my concern is more around the lifetime of the URI since it is not presented by the server as part of the definition of the next transition (the form submission) but (possibly a long time) before. I am also not sure if there is a self-descriptiveness issue, because the meaning of the submission depends on the current state of the resource identified by the URI. IIRC there has been a thread somewhere (cannot find it right now) that discussed the placement of an order with just the shopping cart URI as opposed to an explicit listing of the intended items. POST /orders cart=/users/65525/cart as opposed to POST /orders <order> <item>...</item> <item>...</item> <item>...</item> </order> Jan > > --tim -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
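Jan's by-reference versus by-value distinction can be made concrete with a toy model (entirely my own construction; only the cart path comes from his example). Posting the cart URI means the server dereferences the cart at processing time, so the order depends on the cart's state at that moment, whereas an explicit item list freezes the intent inside the message:

```python
# Toy model of the two order-placement styles contrasted above. The helper
# names are hypothetical; only the cart path comes from the example.
carts = {"/users/65525/cart": ["book"]}

def place_order_by_value(items):
    # POST /orders with an explicit <order> listing: the message itself
    # carries the intended items, frozen at submission time.
    return list(items)

def place_order_by_reference(cart_uri):
    # POST /orders with cart=<uri>: the server dereferences the cart
    # when it processes the order, however much later that is.
    return list(carts[cart_uri])

snapshot = place_order_by_value(carts["/users/65525/cart"])
carts["/users/65525/cart"].append("lamp")   # cart changes after submission
late = place_order_by_reference("/users/65525/cart")
# snapshot is ["book"], but late is ["book", "lamp"]
```

The divergence between snapshot and late is exactly the self-descriptiveness worry: the by-reference request's meaning depends on state outside the message.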
I'm looking for additional references for the architectural properties found in section 2.3.4 of Roy's paper. I was curious how Roy came up with his list. I've never done a dissertation, so if I'm parsing the paper incorrectly, please let me know.
On Oct 23, 2009, at 10:28 AM, Noah Campbell wrote:
> I'm looking for additional references for the architectural properties
> found in section 2.3.4 of Roy's paper. I was curious how Roy came
> up with his list. I've never done a dissertation, so if I'm parsing
> the paper incorrectly, please let me know.
There wasn't any one reference. There are a lot of references in the
references list, some of which define what I called a property.
Usually these are defined in the literature as software qualities
or system properties.
You might want to check the new book on Software Architecture by
Taylor (my dissertation committee chair), Medvidovic, and Dashofy:
http://www.softwarearchitecturebook.com/
http://www.amazon.com/dp/0470167742
though I don't know if they used the same terminology as my diss.
I am still waiting for my free copy. ;-)
....Roy
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > Is there a violation of REST constraints when a Web application makes > use of URIs (controlled by itself) to enable clients to refer to > something in a request (parameters or body)? > I think this might be one of those areas where you need to be careful with what you are doing to make sure you aren't deviating from the uniform interface. I think the biggest worry might be when a URI in the parameters or body should really be the URI being invoked. For example, POST /put uri=/foo&val=bar instead of PUT /foo bar is a bad idea. But POST /merger uri1=/foo&uri2=/bar to create a new resource whose initial state is based on some combination of the state of /foo and /bar seems ok to me. Andrew
2009/10/23 Sebastien Lambla <seb@...> > As long as the URIs are opaque from the client, and any crafting of new URIs > (such as using the querystring to generate key/value pairs) is driven from a > media type definition containing the directions to build such querystrings > (aka "when an input tag has the name x and the value y, append the > querystring using x=y), and as long as such x and y values are not specified > themselves in the media type definition (...) > I don't understand this last part: "as long as such x and y values are not specified themselves in the media type definition". Why does it matter, and why should they not be specified in the media type?
Hi, First of all, I also interpreted this as humorous. (Personally, I'm probably something like a "loose coupling addict". ) However, using humor to achieve a (serious) goal is a good thing. I imagine one could have all sorts of "so you think you know REST"-checklists / personal assessment tools (you can probably take this too far..). Having some way of giving developers feedback on which "fan types" they might be, could be a good thing. Let's say you end up as a "URI-juggler", you might realize that you should read up on HATEOAS etc. And by using humor, it might encourage more people to do so etc. etc. Cheers, Erling On Fri, Oct 23, 2009 at 1:09 PM, William Martinez Pomares < wmartinez@...> wrote: > > > Sorry Antonio, I forgot two things. > > 1. That last part of Roy being the one was indeed humorous. I said May Be, > which allows a set of good guys in this community to enter the group, but > since I'm not there I cannot tell you who they are. > > 2. Irony is "incongruity between what might be expected and what actually > occurs". In this case, the post is not ironical, because it tells you what > is actually occurring without comparing against the expected. It is just a > set of categories you can validate on internet. > > Cheers! > > William. > > > --- In rest-discuss@yahoogroups.com <rest-discuss%40yahoogroups.com>, > António Mota <amsmota@...> wrote: > > > > I though your initial post was something like a "humorous" post, that > > the intention of it was to put a smile in everybody faces. Because > > otherwise it tends to be seen a little diminishing for everybody that > > falls in your categories. > > > > "They like buzz words, so they follow REST just because it is cool and > > all people talk about it." > > > > "someone told them REST is a way to do services without SOAP. So they > > follow..." > > > > "someone told them REST is easy, so they follow doing easy things with > URIs" > > > > "so they follow REST doing whatever they want." 
> > [...]
Thinking about this a little more, I have a question I'd like clarified. We talked about unique naming and how there shouldn't be /resource.xml and /resource.json, but rather /resource and two representations based on the Accept header. But in hindsight, what's the difference between GET /resource.xml GET /resource.json and GET /resource Accept: application/xml GET /resource Accept: application/json Semantically, the queries can be identical. Logically, one would ASSUME they're identical. From a caching point of view, they are separate requests. A cache that has the XML representation won't be able to answer a JSON query, so both have a similar caching impact in terms of ensuring that the cache is properly synced with both representations. So, on the surface, they really don't seem that much different to me. I was curious what other folks thought. Regards, Will Hartung (willh@...)
On Oct 26, 2009, at 6:19 PM, Will Hartung wrote: > Thinking about this a little more, I have a question I'd like > clarified. > > We talked about unique naming and how there shouldn't be /resource.xml > and /resource.json, but rather /resource and two representations based > on the Accept header. I'd still make the variants resources in their own right, either using redirection based on the Accept header or at least providing Content-Location. > > But in hindsight, what's the difference between > > GET /resource.xml > GET /resource.json > > and > > GET /resource > Accept: application/xml > > GET /resource > Accept: application/json > > Semantically, the queries can be identical. I'd not call them queries but requests. You are not querying the resources but invoking the GET method. > Logically, one would > ASSUME they're identical. It is a matter of what URI you communicate to the client as the 'entry point'. If the client only relies on the knowledge of /resource then the server has more freedom to add more variants later or change the URIs of the variants. In the 'Cool URIs don't change' mindset /resource would be 'cool' and the specific ones would just be URIs discovered at runtime, likely to change at some point. > > From a caching point of view, they are separate requests. A cache that > has the XML representation won't be able to answer a JSON query, so > both have a similar caching impact in terms of ensuring that the cache > is properly synced with both representations. > > So, on the surface, they really don't seem that much different to me. > I was curious what other folks thought. The approach is different. The latter one offers more flexibility. The server can even answer with an explanatory body if the request is Not Acceptable, telling the client what variants are available. This helps decoupling (fewer assumptions made by the client). HTH, Jan > > Regards, > > Will Hartung > (willh@...) > > > ------------------------------------ > > Yahoo! 
Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
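Jan's points (naming the chosen variant with Content-Location, answering Not Acceptable with an explanatory body) can be sketched as a tiny handler. This is a speculative illustration; the variant table and function name are invented for the example:

```python
# Speculative sketch of the negotiation described above: GET /resource
# serves a variant per the Accept header, names it with Content-Location,
# and on 406 Not Acceptable explains what variants exist.
VARIANTS = {
    "application/xml":  ("/resource.xml",  "<resource/>"),
    "application/json": ("/resource.json", "{}"),
}

def negotiate(accept_header):
    """Handle GET /resource; returns (status, headers, body)."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()   # ignore q-values here
        if media_type in VARIANTS:
            location, body = VARIANTS[media_type]
            return (200,
                    {"Content-Type": media_type,
                     "Content-Location": location,   # the variant's own URI
                     "Vary": "Accept"},
                    body)
    # Not Acceptable: tell the client what is available, so it can recover
    # without out-of-band knowledge.
    return (406, {"Content-Type": "text/plain"},
            "available variants: " + ", ".join(sorted(VARIANTS)))

status, headers, body = negotiate("application/json")
```

The 406 body is what lets a client discover the variants at runtime instead of hard-coding /resource.xml or /resource.json.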
On Mon, Oct 26, 2009 at 13:19, Will Hartung <willh@...> wrote: > But in hindsight, what's the difference between > > GET /resource.xml > GET /resource.json > > and > > GET /resource > Accept: application/xml > > GET /resource > Accept: application/json > From my POV, media-type selection should be treated separately from resource selection. For that reason, I use the Accept header and conneg to determine the representation format/semantics. > From a caching point of view, they are separate requests. A cache that > has the XML representation won't be able to answer a JSON query, so > both have a similar caching impact in terms of ensuring that the cache > is properly synced with both representations. While it is possible that caches will need to make additional requests to the origin server to get the specific negotiated media type for a client, it is a different story when it comes to invalidating a cached resource. If caches are using the "generic" resource URI (/resource), any PUT/POST/DELETE that passes through that intermediary will invalidate _all_ the representation formats. If a unique URI is used for each format, the cache can fall into a pretty bad state since only the specific representation will be invalidated. MCA > ------------------------------------ > > Yahoo! Groups Links > > > >
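Mike's invalidation point can be modeled with a toy intermediary (my own simplification, not any real cache): entries are keyed by (URI, media type), and an unsafe request invalidates every entry sharing the request URI.

```python
# Toy cache model (my simplification) illustrating the invalidation
# consequences of generic vs. per-format URIs.
class ToyCache:
    def __init__(self):
        self.store = {}   # (uri, media_type) -> cached body

    def put(self, uri, media_type, body):
        self.store[(uri, media_type)] = body

    def invalidate(self, uri):
        # A PUT/POST/DELETE to `uri` passed through: drop its entries.
        for key in [k for k in self.store if k[0] == uri]:
            del self.store[key]

# One generic URI: a single unsafe request clears both representations.
generic = ToyCache()
generic.put("/resource", "application/xml", "<r/>")
generic.put("/resource", "application/json", "{}")
generic.invalidate("/resource")

# Per-format URIs: updating the XML variant leaves the JSON variant stale.
split = ToyCache()
split.put("/resource.xml", "application/xml", "<r/>")
split.put("/resource.json", "application/json", "{}")
split.invalidate("/resource.xml")
```

After the generic invalidation the store is empty; after the split one, /resource.json is still served from cache even though the underlying state changed.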
The more I think about this problem (and I've been thinking of it a lot lately in the context of OCCI), the more I think we should rely on HTTP content negotiation to select appropriate types. If we must embed mime-types into URLs then I would likely prefer something like '/resource;type=text/plain' over a mapping from MIME types to file extensions. Sam On Mon, Oct 26, 2009 at 6:54 PM, mike amundsen <mamund@...> wrote: > On Mon, Oct 26, 2009 at 13:19, Will Hartung <willh@...> wrote: > > > But in hindsight, what's the difference between > > > > GET /resource.xml > > GET /resource.json > > > > and > > > > GET /resource > > Accept: application/xml > > > > GET /resource > > Accept: application/json > > > > From my POV, media-type selection should be treated separately from > resource selection. For that reason, I use the Accept header and > conneg to determine the representation format/semantics. > > > From a caching point of view, they are separate requests. A cache that > > has the XML representation won't be able to answer a JSON query, so > > both have a similar caching impact in terms of ensuring that the cache > > is properly synced with both representations. > > While it is possible that caches will need to make additional requests > to the origin server to get the specific negotiated media type for a > client, it is a different story when it comes to invalidating a cached > resource. If caches are using the "generic" resource URI (/resource), > any PUT/POST/DELETE that passes through that intermediary will > invalidate _all_ the representation formats. If a unique URI is used > for each format, the cache can fall into a pretty bad state since only > the specific representation will be invalidated. > > MCA > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
If you're already using the URL, what practical difference does it make to have '/resource;type=application/xml' or '/resource.xml'? IMHO, The .xml extension is better understood. -Solomon On Mon, Oct 26, 2009 at 1:58 PM, Sam Johnston <samj@...> wrote: > > > The more I think about this problem (and I've been thinking of it a lot > lately in the context of OCCI), the more I think we should rely on HTTP > connection negotiation to select appropriate types. > > If we must embed mime-types into URLs then I would likely prefer something > like '/resource;type=text/plain' than having a mapping from MIME types to > file extensions. > > Sam > > On Mon, Oct 26, 2009 at 6:54 PM, mike amundsen <mamund@...> wrote: > >> On Mon, Oct 26, 2009 at 13:19, Will Hartung <willh@...> wrote: >> >> > But in hindsight, what's the difference between >> > >> > GET /resource.xml >> > GET /resource.json >> > >> > and >> > >> > GET /resource >> > Accept: application/xml >> > >> > GET /resource >> > Accept: application/json >> > >> >> From my POV, media-type selection should be treated separately from >> resource selection. For that reason, I use the Accept header and >> conneg to determine the representation format/semantics. >> >> > From a caching point of view, they are separate requests. A cache that >> > has the XML representation won't be able to answer a JSON query, so >> > both have a similar caching impact in terms of ensuring that the cache >> > is properly synced with both representations. >> >> While it is possible that caches will need to make additional requests >> to the origin server to get the specific negotiated media type for a >> client, it is a different story when it comes to invalidating a cached >> resource. If caches are using the "generic" resource URI (/resource), >> any PUT/POST/DELETE that passes through that intermediary will >> invalidate _all_ the representation formats. 
If a unique URI is used >> for each format, the cache can fall into a pretty bad state since only >> the specific representation will be invalidated. >> >> MCA
If you have a Flash client, you're better off using extensions since Flash doesn't have the ability to set the Accept header. In that case, since you're forced to use extensions, couldn't the use of ETags solve the caching issues? -Solomon On Mon, Oct 26, 2009 at 1:54 PM, mike amundsen <mamund@yahoo.com> wrote: > > > On Mon, Oct 26, 2009 at 13:19, Will Hartung <willh@mirthcorp.com> > wrote: > > > But in hindsight, what's the difference between > > > > GET /resource.xml > > GET /resource.json > > > > and > > > > GET /resource > > Accept: application/xml > > > > GET /resource > > Accept: application/json > > > > From my POV, media-type selection should be treated separately from > resource selection. For that reason, I use the Accept header and > conneg to determine the representation format/semantics. > > > > From a caching point of view, they are separate requests. A cache that > > has the XML representation won't be able to answer a JSON query, so > > both have a similar caching impact in terms of ensuring that the cache > > is properly synced with both representations. > > While it is possible that caches will need to make additional requests > to the origin server to get the specific negotiated media type for a > client, it is a different story when it comes to invalidating a cached > resource. If caches are using the "generic" resource URI (/resource), > any PUT/POST/DELETE that passes through that intermediary will > invalidate _all_ the representation formats. If a unique URI is used > for each format, the cache can fall into a pretty bad state since only > the specific representation will be invalidated. > > MCA
On Mon, Oct 26, 2009 at 7:02 PM, Solomon Duskis <sduskis@...> wrote: > If you're already using the URL, what practical difference does it make to > have '/resource;type=application/xml' or '/resource.xml'? IMHO, The .xml > extension is better understood. > It removes an unnecessary layer of indirection (e.g. mime.types). That said, I don't particularly like either for reasons already discussed. Sam
It turns out more than one HTTP client stinks at content negotiation. Some common browsers are notorious (IE sends Accept: */* on any refresh of a page!). Some are dumb (MS-Excel sends Accept: text/html, text/csv without any quality info, which usually results in sending text/html when csv is expected). And the list goes on. I use a variant of the MimeParse utility (http://code.google.com/p/mimeparse/) that allows server programmers to add override information for some clients (as in the above), but this still falls short in some cases. For that reason, it sometimes makes more sense to skip conneg and use explicit URIs that contain representation format hints (/resource.xml) and just get used to the idea that caches can fall into a bad state. Of course, this is never an issue when I'm implementing my own custom client (desktop apps, console apps). Therefore, I treat this approach (/resource.xml) as a _patch_ to support selected poorly-implemented clients, not a standard practice to adopt or defend. mca http://amundsen.com/blog/ On Mon, Oct 26, 2009 at 14:07, Sam Johnston <samj@...> wrote: > On Mon, Oct 26, 2009 at 7:02 PM, Solomon Duskis <sduskis@...> wrote: >> >> If you're already using the URL, what practical difference does it make to >> have '/resource;type=application/xml' or '/resource.xml'? IMHO, The .xml >> extension is better understood. > > It removes an unnecessary layer of indirection (e.g. mime.types). > That said, I don't particularly like either for reasons already discussed. > Sam >
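[The q-value parsing and per-client overrides mca describes can be sketched roughly as follows. This is in the spirit of the MimeParse variant he mentions, not its actual code; the override table and the User-Agent substring matching are invented for the example.]

```python
# Sketch: Accept-header parsing with quality factors, plus a
# per-client override table for broken negotiators (the MS-Excel
# case above). All table contents are illustrative.

OVERRIDES = {
    # MS-Excel sends "text/html, text/csv" with no q-values; force csv.
    "ms-excel": "text/csv",
}

def parse_accept(accept):
    """Return media types from an Accept header, best quality first."""
    parsed = []
    for part in accept.split(","):
        pieces = part.strip().split(";")
        mtype, q = pieces[0].strip(), 1.0
        for p in pieces[1:]:
            key, _, value = p.strip().partition("=")
            if key == "q":
                try:
                    q = float(value)
                except ValueError:
                    pass  # malformed q-value: keep the default of 1.0
        parsed.append((q, mtype))
    parsed.sort(key=lambda t: -t[0])  # stable, so ties keep header order
    return [m for _, m in parsed]

def best_match(supported, accept, user_agent=""):
    """Pick a supported type, letting known-broken clients be overridden."""
    for key, forced in OVERRIDES.items():
        if key in user_agent.lower() and forced in supported:
            return forced
    for mtype in parse_accept(accept):
        if mtype in supported:
            return mtype
        if mtype == "*/*":
            return supported[0]
    return None
```

Without the override, Excel's header would select text/html; with it, csv wins.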
On Mon, Oct 26, 2009 at 7:18 PM, mike amundsen <mamund@...> wrote: > It turns out more than one HTTP client stinks at content negotiation. > some common browsers are notorious (IE sends Accept:*/* on any refresh > of a page!). Some are dumb (MS-Excel sends Accept: text/html, text/csv > without any quality info which usually results in sending text/html > when csv is expected). And the list goes on. > You can include pretty much anything that involves human interaction into that bucket too... be it a browser, command line client or something else. I think a good deal of the harm caused by having a separate URL for each representation (e.g. /resource.xml) could be limited by also sending Link: headers with rel=canonical pointing at the resource itself (e.g. /resource). Doesn't help you with today's caches, but at least it gives you a way to work out that two URLs are representations of the same resource without having to parse URLs that should be opaque. See draft-johnston-addressing-link-relations (http://tools.ietf.org/html/draft-johnston-addressing-link-relations#section-2) and draft-nottingham-http-link-header (http://tools.ietf.org/html/draft-nottingham-http-link-header) for the specifics. Sam
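[The rel=canonical idea above amounts to a pair of tiny header helpers. A hypothetical sketch, not from either draft; the parsing regex is a simplification of the full Link-header grammar:]

```python
# Sketch: emit and read a rel="canonical" Link header so that
# /resource.xml and /resource.json can be tied back to /resource
# without parsing (otherwise opaque) URIs.

import re

def canonical_link_header(canonical_uri):
    """Build a Link header value pointing at the canonical resource."""
    return f'<{canonical_uri}>; rel="canonical"'

def canonical_from_link(link_header):
    """Extract the canonical target from a Link header, if present."""
    for part in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>\s*;\s*rel="?canonical"?', part)
        if m:
            return m.group(1)
    return None
```

A response for /resource.xml would then carry `Link: </resource>; rel="canonical"`, which a client (though not today's caches) can use to correlate the two URIs.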
On Mon, Oct 26, 2009 at 10:54 AM, mike amundsen <mamund@...> wrote: > While it is possible that caches will need to make additional requests > to the origin server to get the specific negotiated media type for a > client, it is a different story when it comes to invalidating a cached > resource. If caches are using the "generic" resource URI (/resource), > any PUT/POST/DELETE that passes through that intermediary will > invalidate _all_ the representation formats. If a unique URI is used > for each format, the cache can fall into a pretty bad state since only > the specific representation will be invalidated. This alone I think helps cement the difference, as just because an application can conflate /resource.xml and /resource with the proper Accept header, doesn't mean that a cache will. And the PUT/POST/DELETE cache invalidation is a pretty powerful reason to not use extensions in place of mime/types. Mind, it doesn't solve the problem of multiple caches, but, really, nothing does save simply using the cache headers properly. If you can't afford stale data, then, you can't use the cache. Regards, Will Hartung (willh@...)
Will: Another hack for keeping caches in line is to resort to the "Validation Model" that requires intermediaries to use ETags and Last-Modified headers to validate their cached copy each time before delivering it to clients. It doesn't reduce traffic to the origin server, but it does cut down on bandwidth. I only use this pattern on resources that require this high level of accuracy. Turns out this level is not needed as often as users (or even developers) expect - esp. on public Web apps. mca http://amundsen.com/blog/ On Mon, Oct 26, 2009 at 14:27, Will Hartung <willh@...> wrote: > On Mon, Oct 26, 2009 at 10:54 AM, mike amundsen <mamund@...> wrote: >> While it is possible that caches will need to make additional requests >> to the origin server to get the specific negotiated media type for a >> client, it is a different story when it comes to invalidating a cached >> resource. If caches are using the "generic" resource URI (/resource), >> any PUT/POST/DELETE that passes through that intermediary will >> invalidate _all_ the representation formats. If a unique URI is used >> for each format, the cache can fall into a pretty bad state since only >> the specific representation will be invalidated. > > This alone I think helps cement the difference, as just because an > application can conflate /resource.xml and /resource with the proper > Accept header, doesn't mean that a cache will. > > And the PUT/POST/DELETE cache invalidation is a pretty powerful reason > to not use extensions in place of mime/types. > > Mind, it doesn't solve the problem of multiple caches, but, really, > nothing does save simply using the cache headers properly. If you > can't afford stale data, then, you can't use the cache. > > Regards, > > Will Hartung > (willh@mirthcorp.com)
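[The "Validation Model" above boils down to: always send a validator, mark the response so caches must revalidate, and answer conditional requests with 304. A minimal sketch, with made-up names; real servers would derive the ETag from stored metadata rather than hashing the body per request:]

```python
# Sketch: ETag-based validation. "no-cache" tells intermediaries they
# may store the response but must revalidate before reusing it, so a
# matching If-None-Match costs a round trip but no body transfer.

import hashlib

def respond(body, if_none_match=None):
    """Return (status, headers, body) honoring If-None-Match."""
    etag = '"%s"' % hashlib.sha1(body.encode()).hexdigest()[:12]
    headers = {
        "ETag": etag,
        "Cache-Control": "no-cache",  # force revalidation on every use
    }
    if if_none_match == etag:
        return 304, headers, b""
    return 200, headers, body.encode()
```

As mca says, this trades origin traffic for bandwidth: the 304 carries headers only.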
Michael Crute wrote: > > Client sends a GET request with an If-Range header that specifies the > last download date and a Range header that specifies the same. The > server could then send back a response of 304 "Not Modified", 206 > "Partial Resource" along with the deltas as the body or a 200 with all > of the records as the body. > Has anyone else noticed a trend towards HTTP clients that incorporate an httpd? The client could POST a request for updates to the origin server, the origin server could then send a PATCH request to the client- side httpd. Thoughts? -Eric
Range headers make sense only when the server can produce the same set of bytes for a given representation. Most text formats don't guarantee this. That's one reason why the complicated "Canonical XML" spec exists. I would just make up a URI for such use cases. GET /stuff?lastmod=2009-10-18T08:49:37Z Subbu On Oct 21, 2009, at 5:43 AM, Michael Crute wrote: > I'm writing an RESTful web service to update content on a mobile > device. We are currently using the If-Modified-Since header along with > 304 "Not Modified" response codes to ensure that the device does not > download the file more than is absolutely necessary, but I'd like to > go a step further and only provide the changed records to the device > (this is an XML file FWIW). After combing over the HTTP spec and not > finding much on Google I think this might be a valid approach: > > Client sends a GET request with an If-Range header that specifies the > last download date and a Range header that specifies the same. The > server could then send back a response of 304 "Not Modified", 206 > "Partial Resource" along with the deltas as the body or a 200 with all > of the records as the body. > > Sample Request Headers: > If-Range: Sun, 18 Oct 2009 08:49:37 GMT > Range: lastmod=Sun, 18 Oct 2009 08:49:37 GMT > > Is this approach correct? The HTTP spec suggests that I may define my > own custom units for the Range header but they may not be portable[0] > is there a standard already in place that I'm missing? > > -mike > > [0] http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.12 > > -- > ________________________________ > Michael E. Crute > http://mike.crute.org > > God put me on this earth to accomplish a certain number of things. > Right now I am so far behind that I will never die. --Bill Watterson
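[Subbu's suggestion — make the delta its own resource, e.g. GET /stuff?lastmod=2009-10-18T08:49:37Z — could be served by something like the sketch below. The record layout and timestamps are invented for illustration:]

```python
# Sketch: serve only records modified after the client's lastmod
# query parameter, instead of abusing Range for deltas.

from datetime import datetime, timezone

RECORDS = [
    {"id": 1, "modified": "2009-10-01T00:00:00Z"},
    {"id": 2, "modified": "2009-10-20T12:00:00Z"},
]

def parse_ts(value):
    """Parse an ISO-8601 UTC timestamp of the form used in the URI."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)

def records_since(lastmod):
    """Records changed strictly after the client's lastmod timestamp."""
    cutoff = parse_ts(lastmod)
    return [r for r in RECORDS if parse_ts(r["modified"]) > cutoff]
```

Each distinct lastmod value names a distinct (cacheable) delta resource, which sidesteps the byte-identity problem Subbu raises for Range.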
> > I think a good deal of the harm caused by having a separate URL for > each representation (e.g. /resource.xml) could be limited by also > sending Link: headers with rel=canonical pointing at the resource > itself (e.g. /resource). Doesn't help you with today's caches, but > at least it gives you a way to work out that two URLs are > representations of the same resource without having to parse URLs > that should be opaque. I don't think this interpretation of rel=canonical can make the resource at the request URI or Content-Location and the resource at the Link header the same. These are different resources. Subbu
While this (HTTP callbacks) might be practical in some use cases (like inside a corporate firewall), it's not going to be feasible when there are firewalls in the way that will block the server -> client callback. You might want to think about a "comet" based approach, which is what Ajax based clients often do to exchange messages bidirectionally. Basically, it involves leaving the incoming (client -> server) HTTP connection open, and the server suspends its "response" until it wants to actually send something. Craig McClanahan On Mon, Oct 26, 2009 at 11:43 AM, Eric J. Bowman <eric@bisonsystems.net> wrote: > > > Michael Crute wrote: > > > > > Client sends a GET request with an If-Range header that specifies the > > last download date and a Range header that specifies the same. The > > server could then send back a response of 304 "Not Modified", 206 > > "Partial Resource" along with the deltas as the body or a 200 with all > > of the records as the body. > > > > Has anyone else noticed a trend towards HTTP clients that incorporate > an httpd? The client could POST a request for updates to the origin > server, the origin server could then send a PATCH request to the client- > side httpd. Thoughts? > > -Eric > >
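[The parked-response mechanic behind the "comet" approach Craig describes can be reduced to a toy: the handler blocks until an event arrives or a timeout expires. The class below is purely illustrative (a real server would multiplex many such channels without one thread per connection):]

```python
# Toy sketch of long-polling: the server holds the client's request
# open and only completes the "response" when a message is published.

import threading

class LongPollChannel:
    def __init__(self):
        self._event = threading.Event()
        self._message = None

    def publish(self, message):
        """Server side: release any waiting request with this message."""
        self._message = message
        self._event.set()

    def wait_for_message(self, timeout=30.0):
        """Handler side: block until a message arrives, else time out."""
        if self._event.wait(timeout):
            self._event.clear()
            return self._message
        return None  # a real handler would answer 204 and re-poll
```

The client simply re-issues the GET each time the held response completes, giving server-push semantics over plain client-initiated HTTP.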
mike amundsen wrote: > On Mon, Oct 26, 2009 at 13:19, Will Hartung <willh@...> wrote: > > >> But in hindsight, what's the difference between >> >> GET /resource.xml >> GET /resource.json >> >> and >> >> GET /resource >> Accept: application/xml >> >> GET /resource >> Accept: application/json >> >> > > From my POV, media-type selection should be treated separately from > resource selection. For that reason, I use the Accept header and > conneg to determine the representation format/semantics. > > >> From a caching point of view, they are separate requests. A cache that >> has the XML representation won't be able to answer a JSON query, so >> both have a similar caching impact in terms of ensuring that the cache >> is properly synced with both representations. >> > > While it is possible that caches will need to make additional requests > to the origin server to get the specific negotiated media type for a > client, it is a different story when it comes to invalidating a cached > resource. If caches are using the "generic" resource URI (/resource), > any PUT/POST/DELETE that passes through that intermediary will > invalidate _all_ the representation formats. If a unique URI is used > for each format, the cache can fall into a pretty bad state since only > the specific representation will be invalidated. > > MCA > The only issue here, practically speaking, is driving user agents (e.g. web browsers) to a specific content type, which the UA can only negotiate by overriding its default accept header preference. e.g. via html, how would one go about directing a web browser to negotiate the atom representation if the following are available: /blog (text/html) /blog (application/atom+xml) Not possible - so I raised this with the html working group for consideration: http://lists.w3.org/Archives/Public/public-html/2009Oct/0527.html - Mike
On Mon, Oct 26, 2009 at 2:04 PM, Mike Kelly <mike@...> wrote: > mike amundsen wrote: > e.g. via html, how would one go about directing a web browser to negotiate > the atom representation if the following are available: Why would a web browser want an Atom representation? And why would you want to force it to ask for one? Just trying to visualize the use case here. Regards, Will Hartung (willh@...)
Will Hartung wrote: > On Mon, Oct 26, 2009 at 2:04 PM, Mike Kelly <mike@...> wrote: > >> mike amundsen wrote: >> e.g. via html, how would one go about directing a web browser to negotiate >> the atom representation if the following are available: >> > > Why would a web browser want an Atom representation? And why would you > want to force it to ask for one? > > Just trying to visualize the use case here. > > Regards, > > Will Hartung > (willh@...) > An HTML document wishing to provide links to the blog 'page' and the blog 'feed'. <a href="/blog">My Blog Page (HTML)</a> <a href="/blog">My Blog Feed (Atom)</a>
On Mon, Oct 26, 2009 at 10:18 PM, Mike Kelly <mike@...> wrote: > > An HTML document wishing to provide links to the blog 'page' and the > blog 'feed'. > > <a href="/blog">My Blog Page (HTML)</a> > <a href="/blog">My Blog Feed (Atom)</a> Another example would be making a document available in many formats (HTML, DOC, PDF, TXT, etc.), and one specific to the work I'm doing with IaaS is having a HTML rendering of a virtual machine which also links to one or more "native" renderings (e.g. OVF). This would also be useful for HTML & Atom <LINK>s and the HTTP Link: header. Sam
On Mon, Oct 26, 2009 at 2:18 PM, Mike Kelly <mike@...> wrote: > An HTML document wishing to provide links to the blog 'page' and the blog > 'feed'. > > <a href="/blog">My Blog Page (HTML)</a> > <a href="/blog">My Blog Feed (Atom)</a> Sure, but clearly, for a "normal" web browser, the fact that "/blog" replies in HTML is sufficient, right? If someone actually wanted an Atom feed, they'd set the Accept header correctly, right? And there's no expectation that a web browser would do that, is there? With Firefox, if you use the built-in RSS feed tool, it gives you a choice of which format to read. I don't use it, but since you, the user, are telling the browser what format you want, ideally it will ask for that format. (I have no idea what it does to try and subscribe). But when I see what you have above, identical URLs with different titles, clearly as a user if I click on either, I'd get the same result since the actual client (the browser in HTML link click mode) will pretty much be interested in HTML, and that's what it will ask for. If you were building an automated client, then observation of the payload by you (as a human) will tell you where you can get an Atom feed (it says so, documented in English), and ideally you'll ask for that type explicitly. Do you want to add type information to the link to coerce the client to change its type? <a href="/blog" type="text/html">My Blog Page (HTML)</a> <a href="/blog" type="application/atom+xml">My Blog Page (Atom)</a> Do you think that if you clicked on the second that the browser would view, or subscribe? Regards, Will Hartung (willh@...)
MikeK and I covered this a few weeks ago. HTML supports the "type" attribute for link elements, but this is not passed as the accept header by HTML browsers when resolving the link. XInclude introduced the "accept" attribute for link elements and that _is_ passed as the accept header when resolving the link. It's a bummer that browsers don't act the same way, but that's the way it goes. There are other workarounds including minting URIs that give the server enough info to override any accept header from the client: <a href="/my-blog/rss">RSS Feed</a> <a href="/my-blog/feed.atom">Atom Feed</a> <a href="/my-blog;pdf">PDF View</a> etc. To paraphrase a man well-known for explaining somewhat unpleasant situations, "You go to the Web with the clients you have..."<g> mca http://amundsen.com/blog/ On Mon, Oct 26, 2009 at 17:34, Will Hartung <willh@...> wrote: > On Mon, Oct 26, 2009 at 2:18 PM, Mike Kelly <mike@...> wrote: >> An HTML document wishing to provide links to the blog 'page' and the blog >> 'feed'. >> >> <a href="/blog">My Blog Page (HTML)</a> >> <a href="/blog">My Blog Feed (Atom)</a> > > Sure, but clearly, for a "normal" web browser, that fact that "/blog" > replies in HTML is sufficient, right? > > If someone actually wanted an Atom feed, they'd set the Accept header > correctly, right? And there's no expectation that a web browser would > do that, is there? > > With Firefox, if you use the in built RSS feed tool, it gives you a > choice of which format to read. I don't use it, but since you, the > user, are telling the browser what format you want, ideally it will > ask for that format. (I have no idea what it does to try and > subscribe). > > But when I see what you have above, identical URLs with different > titles, clearly as a user if I click on either, I'd get the same > result since the actual client (the browser in HTML link click mode) > will be pretty much mostly be interested in HTML, and that's what it > will ask for. 
> > If you were building an automated client, then observation of the > payload by a you (as human) will tell you where you can get an Atom > feed (it says so, documented in English), and ideally you'll ask for > that type explicitly. > > Do you want to add type information to the link to coerce the client > to change its type? > > <a href="/blog" type="text/html">My Blog Page (HTML)</a> > <a href="/blog" type="application/atom+xml">My Blog Page (Atom)</a> > > Do you think that if you clicked on the second that the browser would > view, or subscribe? > > Regards, > > Will Hartung > (willh@...) >
For an example like the one below, most server-driven conneg bets are off. Even feed don't do conneg right. Whether right or wrong, operational/practical reasons like this sometimes require the server to treat each representation as a different resource, and give it a different URI. As Mike just replied "You go to the Web with the clients you have...". Subbu On Oct 26, 2009, at 2:34 PM, Will Hartung wrote: > On Mon, Oct 26, 2009 at 2:18 PM, Mike Kelly <mike@...> > wrote: >> An HTML document wishing to provide links to the blog 'page' and >> the blog >> 'feed'. >> >> <a href="/blog">My Blog Page (HTML)</a> >> <a href="/blog">My Blog Feed (Atom)</a> > > Sure, but clearly, for a "normal" web browser, that fact that "/blog" > replies in HTML is sufficient, right? > > If someone actually wanted an Atom feed, they'd set the Accept header > correctly, right? And there's no expectation that a web browser would > do that, is there? > > With Firefox, if you use the in built RSS feed tool, it gives you a > choice of which format to read. I don't use it, but since you, the > user, are telling the browser what format you want, ideally it will > ask for that format. (I have no idea what it does to try and > subscribe). > > But when I see what you have above, identical URLs with different > titles, clearly as a user if I click on either, I'd get the same > result since the actual client (the browser in HTML link click mode) > will be pretty much mostly be interested in HTML, and that's what it > will ask for. > > If you were building an automated client, then observation of the > payload by a you (as human) will tell you where you can get an Atom > feed (it says so, documented in English), and ideally you'll ask for > that type explicitly. > > Do you want to add type information to the link to coerce the client > to change its type? 
> > <a href="/blog" type="text/html">My Blog Page (HTML)</a> > <a href="/blog" type="application/atom+xml">My Blog Page (Atom)</a> > > Do you think that if you clicked on the second that the browser would > view, or subscribe? > > Regards, > > Will Hartung > (willh@...)
Will Hartung wrote: > Do you want to add type information to the link to coerce the client > to change its type? > > <a href="/blog" type="text/html">My Blog Page (HTML)</a> > <a href="/blog" type="application/atom+xml">My Blog Page (Atom)</a> > http://www.w3.org/TR/html401/struct/links.html#adef-type-A "This attribute gives an advisory hint as to the content type of the content available at the link target address. It allows user agents to opt to use a fallback mechanism rather than fetch the content if they are advised that they will get content in a content type they do not support. Authors who use this attribute take responsibility to manage the risk that it may become inconsistent with the content available at the link target address." Despite that definition, the spec does not say anything about how this should affect the Accept header of the request - so these hyperlinks are no different to the ones I provided which omitted the attribute altogether (from an HTTP perspective, anyway). > Do you think that if you clicked on the second that the browser would > view, or subscribe? > Doesn't matter, we just want the UA to make a request with the correct preferences.
On Oct 26, 2009, at 2:45 PM, Subbu Allamaraju wrote: > Even feed don't do conneg right. Meant to say, "feed aggregators"
Craig McClanahan wrote: > > While this (HTTP callbacks) might be practical in some use cases (like > inside a corporate firewall), it's not going to be feasible when > there are firewalls in the way that will block the server -> client > callback. > Yes, all the usual caveats apply, I'm definitely speaking theoretically here. Walled-garden mobile networks are another real-world application. Although... http://tools.ietf.org/html/draft-lentczner-rhttp-00 (Reverse HTTP) I really like the novel approach of the empty Host header, and the IANA HTTP Upgrade Token registry (the use of which is something I never would have come up with in my wildest dreams, hats off). > > You might want to think about a "comet" based approach, > which is what Ajax based clients often do to exchange messages > bidirectionally. Basically, it involves leaving the incoming (client > -> server) HTTP connection open, and the server suspends it's > "response" until it wants to actually send something. > Yes, I have been thinking about this sort of approach, particularly: http://xmpp.org/extensions/xep-0124.html (BOSH) However, I don't think this solves the original problem: How to apply a server-generated delta to a document cached on the client? The suggestion is to use a media type the client understands, but this is coupling, and such use of media types is at odds with REST (sorry, guys). Media types define link relations and processing rules for the message body; they do not redefine HTTP method semantics -- in this case, by attempting to assign PATCH semantics to GET. The use-case Mike C. describes is interesting. Assuming a very large document cached on the client, the user-perceived performance will be greater if the client can apply a patch, rather than re-transferring a new version of the very large document. So it's a good REST problem. Since the problem area is XML, as Mike A. pointed out, HTTP range requests aren't all that pragmatic a solution. 
Another alternative would be to have the client GET patches from the server as application/xslt+xml (XSLT 2) transformations, which in RESTspeak translates as "applying the optional code-on-demand constraint." A self-applying XSLT transformation is possible, even cacheable. Just not very visible. Degree-of-difficulty-wise, it's no more pragmatic than going for canonical or binary XML formats to make range requests work, unfortunately. Ideally, though, the protocol request method and media-type visibly describe the semantics of the interaction. The REST 'application' here is, "User requests latest diff be applied." The HTTP PATCH method with an xmldiff media-type has exactly the desired interaction semantics, making it the most visible solution. If only it could be done in reverse... using RHTTP, or some method where it's assumed (in RESTspeak) that the requesting component has both client and server connectors. Theoretically, the BOSH technique could be used to hold open an RHTTP connection, such that new deltas are pushed to the client using PATCH instead of using "short" polling, or explicit user request. IOW, I think it may be possible to build a Google Wave-like user experience RESTfully. I haven't worked out the specifics, but this thread is definitely asking the right question. -Eric > > Craig McClanahan > > On Mon, Oct 26, 2009 at 11:43 AM, Eric J. Bowman > wrote: > > > > > > > Michael Crute wrote: > > > > > > > > Client sends a GET request with an If-Range header that specifies > > > the last download date and a Range header that specifies the > > > same. The server could then send back a response of 304 "Not > > > Modified", 206 "Partial Resource" along with the deltas as the > > > body or a 200 with all of the records as the body. > > > > > > > Has anyone else noticed a trend towards HTTP clients that > > incorporate an httpd? 
The client could POST a request for updates > > to the origin server, the origin server could then send a PATCH > > request to the client- side httpd. Thoughts? > > > > -Eric > >
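[Eric's use case — the client holds a large cached document and applies a server-generated delta rather than re-fetching the whole thing — can be illustrated with a toy delta format. Everything here is invented for the sketch (a real solution would use an XML diff media type with PATCH, as discussed above):]

```python
# Sketch: apply a list of {op, key, value} operations to a cached
# document, modeled as a dict. The delta format is made up for the
# example; it only exists to show the interaction, not the media type.

def apply_delta(document, delta):
    """Return a copy of `document` with the delta's operations applied."""
    doc = dict(document)
    for op in delta:
        if op["op"] == "set":
            doc[op["key"]] = op["value"]
        elif op["op"] == "remove":
            doc.pop(op["key"], None)
    return doc
```

The user-perceived win is that only the (small) delta crosses the wire, not the (large) document.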
Mike Kelly wrote: > > Despite that definition, the spec does not say anything about that > should affect the accept header of the request - so these hyperlinks > are no different to the ones I provided which omitted the attribute > altogether (from an HTTP perspective, anyway). > > > Do you think that if you clicked on the second that the browser > > would view, or subscribe? > > > > Doesn't matter, we just want the UA to make a request with the > correct preferences. > What's wrong with Vary and Content-Location headers, with <link> tags? Using link rel='alternate' type='application/atom+xml' in a returned HTML response causes my browser to display a feed button, pressing it returns the Atom representation with a button asking if I want to subscribe. The request for the HTML page returns Vary: Accept, Content-Type: text/html, and Content-Location: /file.html headers. The link tag indicates that the request may be repeated, with Accept: application/atom+xml, which returns Vary: Accept, Content-Type: application/atom+xml, and Content-Location: /file.atom headers. The client has now "discovered" the URIs for both file.html and file.atom, learning how to request each variant as its own resource, or obtain each variant through a properly-formatted request to the /file URI. XHR can be used to provide two separate links in the HTML to /file which will return different representations based on the Accept request header, for clients that execute the script. -Eric
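[The header pattern Eric describes for GET /file can be sketched directly. The variant table and URIs are illustrative; the point is that each response names its own variant via Content-Location and flags the negotiation via Vary:]

```python
# Sketch: conneg on /file that advertises each variant's own URI.
# A client that repeats the request with a different Accept header
# "discovers" the variant URIs from Content-Location.

VARIANTS = {
    "text/html": "/file.html",
    "application/atom+xml": "/file.atom",
}

def negotiate(accept):
    """Return response headers for GET /file with the given Accept."""
    for offered in accept.split(","):
        mtype = offered.split(";")[0].strip()
        if mtype in VARIANTS:
            return {
                "Vary": "Accept",
                "Content-Type": mtype,
                "Content-Location": VARIANTS[mtype],
            }
    # Default representation when nothing matches (e.g. */*).
    return {"Vary": "Accept", "Content-Type": "text/html",
            "Content-Location": "/file.html"}
```

Either variant then remains addressable directly (/file.atom) or through a properly negotiated request to /file.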
Eric: this statement caught me off-guard: "The link tag indicates that the request may be repeated, with Accept: application/atom+xml..." I was not aware of this rule for HTML browsers. Can you point in the direction of the documentation for this? mca http://amundsen.com/blog/ On Mon, Oct 26, 2009 at 18:35, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> Despite that definition, the spec does not say anything about that >> should affect the accept header of the request - so these hyperlinks >> are no different to the ones I provided which omitted the attribute >> altogether (from an HTTP perspective, anyway). >> >> > Do you think that if you clicked on the second that the browser >> > would view, or subscribe? >> > >> >> Doesn't matter, we just want the UA to make a request with the >> correct preferences. >> > > What's wrong with Vary and Content-Location headers, with <link> tags? > Using link rel='alternate' type='application/atom+xml' in a returned > HTML response causes my browser to display a feed button, pressing it > returns the Atom representation with a button asking if I want to > subscribe. > > The request for the HTML page returns Vary: Accept, Content-Type: > text/html, and Content-Location: /file.html headers. The link tag > indicates that the request may be repeated, with Accept: application/ > atom+xml, which returns Vary: Accept, Content-Type: application/atom+ > xml, and Content-Location: /file.atom headers. > > The client has now "discovered" the URIs for both file.html and > file.atom, learning how to request each variant as its own resource, or > obtain each variant through a properly-formatted request to the /file > URI. XHR can be used to provide two separate links in the HTML to /file > which will return different representations based on the Accept request > header, for clients that execute the script. > > -Eric
Or maybe, HTTP is missing some sort of CPATCH method? -Eric
On Mon, Oct 26, 2009 at 11:52 PM, mike amundsen <mamund@...> wrote: > Eric: > > this statement caught me off-guard: > "The link tag indicates that the request may be repeated, with Accept: > application/atom+xml..." > > I was not aware of this rule for HTML browsers. Can you point in the > direction of the documentation for this? It's true that most browsers do implement similar functionality when they spot <link> elements of certain types (and presumably Link: headers eventually too), so what Mike's asking for effectively exists today even if it is not documented as such. Sam > On Mon, Oct 26, 2009 at 18:35, Eric J. Bowman <eric@...> > wrote: > > Mike Kelly wrote: > >> > >> Despite that definition, the spec does not say anything about that > >> should affect the accept header of the request - so these hyperlinks > >> are no different to the ones I provided which omitted the attribute > >> altogether (from an HTTP perspective, anyway). > >> > >> > Do you think that if you clicked on the second that the browser > >> > would view, or subscribe? > >> > > >> > >> Doesn't matter, we just want the UA to make a request with the > >> correct preferences. > >> > > > > What's wrong with Vary and Content-Location headers, with <link> tags? > > Using link rel='alternate' type='application/atom+xml' in a returned > > HTML response causes my browser to display a feed button, pressing it > > returns the Atom representation with a button asking if I want to > > subscribe. > > > > The request for the HTML page returns Vary: Accept, Content-Type: > > text/html, and Content-Location: /file.html headers. The link tag > > indicates that the request may be repeated, with Accept: application/ > > atom+xml, which returns Vary: Accept, Content-Type: application/atom+ > > xml, and Content-Location: /file.atom headers. 
mike amundsen wrote: > > Eric: > > this statement caught me off-guard: > "The link tag indicates that the request may be repeated, with Accept: > application/atom+xml..." > > I was not aware of this rule for HTML browsers. Can you point in the > direction of the documentation for this? > There is no such rule. If there's any documentation, it would be the definition of the @rel='alternate' link relation and @type, and HTTP's definition of conneg. If the alternate URIs are identical to the request URI but indicate different media types than that of the received representation, and in the presence of Vary: Accept headers, logic dictates that repeating the original request with different Accept headers will yield different representations. Preferably with distinct Content-Location headers. I don't understand this notion of content-negotiation within HTML. If I want to explicitly have a user click on a link to the Atom representation of a resource, then I'll link directly to its Content- Location *.atom URI. If the intent is to override the browser's default Accept header, that's a job for scripting not markup. Content negotiation is incredibly easy to override using Content-Location URIs, and client-side code can reliably make this inference by implementing RFC 2616. This is possible because using Accept, Vary, Content-Type and Content- Location headers provides self-descriptive protocol headers, while the semantics of link tags with @rel='alternate' and @type are well- defined, providing a self-documenting API for requesting specific variants. Or, keep it at the protocol level using HTTP Link headers, for example if using conneg between binary/image formats. -Eric
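The server side of Eric's recipe, one generic URI whose responses carry Vary: Accept plus a Content-Location naming the chosen variant's own URI, can be sketched as follows. This is a simplified illustration under assumed conditions: the variant table, URIs, and the naive Accept parsing (no q-values) are all hypothetical.

```python
# Sketch: pick a variant for the generic /file URI and emit the
# self-descriptive headers Eric describes (Vary, Content-Type,
# Content-Location). Real conneg would also honor q-values.
VARIANTS = {
    "text/html": "/file.html",
    "application/atom+xml": "/file.atom",
}

def negotiate(accept_header, default="text/html"):
    """Return response headers for the first acceptable variant."""
    for offered in accept_header.split(","):
        media_type = offered.split(";")[0].strip()  # drop any q-value
        if media_type in VARIANTS:
            return {
                "Vary": "Accept",
                "Content-Type": media_type,
                "Content-Location": VARIANTS[media_type],
            }
    # Nothing matched; fall back to the default representation.
    return {
        "Vary": "Accept",
        "Content-Type": default,
        "Content-Location": VARIANTS[default],
    }
```

A client that records the Content-Location from each response has "discovered" the variant-specific URIs without any out-of-band documentation.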
Sam/Eric: This leads me to understand that the HTML5 spec statement: "The type attribute gives the MIME type of the linked resource. It is purely advisory. " and "User agents must not consider the type attribute authoritative..." [1] is only part of the story. I've done a bit of digging, but have not yet found any reference to client browser implementations that use this "second try using the type attribute as the accept header" behavior. I'd greatly appreciate anyone who can point me in the proper direction. Thanks. mca http://amundsen.com/blog/ [1] http://dev.w3.org/html5/spec/semantics.html#attr-link-type On Mon, Oct 26, 2009 at 19:10, Sam Johnston <samj@samj.net> wrote: > On Mon, Oct 26, 2009 at 11:52 PM, mike amundsen <mamund@yahoo.com> wrote: >> >> Eric: >> >> this statement caught me off-guard: >> "The link tag indicates that the request may be repeated, with Accept: >> application/atom+xml..." >> >> I was not aware of this rule for HTML browsers. Can you point in the >> direction of the documentation for this? > > It's true that most browsers do implement similar functionality when they > spot <link> elements of certain types (and presumably Link: headers > eventually too), so what Mike's asking for effectively exists today even if > it is not documented as such. > Sam > >> >> On Mon, Oct 26, 2009 at 18:35, Eric J. Bowman <eric@...> >> wrote: >> > Mike Kelly wrote: >> >> >> >> Despite that definition, the spec does not say anything about that >> >> should affect the accept header of the request - so these hyperlinks >> >> are no different to the ones I provided which omitted the attribute >> >> altogether (from an HTTP perspective, anyway). >> >> >> >> > Do you think that if you clicked on the second that the browser >> >> > would view, or subscribe? >> >> > >> >> >> >> Doesn't matter, we just want the UA to make a request with the >> >> correct preferences. >> >> >> > >> > What's wrong with Vary and Content-Location headers, with <link> tags? 
Eric: Thanks for the follow-up.

<snip> I don't understand this notion of content-negotiation within HTML. </snip>

I agree 100%. My interest is not in HTML but in user-agents; for this thread I was focused on the behavior of common Web browsers. Specifically, I am interested in effective ways to support multiple representations of the same resource (text, image, PDF, etc.). In some cases, these varying representations could be meaningful to the same user-agent (such as a common browser) in a single display. For example, a representation that will display a text version of analysis data along with a graphic pie chart of the same data.

To someone who prefers server-driven conneg, the same resource URI could be used for both displays in the same document by way of external resource links w/ different content-type metadata (usually via the Accept header). This works fine in user-agents that support the XLink element type model, where the attributes are considered definitive. Common web browsers don't use this model, and that leads to a need for a different solution. There are lots of reasonable ways to accomplish this for web browsers, of course, and you've detailed more than one of them.

Thanks again.

mca
http://amundsen.com/blog/

On Mon, Oct 26, 2009 at 19:25, Eric J. Bowman <eric@...> wrote: > mike amundsen wrote: >> >> Eric: >> >> this statement caught me off-guard: >> "The link tag indicates that the request may be repeated, with Accept: >> application/atom+xml..." >> >> I was not aware of this rule for HTML browsers. Can you point in the >> direction of the documentation for this? >> > > There is no such rule. If there's any documentation, it would be the > definition of the @rel='alternate' link relation and @type, and HTTP's > definition of conneg.
Mike, I think you're looking for Feed Autodiscovery<http://www.google.com/search?q=feed+autodiscovery> . Sam On Tue, Oct 27, 2009 at 12:25 AM, mike amundsen <mamund@...> wrote: > Sam/Eric: > > This leads me to understand that the HTML5 spec statement: "The type > attribute gives the MIME type of the linked resource. It is purely > advisory. " and "User agents must not consider the type attribute > authoritative..." [1] is only part of the story. > > I've done a bit of digging, but have not yet found any reference to > client browser implementations that use this "second try using the > type attribute as the accept header" behavior. > > I'd greatly appreciate anyone who can point me in the proper direction. > > Thanks. > > mca > http://amundsen.com/blog/ > > [1] http://dev.w3.org/html5/spec/semantics.html#attr-link-type > > > On Mon, Oct 26, 2009 at 19:10, Sam Johnston <samj@...> wrote: > > On Mon, Oct 26, 2009 at 11:52 PM, mike amundsen <mamund@...> > wrote: > >> > >> Eric: > >> > >> this statement caught me off-guard: > >> "The link tag indicates that the request may be repeated, with Accept: > >> application/atom+xml..." > >> > >> I was not aware of this rule for HTML browsers. Can you point in the > >> direction of the documentation for this? > > > > It's true that most browsers do implement similar functionality when they > > spot <link> elements of certain types (and presumably Link: headers > > eventually too), so what Mike's asking for effectively exists today even > if > > it is not documented as such. > > Sam > > > >> > >> On Mon, Oct 26, 2009 at 18:35, Eric J. Bowman <eric@...> > >> wrote: > >> > Mike Kelly wrote: > >> >> > >> >> Despite that definition, the spec does not say anything about that > >> >> should affect the accept header of the request - so these hyperlinks > >> >> are no different to the ones I provided which omitted the attribute > >> >> altogether (from an HTTP perspective, anyway). 
mike amundsen wrote: > > Sam/Eric: > > This leads me to understand that the HTML5 spec statement: "The type > attribute gives the MIME type of the linked resource. It is purely > advisory. " and "User agents must not consider the type attribute > authoritative..." [1] is only part of the story. > Right, the only authoritative mime type is what the Content-Type response header says. Just because I'm asking for PNG-only doesn't mean the server won't give me a GIF, which might be named 'image.png', or redirected to 'image.gif' or whatever else the server wants to do with the client request. This isn't a good reason not to try anyway! > > I've done a bit of digging, but have not yet found any reference to > client browser implementations that use this "second try using the > type attribute as the accept header" behavior. > That's just me, assuming that if Opera can introspect a link tag and provide me a button to click on for an alternate representation, then that behavior can just as easily be automated -- for example, a user preference directing a client to always load an Atom alternate, where available, instead of defaulting to the HTML. I just see this as standard conneg as specified in HTTP. If a client follows a <link> to the same URI without altering its Accept header, then it receives the HTML page again, with the same Content- Location URI it received before. Failing to at least try with the specified @type in the <link> tag would be, in my mind, broken client behavior. I know it's close to Halloween and all, but conneg really isn't so scary... ;-) -Eric
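The automation Eric imagines, a user preference that makes the client follow an advertised Atom alternate instead of defaulting to the HTML, is easy to state as code. A minimal sketch, with hypothetical inputs; the (type, href) pairs would come from the page's <link rel='alternate'> tags:

```python
# Sketch: decide which Accept header to send on a repeat request,
# given the alternates advertised by the page and a user preference.
def choose_accept(alternates, preferred="application/atom+xml",
                  default="text/html"):
    """alternates: (type, href) pairs from <link rel='alternate'> tags."""
    for media_type, _href in alternates:
        if media_type == preferred:
            return preferred   # re-request, asking for the preferred type
    return default             # no preferred alternate; keep the HTML
```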
Sam:
Thanks for the pointer. IIRC, HTML5 adds the "feed" link relation type
("rel") to make autodiscovery possible. This is different than
honoring the "type" attribute.
mca
http://amundsen.com/blog/
On Mon, Oct 26, 2009 at 19:42, Sam Johnston <samj@...> wrote:
> Mike,
> I think you're looking for Feed Autodiscovery.
> Sam
mike amundsen wrote: > > I agree 100%. My interest is not in HTML but in user-agents; for this > thread I was focused on the behavior of common Web browsers. > Specifically, I am interested in effective ways to support multiple > representations of the same resource (text, image, PDF, etc.). In some > cases, these varying representations could be meaningful to the same > user-agent (such as a common browser) in a single display. >

Yes, I've been working on exactly that problem for a few years now... I think the domain-root URI should use conneg or redirection depending on the Accept header, plus the OPTIONS method, to handle any existing or future media type, i.e. universal discovery vs. well-known service URIs. I also deal with the problem of multiple variants using the same media type, instead of limiting to one variant per media type.

-Eric
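Eric's domain-root idea, OPTIONS plus conneg so clients discover capabilities instead of guessing well-known service URIs, might be sketched like this. Everything here is hypothetical, including the Accept-Variants header name, which is invented purely for illustration (it is not a registered HTTP header):

```python
# Sketch: an OPTIONS-style handler for the domain root that advertises
# which media types a client may negotiate for. The "Accept-Variants"
# header name is made up for this example.
def options(uri):
    variants = {
        "/": ["text/html", "application/atom+xml", "application/json"],
    }
    return {
        "Allow": "GET, HEAD, OPTIONS",
        "Accept-Variants": ", ".join(variants.get(uri, [])),
    }
```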
Mike,
You raise a good point - "reverse engineering" the application of a
particular mime type from what it is most commonly used for seems inelegant
at best and severely limiting/dangerous at worst. HTML5 has got it right in
that case.
For example, I may want to represent a collection using Atom, but if the
client assumes that this is instead a feed of recent entries then I'm in
trouble. As a better example, imagine I want to advertise an icon (typically
1:1 aspect ratio according to Atom) and a logo (2:1 aspect ratio) but both
are PNG... then I have no option but to resort to link relations. In this
case though, the resource I'm pointing at will usually be "sufficiently
different" as to justify a separate URL - e.g. a feed of recent articles is
not [usually] the same as a site's home page - so I shouldn't need to start
thinking about embedding both the relation *and* the type into the URL.
That doesn't dispense with the requirement for servers to specify that
certain type(s) are available and the combination of one (or more?) type=""
attributes and the use of the Accept: headers still seems the most sensible
way to do this. Ideally we'd be able to specify multiple type attributes
and/or a space-separated list (à la rel), but that does appear to be
non-compliant.
Sam
On Tue, Oct 27, 2009 at 12:51 AM, mike amundsen <mamund@...> wrote:
> Sam:
>
> Thanks for the pointer. IIRC, HTML5 adds the "feed" link relation type
> ("rel") to make autodiscovery possible. This is different than
> honoring the "type" attribute.
>
> mca
> http://amundsen.com/blog/
Sam:
Yep, this is the way HTML is and, thus, common browsers too. And they both
feed into each other as time goes on.
Of course, there are many things that are impractical for me to implement in
the custom desktop applications I build that use HTTP as the app protocol.
I learn to work around them; sometimes modifying the server resource and/or
representation accordingly. Same goes for the common browser (HTML) user
agent.
As long as I keep these abstractions (resource and representation) clear of
the data I store on the server, all works fine; even when I need to have
unique resources and/or representations in order to better serve a
particular user agent.
mca
http://amundsen.com/blog/
On Mon, Oct 26, 2009 at 20:35, Sam Johnston <samj@...> wrote:
>
>
> Mike,
>
> You raise a good point - "reverse engineering" the application of a
> particular mime type from what it is most commonly used for seems inelegant
> at best and severely limiting/dangerous at worst. HTML5 have got it right in
> that case.
>
> For example, I may want to represent a collection using Atom, but if the
> client assumes that this is instead a feed of recent entries then I'm in
> trouble. As a better example, imagine I want to advertise an icon (typically
> 1:1 aspect ratio according to Atom) and a logo (2:1 aspect ratio) but both
> are PNG... then I have no option but to resort to link relations. In this
> case though, the resource I'm pointing at will usually be "sufficiently
> different" as to justify a separate URL - e.g. a feed of recent articles is
> not [usually] the same as a site's home page - so I shouldn't need to start
> thinking about embedding both the relation *and* the type into the URL.
>
> That doesn't dispense with the requirement for servers to specify that
> certain type(s) are available and the combination of one (or more?) type=""
> attributes and the use of the Accept: headers still seems the most sensible
> way to do this. Ideally we'd be able to specify multiple type attributes
> and/or a space separated list (ala rel) but that does appear to be
> non-compliant.
>
> Sam
On Oct 26, 2009, at 10:19 AM, Will Hartung wrote: > Thinking about this a little more, I have a question I'd like > clarified. > > We talked about unique naming and how there shouldn't be /resource.xml > and /resource.json, but rather /resource and two representations based > on the Accept header. Actually, there should be all three if you want a negotiated resource. It is important to understand that these are three *different* resources (resource != file). Each identifier corresponds to a unique semantic and mapping over time. > But in hindsight, what's the difference between > > GET /resource.xml > GET /resource.json > > and > > GET /resource > Accept: application/xml > > GET /resource > Accept: application/json > > Semantically, the queries can be identical. Logically, one would > ASSUME they're identical. The former are requests on two different resources. The latter are two varying requests on one resource. The only difference, in my opinion, is that the single varying resource makes for a better bookmark because it is less susceptible to both differences in user agent capabilities (different accept lists) and changes in supported media types over time. It is not, however, a replacement for the media-specific resources and their corresponding URIs. A better protocol would tell the client the available variants and how to get them, preferably in a way that doesn't impact latency (trailers). Yes, that was in HTTP/1.1's original design. The media-specific resources are also useful for the apps that don't want to negotiate, especially those performing remote authoring or versioning. ....Roy
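Roy's distinction, three resources rather than one file with three names, can be made concrete with a routing sketch. A minimal illustration; the URIs and representations are hypothetical:

```python
# Sketch: /resource.xml and /resource.json are media-specific
# resources with fixed representations, while /resource is a third,
# negotiated resource that varies on the Accept header.
def get(uri, accept=None):
    representations = {
        "application/xml": "<resource/>",
        "application/json": '{"resource": {}}',
    }
    if uri == "/resource.xml":      # media-specific resource
        return ("application/xml", representations["application/xml"])
    if uri == "/resource.json":     # media-specific resource
        return ("application/json", representations["application/json"])
    if uri == "/resource":          # negotiated resource; varies on Accept
        chosen = accept if accept in representations else "application/xml"
        return (chosen, representations[chosen])
    raise KeyError(uri)
```

The two fixed URIs serve clients that don't negotiate (remote authoring, versioning), while /resource makes the sturdier bookmark, exactly the trade-off Roy describes.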
On Wed, Oct 21, 2009 at 8:43 AM, Michael Crute <mcrute@...> wrote:

> I'm writing a RESTful web service to update content on a mobile
> device. We are currently using the If-Modified-Since header along with
> 304 "Not Modified" response codes to ensure that the device does not
> download the file more often than is absolutely necessary, but I'd like
> to go a step further and only provide the changed records to the device
> (this is an XML file, FWIW). After combing over the HTTP spec and not
> finding much on Google, I think this might be a valid approach:
>
> The client sends a GET request with an If-Range header that specifies
> the last download date and a Range header that specifies the same. The
> server could then send back a 304 "Not Modified", a 206 "Partial
> Content" with the deltas as the body, or a 200 with all of the records
> as the body.

I think there might be some confusion as to what I was trying to do. The "resource" in question here is a web service that knows how to parse headers and query a database to return only the records since the last-modified date submitted by the device; the client contains the logic to take a partial XML file (partial in the sense that it does not contain all records) and merge it into its locally cached copy. With that in mind, the patch approach doesn't work as well, and canonical XML is a little overkill for this project.

After re-reading the header portion of the HTTP spec, it would seem perfectly legitimate for me to send back a 206 response to a GET request that contains an If-Modified-Since header. The body of that reply would be the changed records since the last time the client downloaded the resource.

-mike

--
________________________________
Michael E. Crute
http://mike.crute.org

God put me on this earth to accomplish a certain number of things. Right now I am so far behind that I will never die. --Bill Watterson
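Mike's proposed server behavior (full body, nothing changed, or deltas only) can be sketched as follows. The record store, timestamps, and payloads are all invented for illustration; the contested design point (returning 206 with deltas for an If-Modified-Since GET) is kept exactly as he describes it:

```python
from datetime import datetime, timezone

# hypothetical store: (last-modified time, record payload)
RECORDS = [
    (datetime(2009, 10, 1, tzinfo=timezone.utc), "<record id='1'/>"),
    (datetime(2009, 10, 20, tzinfo=timezone.utc), "<record id='2'/>"),
]

def respond(if_modified_since=None):
    """Choose a status and body for a GET, given a parsed If-Modified-Since."""
    if if_modified_since is None:
        # naive client, no conditional header: full resource
        return 200, [payload for _, payload in RECORDS]
    changed = [p for ts, p in RECORDS if ts > if_modified_since]
    if not changed:
        return 304, []      # Not Modified: no body at all
    # the contested part of the thread: deltas as a partial body
    return 206, changed
```

As the later replies point out, the weak spot is that nothing in this 206 response tells a generic client it holds a delta rather than the whole resource; that knowledge is out of band.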
Hello Will.

The thread is looong and taking into account too many HTTP specifics, which, if you read my comments, I tend to stay away from. Also, it seems Roy already answered what I was going to write. Anyway, I will write it :D

--- In rest-discuss@yahoogroups.com, Will Hartung <willh@...> wrote:
>
> Thinking about this a little more, I have a question I'd like clarified.
>
> We talked about unique naming and how there shouldn't be /resource.xml
> and /resource.json, but rather /resource and two representations based
> on the Accept header.
>

The naming of /resource.ext should not mean anything to the client, but taking into account the human part, it will lead the developer to think the .ext part of the URI is a type indicator.

> But in hindsight, what's the difference between
>
> GET /resource.xml
> GET /resource.json
>
> and
>
> GET /resource
> Accept: application/xml
>
> GET /resource
> Accept: application/json
>

Well, the difference, in the REST context, is clear: they are three different URIs, and thus in the eyes of the client they are three different resources (although not necessarily three different ones!). What I mean is, you have there three different "names" or IDs for resources, and to the client they are three resources, period. Now, since a resource can have more than one name, they may be the same resource. Note that I'm not talking about files here, but resources.

In fact, there may be only one resource with two representations (a service that generates XML or JSON on request), and each URI approaches the same resource in a different way. But all of that is hidden in the implementation, and the client does not know it. It may not mean a major difference from your insider view, but from the client's view it is more complicated. See?

BTW, that kind of service allows for expansion and evolution: you can add new representations whenever you like. But, in that case, the generic /resource plus Accept is the best choice.
> Semantically, the queries can be identical. Logically, one would
> ASSUME they're identical.
>

"Queries" is a word that itches here, but that is already mentioned somewhere else. To the client, it may be requesting any of three URIs, no queries, and each URI will return something different. One will return only the XML representation, another only the JSON representation (and that may not be related to the URI composition!), and the last one allows negotiation of the type. Simple.

> From a caching point of view, they are separate requests. A cache that
> has the XML representation won't be able to answer a JSON query, so
> both have a similar caching impact in terms of ensuring that the cache
> is properly synced with both representations.
>

To a cache system, it depends. For instance, in the DB world: if your cache system identifies cached results by the exact query, then any two SQL statements that differ in one space, or in the order of the WHERE clauses, will create a copy of the results in the cache. Now, if the cache is an intelligent one, it will identify the results by their properties rather than by the SQL statement that generated them. So, later, another SQL statement that requires a subset of the results already in cache will make the cache activate and avoid another DB call! Even more, the cache can see if part of the query is already answered in cache, and then not perform the complete query, since it already has some of the data.

In this example, fear not for the URIs, but for your cache's intelligence. Taking into account that you can have 100 URIs all pointing to the same resource, a cache that works against a canonical resource name will have no problem, but if it works with the URI, it will load 100 copies of the same thing.

Cheers!
William Martinez Pomares
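William's point about 100 aliases producing 100 cached copies can be shown with a toy cache. Everything here is invented for illustration (the class, the alias table, the URIs), and it assumes, as his example does, that the aliased URIs really do serve the same representation — a cache varying on Accept could not collapse entries this way:

```python
class CanonicalCache:
    """Toy cache keyed by a canonical resource name instead of the raw URI."""

    def __init__(self, aliases):
        self.aliases = aliases  # URI -> canonical resource name
        self.store = {}

    def get(self, uri, fetch):
        key = self.aliases.get(uri, uri)  # fall back to the URI itself
        if key not in self.store:
            self.store[key] = fetch(uri)  # only one copy per resource
        return self.store[key]

fetched = []
def fetch(uri):
    """Stand-in for an origin-server request; records each real fetch."""
    fetched.append(uri)
    return "representation"

cache = CanonicalCache({"/r": "res-1", "/r.xml": "res-1"})
cache.get("/r", fetch)
cache.get("/r.xml", fetch)  # alias: served from cache, no second fetch
```

A URI-keyed cache would have called `fetch` twice here; the canonical key makes the second request a hit.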
Michael Crute wrote:
>
> With that in mind the patch approach doesn't work as well and canonical
> XML is a little overkill for this project.
>

I understand exactly what you're after. The problem here is that your proposed solution may be perfectly valid HTTP, but it isn't REST. I don't believe REST is the solution to all problems. I do believe it's important to understand where a system deviates from REST, in order to evaluate the system in terms of desired properties. Understanding what constraints aren't being met, and what the consequences are, is what software architecture is all about.

Your solution may very well be best suited to your needs -- I am not in a position to weigh the benefits of long-term scaling vs. short-term development costs. If you do need scaling, the higher deployment cost of a REST architecture (in this case) is likely offset. You surely have other criteria I'm not aware of. Understanding your system as it relates to REST allows you to define your solution as its own architectural style (set of constraints).

> After re-reading the header portion of the HTTP spec it would seem
> perfectly legitimate for me to send back a 206 response to a GET
> request that contains an If-Modified-Since header. The body of that
> reply would be the changed records since the last time the client
> downloaded the resource.
>

If I curl this resource, the application state I receive in response tells me... what? So, I need some sort of black-box client that knows how to merge this into some previous application state, without following hypertext in the response? This indicates that out-of-band information is driving the application. I'm not saying it can't be done, I'm just saying the hypertext constraint isn't being applied.

What if your solution were to return an XSLT 2 patch, instead? When the update is requested, the server writes the request REFERER to the response as an XSLT document() call.
Instead of being a black box, the client implements the linking and processing rules of a well-known media type (application/xslt+xml). The URI is changed to indicate a version; the document() call loads the previous application state from the client component's cache connector, transforming it into the new steady-state the user requested. Using curl, I can see the REFERER hyperlink in the document() function of the returned XSLT, and I can tell by the media type that the response is to be executed by an XSLT 2 processor. Nothing out-of-band is driving the interaction there, so the hypertext constraint is successfully applied.

I do know from working with XSLT, though, that generating dynamic XSLT on-the-fly at the server is expensive to develop, and perhaps overly complex a solution for any problem. Which brings me back to CPATCH, or using RHTTP to PATCH in reverse.

Anyway, you're in one of those ill-defined areas of REST here, where there's more theory than hands-on experience. So do what best suits your project's needs, with an understanding of what constraint(s) you aren't applying and why. But do keep an eye on the work being done in this area, like RHTTP, because in a few more years these things may have gone from cutting-edge experiments to common practice. Then, a RESTful solution to your problem will be less expensive to develop and more widely understood -- both good reasons not to use REST to solve your specific problem today.

-Eric
On Tue, Oct 27, 2009 at 1:55 PM, Eric J. Bowman <eric@...> wrote:
> I understand exactly what you're after. The problem here is that your
> proposed solution may be perfectly valid HTTP, but it isn't REST.

That's an interesting point. I'm mostly concerned with not totally abusing the HTTP spec and implementing something really non-standard that no general-purpose client understands. Technically a "naive" client that had no idea of how the If-Modified-Since header was employed (and thus didn't use it) would just get the resource in its entirety. But I guess that isn't strictly RESTful, and perhaps a bad idea in general, because a naive client that did submit an If-Modified-Since header might not understand what to do with the response body.

I guess I will just stick to sending the entire resource if it has changed and not deal with partial resources and merging, as all of the other options seem overkill for this application. Your idea of XSLT patching is pretty neat though.

-mike

--
________________________________
Michael E. Crute
http://mike.crute.org

God put me on this earth to accomplish a certain number of things. Right now I am so far behind that I will never die. --Bill Watterson
Thanks Erling, for your message.

You know, I've read the post several times, and I still don't find it so humorous, except for the names of the categories. I don't find them humiliating either (at least, that was not the intention). I'm still surprised by the reaction. And maybe that is why many people overlooked the post as not interesting, failing to start a good discussion.

Anyway, as you say, it may help someone investigate further whether what they believe is the complete thing or not. And that is a plus.

Cheers!
William Martinez Pomares

--- In rest-discuss@yahoogroups.com, Erling Wegger Linde <erlingwl@...> wrote:
>
> Hi,
>
> First of all, I also interpreted this as humorous. (Personally, I'm probably
> something like a "loose coupling addict".)
>
> However, using humor to achieve a (serious) goal is a good thing.
>
> I imagine one could have all sorts of "so you think you know
> REST" checklists / personal assessment tools (you can probably take this too
> far..). Having some way of giving developers feedback on which "fan types"
> they might be could be a good thing. Let's say you end up as a
> "URI-juggler"; you might realize that you should read up on HATEOAS etc. And
> by using humor, it might encourage more people to do so etc. etc.
>
> Cheers,
> Erling
>
One thing that keeps bugging me....

Suppose I have an order-accepting resource /order-processor-a, and the client has discovered that it accepts application/order+xml (assuming the type is a standard type). Order submissions would be done with

(Case A:)

POST /order-processor-a
Content-Type: application/order+xml

<order>
<item>A</item>
<item>B</item>
</order>

Now suppose I had another order processor that accepts submission of orders in the form of form data, e.g.

(Case B:)

POST /order-processor-b
Content-Type: application/x-www-form-urlencoded

item=A&item=B

Isn't Case B violating REST's message self-descriptiveness constraint, because the meaning of the message depends on the knowledge that the recipient is an order processor? IOW, an observer could only figure out the meaning if it knew the past interactions, and not from the message itself.

Is application/x-www-form-urlencoded as bad a choice as application/xml? In fact, is any general media type (e.g. text/uri-list) a violation of the message self-descriptiveness constraint?

Thanks,

Jan
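Jan's Case B body illustrates his point neatly: any component that knows the media type can decode the *syntax*, but nothing in the message says what the fields mean. A quick Python check (the payload is from the message above; the interpretation in the comments is my gloss on Jan's question, not part of any spec):

```python
from urllib.parse import parse_qs

body = "item=A&item=B"   # Jan's Case B payload
parsed = parse_qs(body)  # the media type fixes the syntax of the body...

# parse_qs yields a name -> list-of-values mapping, so any intermediary
# can recover the structure. But nothing here says these are order line
# items rather than, say, search terms; that meaning lives in the target
# resource -- which is exactly Jan's self-descriptiveness concern.
```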
Jan:
If I understand your post, I think you might be assuming that the
"self-descriptive" part refers to the body of the message.
Here's a couple things for you to consider.
1 - self-descriptive messages are ones that "stand alone"; they are not
dependent on previous or subsequent messages exchanged between the
parties.
2 - the "message" is not just the body, it's the entire _message_ including
+ the operation + URI (POST /order-processor-a)
+ the control data (all the header information)
+ the entity body (actual representation sent)
If the server announces that it understands the
application/x-www-form-urlencoded media-type for the /order-processor-b
resource and the client also understands that media-type, that is all
that is needed. It might be necessary for clients and servers to
exchange additional out-of-band information on the proper way to form
the message for a particular media type (element names, order, etc.),
but that descriptive information is something else.
Finally, the details of the entity body are only interesting to the
two parties involved in the message (sender, receiver). From the
intermediaries' POV, they need to know nothing about this body - in fact
should not be peeking into the body at all.
mca
http://amundsen.com/blog/
On Thu, Oct 29, 2009 at 09:34, Jan Algermissen <algermissen1971@...> wrote:
> One thing that keeps bugging me....
>
> Suppose I have an order accepting resource /order-processor-a and the
> client has discovered that it accepts application/order+xml (assuming
> the type being a standard type). Order submissions would be done with
>
> (Case A:)
>
> POST /order-processor-a
> Content-Type: application/order+xml
>
> <order>
> <item>A</item>
> <item>B</item>
> </order>
>
> Now suppose I had another order processor that accepts submission of
> orders in the form of form data, e.g.
>
>
> (Case B:)
>
> POST /order-processor-b
> Content-Type: application/x-www-form-urlencoded
>
> item=A&item=B
>
>
> Isn't case B violating REST's message self descriptiveness constraint
> because the meaning of the message depends on the knowledge that the
> recipient is an order processor? IOW, an observer could only figure
> out the meaning if it knew the past interactions and not from the
> message itself.
>
> Is application/x-www-form-urlencoded as bad a choice as
> application/xml? In fact, is any general media type (e.g. text/uri-list) a
> violation of the message self descriptiveness constraint?
>
> Thanks,
>
> Jan
>
As far as the protocol is concerned, a message using general-purpose media types is still self-descriptive. For instance, HTTP does not depend on the meaning of an application/x-www-form-urlencoded body of a message. As far as HTTP and intermediaries are concerned, that media type is just an identifier for the format of the message.

Subbu

On Oct 29, 2009, at 6:34 AM, Jan Algermissen wrote:

> One thing that keeps bugging me....
>
> Suppose I have an order-accepting resource /order-processor-a and the
> client has discovered that it accepts application/order+xml (assuming
> the type is a standard type). Order submissions would be done with
>
> (Case A:)
>
> POST /order-processor-a
> Content-Type: application/order+xml
>
> <order>
> <item>A</item>
> <item>B</item>
> </order>
>
> Now suppose I had another order processor that accepts submission of
> orders in the form of form data, e.g.
>
> (Case B:)
>
> POST /order-processor-b
> Content-Type: application/x-www-form-urlencoded
>
> item=A&item=B
>
> Isn't Case B violating REST's message self-descriptiveness constraint
> because the meaning of the message depends on the knowledge that the
> recipient is an order processor? IOW, an observer could only figure
> out the meaning if it knew the past interactions and not from the
> message itself.
>
> Is application/x-www-form-urlencoded as bad a choice as
> application/xml? In fact, is any general media type (e.g. text/uri-list)
> a violation of the message self-descriptiveness constraint?
>
> Thanks,
>
> Jan
In addition to the good comments you've already received from Mike and Subbu, one other detail to consider is this: you do not *have* to use different URIs for order processing in order to process two different media types. It is also reasonable to have the same URI handle both (presumably doing a media-type-specific conversion to internal data structures, followed by common processing that doesn't care what the incoming format was).

Craig McClanahan

On Thu, Oct 29, 2009 at 6:34 AM, Jan Algermissen <algermissen1971@...> wrote:
>
> One thing that keeps bugging me....
>
> Suppose I have an order-accepting resource /order-processor-a and the
> client has discovered that it accepts application/order+xml (assuming
> the type is a standard type). Order submissions would be done with
>
> (Case A:)
>
> POST /order-processor-a
> Content-Type: application/order+xml
>
> <order>
> <item>A</item>
> <item>B</item>
> </order>
>
> Now suppose I had another order processor that accepts submission of
> orders in the form of form data, e.g.
>
> (Case B:)
>
> POST /order-processor-b
> Content-Type: application/x-www-form-urlencoded
>
> item=A&item=B
>
> Isn't Case B violating REST's message self-descriptiveness constraint
> because the meaning of the message depends on the knowledge that the
> recipient is an order processor? IOW, an observer could only figure
> out the meaning if it knew the past interactions and not from the
> message itself.
>
> Is application/x-www-form-urlencoded as bad a choice as
> application/xml? In fact, is any general media type (e.g. text/uri-list)
> a violation of the message self-descriptiveness constraint?
>
> Thanks,
>
> Jan
>
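Craig's suggestion — one URI, dispatch on Content-Type, then a common processing path — can be sketched like this. The function name, the 415/201 choices, and the stub parsers are my illustrative assumptions; only the two media types and the payloads come from Jan's examples:

```python
from urllib.parse import parse_qs
import xml.etree.ElementTree as ET

def handle_order(content_type, body):
    """One order-processing URI accepting two media types: convert each
    format to a common internal structure (a list of item names), then run
    processing that doesn't care what the incoming format was."""
    if content_type == "application/x-www-form-urlencoded":
        items = parse_qs(body).get("item", [])
    elif content_type == "application/order+xml":
        items = [e.text for e in ET.fromstring(body).findall("item")]
    else:
        return 415, None            # Unsupported Media Type
    # common processing path, independent of the wire format
    return 201, sorted(items)
```

Both of Jan's payloads land in the same internal representation, which is Craig's point: the media type only governs the conversion step.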
Hi,

suppose the following media types exist:

- application/procurement-order for orders
- application/procurement-orderrejection for order rejections

Also suppose the client knows there is a resource at /order-processor that accepts orders in the application/procurement-order media type.

What if the client submits an order and the server wants to reject the order (maybe because the requested items are permanently out of stock)? What return code would the server use, and does it make sense to send the order rejection document as the body of the (error) response?

Example:

->
POST /order-processor
Content-Type: application/procurement-order

<the order XML goes here>

409 Conflict
Content-Type: application/procurement-orderrejection

<xml of the order rejection document goes here, maybe telling the client how to fix the problem (e.g. suggesting similar goods that are in stock)>

Does that make sense? And if not, how else to do it?

Hmm, an alternative would be to create an order request resource, not just an order resource, and make the rejection part of the representation of the order request:

POST /order-processor
Content-Type: application/procurement-order

201 Created
Location: /order-processor/order-requests
Content-Type: application/order-request

<XML representing the state of the order request, which at the moment is 'rejected'>

A client could then PUT/PATCH the order to fix the problem according to suggestions made in the order-request response.

Does that sound better?

Jan
Jan:

One approach is to allow for the creation of an order that always results in an order resource that has a "pending" status element.

POST /orders
..
<order />

201 Created
Location: /orders/123

You could then support an operation that results in a change in the status of that order. This could be handled a number of ways:

1. Simply allow a PUT to modify the order status element:
PUT /orders/123

2. Support the creation of approval resources:
POST /order-approval
...
<order id="123" status="approved" concurrency-tag="a1s2d3f4g5" />

303 See Other
Location: /orders/123 (w/ the status updated)

As for error codes on failed approvals/orders, along with 409 (or just 400) check out the WebDAV status code extensions [1]. I've used 422 and 424 in similar situations.

mca
http://amundsen.com/blog/

[1] http://www.webdav.org/specs/rfc2518.html#status.code.extensions.to.http11

On Fri, Oct 30, 2009 at 19:01, Jan Algermissen <algermissen1971@...> wrote:
> Hi,
>
> suppose the following media types do exist:
>
> - application/procurement-order for orders
> - application/procurement-orderrejection for order rejections
>
> also suppose the client knows there is a resource at /order-processor
> that accepts orders in application/procurement-order media type.
>
> What if the client submits an order and the server wants to reject
> the order (maybe because the requested items are permanently out
> of stock)? What return code would the server use and does it make
> sense to send the order rejection document as the body of the
> (error-)response?
>
> Example:
>
> ->
> POST /order-processor
> Content-Type: application/procurement-order
>
> <the order XML goes here>
>
> 409 Conflict
> Content-Type: application/procurement-orderrejection
>
> <xml of order rejection document goes here, maybe
> telling the client how to fix the problem (e.g.
> suggesting similar goods that are in stock)>
>
> Does that make sense? And if not - how else to do it?
> Hmm, an alternative would be to create an order request resource and
> not just an order resource and make the rejection part of the
> representation of the order request:
>
> POST /order-processor
> Content-Type: application/procurement-order
>
> 201 Created
> Location: /order-processor/order-requests
> Content-Type: application/order-request
>
> <XML representing the state of the order-request, which at the moment is
> 'rejected'>
>
> A client could then PUT/PATCH the order to fix the problem according to
> suggestions made in the order-request response.
>
> Does that sound better?
>
> Jan
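Mike's pending-then-approve workflow can be modeled with a toy in-memory service. The class, the id scheme, and the return values are invented for illustration; the protocol shape (POST creates a pending order at a new Location, approval answers 303 See Other back to the order) follows his sketch:

```python
class OrderService:
    """Toy model of the pending-order flow: create, then change status."""

    def __init__(self):
        self.orders = {}
        self.next_id = 100

    def create(self, order_doc):
        """POST /orders: always succeeds, order starts out 'pending'."""
        self.next_id += 1
        uri = f"/orders/{self.next_id}"
        self.orders[uri] = {"order": order_doc, "status": "pending"}
        return 201, uri                     # 201 Created + Location

    def approve(self, uri):
        """POST to an approval resource: flip status, redirect to the order."""
        if uri not in self.orders:
            return 404, None
        self.orders[uri]["status"] = "approved"
        return 303, uri                     # 303 See Other -> updated order
```

The appeal of this shape, as the thread notes, is that rejection is not an HTTP error at all: the order resource simply carries a status the client can inspect and PUT/PATCH against.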
Tim,

On Oct 31, 2009, at 12:56 AM, Tim Bray wrote:

> On Fri, Oct 30, 2009 at 4:01 PM, Jan Algermissen
> <algermissen1971@...> wrote:
>
>> What if the client submits an order and the server wants to reject
>> the order (maybe because the requested items are permanently out
>> of stock)? What return code would the server use and does it make
>> sense to send the order rejection document as the body of the
>> (error-)response?
>
> I've usually felt that when things go wrong, text/plain is the best
> way to ship off the error message. Who knows what kind of client
> you've got and whether or not they can display any format in
> particular. I'm not sure it's worthwhile inventing a new media-type
> for this. -T

Yes, I usually agree. I forgot to say that with the example I wanted to stress the point that the document types for order and order-rejection already existed as part of the procurement 'model', and to ask whether it would make sense to use the order-rejection business document as the body of an error response.

HTTP *is* the application protocol, and regarding error indication that means we do not have a return code for 'order rejected'. OTOH, a client must be able to understand the domain semantics to distinguish rejected orders from failed communication (instead of just letting some human end user figure it out from the plain-text error response).

When I looked at the HTTP error codes I figured that 409 Conflict includes the notion of 'failure, but the client can fix it'. I think this leaves room for the use of a domain-specific error document that provides suggestions on how to fix the error. You would not send that with a 400, because there the problem is at the technical level.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Oct 31, 2009, at 2:58 AM, mike amundsen wrote:

> Jan:
>
> One approach is to allow for the creation of an order that always
> results in an order resource that has a "pending" status element.
>
> POST /orders
> ..
> <order />
>
> 201 Created
> Location: /orders/123
>
> You could then support an operation that results in a change in the
> status of that order. This could be handled a number of ways:
>
> 1. Simply allow a PUT to modify the order status element
> PUT /orders/123
>
> 2. Support the creation of approval resources
> POST /order-approval
> ...
> <order id="123" status="approved" concurrency-tag="a1s2d3f4g5" />
>
> 303 See Other
> Location: /orders/123 (w/ the status updated)

Yeah - I came to understand recently that 303 can be used to notify the client about a resource state change (thanks to some posting by Roy). I like that solution.

> As for error codes on failed approvals/orders, along with 409 (or just
> 400) check out WebDAV status code extensions [1]. I've used 422 and
> 424 in similar situations.

Yes, 422 is another approach. OTOH, I have understood it to mean that the request does in itself not make sense, not that the request cannot be fulfilled due to the current server state. Hmm, having such a vast number of options seems not optimal with regard to client-side coding...

Thanks,

Jan

> mca
> http://amundsen.com/blog/
>
> [1] http://www.webdav.org/specs/rfc2518.html#status.code.extensions.to.http11
>
> On Fri, Oct 30, 2009 at 19:01, Jan Algermissen <algermissen1971@...> wrote:
>> Hi,
>>
>> suppose the following media types do exist:
>>
>> - application/procurement-order for orders
>> - application/procurement-orderrejection for order rejections
>>
>> also suppose the client knows there is a resource at /order-processor
>> that accepts orders in application/procurement-order media type.
>>
>> What if the client submits an order and the server wants to reject
>> the order (maybe because the requested items are permanently out
>> of stock)?
>> What return code would the server use and does it make
>> sense to send the order rejection document as the body of the
>> (error-)response?
>>
>> Example:
>>
>> ->
>> POST /order-processor
>> Content-Type: application/procurement-order
>>
>> <the order XML goes here>
>>
>> 409 Conflict
>> Content-Type: application/procurement-orderrejection
>>
>> <xml of order rejection document goes here, maybe
>> telling the client how to fix the problem (e.g.
>> suggesting similar goods that are in stock)>
>>
>> Does that make sense? And if not - how else to do it?
>>
>> Hmm, an alternative would be to create an order request resource and
>> not just an order resource and make the rejection part of the
>> representation of the order request:
>>
>> POST /order-processor
>> Content-Type: application/procurement-order
>>
>> 201 Created
>> Location: /order-processor/order-requests
>> Content-Type: application/order-request
>>
>> <XML representing the state of the order-request, which at the
>> moment is 'rejected'>
>>
>> A client could then PUT/PATCH the order to fix the problem
>> according to suggestions made in the order-request response.
>>
>> Does that sound better?
>>
>> Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Fri, Oct 30, 2009 at 6:01 PM, Jan Algermissen <algermissen1971@...> wrote:
> suppose the following media types do exist:
>
> - application/procurement-order for orders
> - application/procurement-orderrejection for order rejections
>
> also suppose the client knows there is a resource at /order-processor
> that accepts orders in application/procurement-order media type.
>
> What if the client submits an order and the server wants to reject
> the order (maybe because the requested items are permanently out
> of stock)? What return code would the server use and does it make
> sense to send the order rejection document as the body of the
> (error-)response?

That's tricky, in the general sense. The business protocol here (above the application protocol) is called Offer-Acceptance. So what the client submits is (business-technically) an Offer To Buy. The response could be acceptance, rejection, or maybe a counter-offer. If the seller accepts, buyer and seller have a contract.

There is a logical distinction between the offer and the response: in UBL, for example, the offer is just called Order and the response is called OrderResponse. They are different documents, and I think they deserve to be different resources in a RESTful app. See http://docs.oasis-open.org/ubl/os-UBL-2.0/UBL-2.0.html

The interaction could be asynchronous. In other words, the HTTP response to the initial request could just be an acknowledgement, and the response to the offer to buy could come later. Or in the case of a counter-offer (for example, the seller could only partially fulfill the order), the buyer and seller might exchange several documents.

So the answer to the question about the order rejection document is "it depends". Sometimes the seller might be able to respond immediately (as the HTTP response), sometimes not. At any rate, you might want to think of both the order and the response as resources.
On Oct 31, 2009, at 12:01 AM, Jan Algermissen wrote:

> Hi,
>
> suppose the following media types do exist:
>
> - application/procurement-order for orders
> - application/procurement-orderrejection for order rejections
>
> also suppose the client knows there is a resource at /order-processor
> that accepts orders in application/procurement-order media type.
>
> What if the client submits an order and the server wants to reject
> the order (maybe because the requested items are permanently out
> of stock)? What return code would the server use and does it make
> sense to send the order rejection document as the body of the
> (error-)response?

I am trying to rule out the above approach by deriving from REST's constraints. Here is my thinking:

I assume (because I am not yet able to derive this from the REST constraints) that there is an implicit constraint in REST that demands all application data be stored on the server. To put this in other words: a client must be able to perform any of the next possible transitions in an application solely based on the responses previously received. The client-server collaboration must not be designed in a way that requires a client to keep track of its own requests.

Applied to my question above, I think that a RESTful solution demands that the server create application data as the basis for subsequent interactions and then instruct the client how to proceed through the application. (This implies a solution where the order is created on the server and marked as 'pending' to provide the application data. The client would then alter the order (the application data) to 'fix' the item-out-of-stock problem.) The solution proposed in my original posting, on the other hand, would require the client to record its own order and apply the suggested change to it before repeating the original request.

(Aside: is maintaining application data on both client and server a property of messaging styles?)
Generally, I have for some time now wondered what could be used as a guiding principle for answering the question of when to create a resource on the server. (*Why* create a resource instead of just sending an answer document, like you would do in synchronous messaging?) When the resource is seen as application data, the answer would be that resources must be created when otherwise application data would have to be recorded on the client side (which we do not want, according to my reasoning above).

I am not really satisfied with the 'flow' of this argumentation yet - comments/criticism would be very welcome.

Jan

> Example:
>
> ->
> POST /order-processor
> Content-Type: application/procurement-order
>
> <the order XML goes here>
>
> 409 Conflict
> Content-Type: application/procurement-orderrejection
>
> <xml of order rejection document goes here, maybe
> telling the client how to fix the problem (e.g.
> suggesting similar goods that are in stock)>
>
> Does that make sense? And if not - how else to do it?
>
> Hmm, an alternative would be to create an order request resource and
> not just an order resource and make the rejection part of the
> representation of the order request:
>
> POST /order-processor
> Content-Type: application/procurement-order
>
> 201 Created
> Location: /order-processor/order-requests
> Content-Type: application/order-request
>
> <XML representing the state of the order-request, which at the
> moment is 'rejected'>
>
> A client could then PUT/PATCH the order to fix the problem according
> to suggestions made in the order-request response.
>
> Does that sound better?
>
> Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
> If the server announces that it understands the
> application/x-www-form-urlencoded media-type for the /process-order-b
> resource and the client also understands that media-type, that is all
> that is needed. It might be necessary for clients and servers to
> exchange additional out-of-band information on the proper way to form
> the message for a particular media type (element names, order, etc.),
> but that descriptive information is something else.

Is that out-of-band info "allowed" from a REST perspective? I've always assumed that constraints on general media types must be communicated in hypermedia (just like the URI). In other words, you need something like a <form>. Is that not the case?

Though I suppose the "form" constraints could also be fixed, i.e. the definition of application/orderform+xml describes the constraints on application/x-www-form-urlencoded requests so no run-time info is needed. Still, the available choices are a) something specific in the definition of the media type(s) or b) something communicated at run-time in hypermedia (or a combination of a and b). Make sense?

> Finally, the details of the entity body are only interesting to the
> two parties involved in the message (sender, receiver). From the
> intermediaries' POV, they need to know nothing about this body - in fact
> should not be peeking into the body at all.

Is that actually a constraint? Some protocols, like SIP, place explicit restrictions on what a proxy can do with the body. But I wasn't aware of anything in HTTP that restricted the proxy behavior this way -- and I can't think of a REST constraint on this.

Regards,

Andrew
Andrew: good points:

<snip>
I've always assumed that constraints on general media types must be communicated in hypermedia (just like the URI).
</snip>

By "general media types" are you thinking there are "non-general" media types that may not require constraints (on inputs, I assume) be communicated via hypermedia?

<snip>
But I wasn't aware of anything in HTTP that restricted the proxy behavior this way -- and I can't think of a REST constraint on this.
</snip>

I have in mind the following from Fielding: "REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability." [1]

I have adopted an interpretation of this section that closely follows that described by Joe Gregorio: "...[T]he reason that RESTful systems can scale much easier ... has to do with the amount of information that each message carries and that is available to intermediaries without peeking into the body." [2]

I find no direct references to "not peeking into the body" in Fielding or RFC 2616.

mca
http://amundsen.com/blog/

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_1
[2] http://bitworking.org/news/125/REST-and-WS#self-descriptive
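Option (b) above - constraints communicated at run time in hypermedia - can be sketched as follows. This is a hypothetical Python sketch: the `form` dictionary stands in for a <form>-like control decoded from a response body (its structure is invented; only the /process-order-b URI and the application/x-www-form-urlencoded media type come from the thread).

```python
from urllib.parse import urlencode

# Hypothetical hypermedia "form", as decoded from a response body: it names
# the target URI, the method, and the fields the server will accept.
form = {
    "action": "/process-order-b",
    "method": "POST",
    "fields": ["item", "quantity"],
}

def fill_form(form, values):
    """Build an application/x-www-form-urlencoded request from the form.

    The client uses only field names it discovered in hypermedia; anything
    beyond that would be out-of-band coupling.
    """
    unknown = set(values) - set(form["fields"])
    if unknown:
        raise ValueError(f"fields not in form: {unknown}")
    body = urlencode({name: values[name] for name in form["fields"]})
    return form["method"], form["action"], body

method, uri, body = fill_form(form, {"item": "widget", "quantity": 2})
# method == "POST", uri == "/process-order-b", body == "item=widget&quantity=2"
```

Under option (a), by contrast, the same field list would live in the media-type definition itself and no run-time form would be exchanged.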
Application state is the state of the application, which spans both the client and the server. There is no constraint preventing state residing on the client; otherwise you couldn't have a local cache in your browser.

S

> To: rest-discuss@yahoogroups.com
> From: algermissen1971@...
> Date: Sun, 1 Nov 2009 21:17:11 +0100
> Subject: Re: [rest-discuss] 'No application data on client' constraint? was: RESTful ordering and order-rejection
On Nov 2, 2009, at 9:43 PM, Sebastien Lambla wrote:

> Application state is the state of the application, that spans both
> the client and the server. There is no constraint preventing state
> residing on the client, otherwise you couldn't have a local cache in
> your browser.

A request in a local cache is not what I mean by application data. Suppose a service were designed in a way that required the client to store data besides URIs in order to make the next request. IOW, to continue its interaction with the service on another machine, it would not only need to take the appropriate URIs with it (e.g. sent by mail) but also other data elements.

I think that would be a violation of REST - I just cannot derive it from the other constraints.

My hunch is that in a RESTful system

a) all application state (aka session state) is on the client
b) all application data resides on the server (as resource state)

The question that drives me is what the significance of resource state is (what is its meaning/purpose in a given design? What are the design conditions that lead to the creation of new resource state? Etc.) I am trying to get away from the "you could do this and you could do that but you also could do it that way" kinds of answers to practical REST design questions.

Jan
On Nov 2, 2009, at 7:51 PM, wahbedahbe wrote:

> Is that out-of-band info "allowed" from a REST perspective? I've
> always assumed that constraints on general media types must be
> communicated in hypermedia (just like the URI). In other words, you
> need something like a <form>. Is that not the case?

My understanding is that when a spec tells you about some URI you find in some hypermedia context (e.g. as a link target with a certain relation), it is allowed to establish any kind of contract between client (you) and server. What matters is that you discovered the URI from hypermedia and that the spec defines which assumptions the client can make. Of course the use of forms increases flexibility.

Jan
On Mon, Nov 2, 2009 at 4:16 PM, Jan Algermissen <algermissen1971@...>
wrote:
>
> On Nov 2, 2009, at 9:43 PM, Sebastien Lambla wrote:
>
>>
>>
>> Application state is the state of the application, that spans both
>> the client and the server. There is no constraint preventing state
>> residing on the client, otherwise you couldn't have a local cache in
>> your browser.
>
> A request in a local cache is not what I mean by application data.
> Suppose a service were designed in a way that it would require the
> client to store data besides URIs in order to make the next request.
> IOW, to continue its interaction with the service on another machine,
> it would not only need to take the appropriate URIs with it (e.g. send
> by mail) but also other data elements.
>
> I think that would be a violation of REST - I just cannot derive it
> from the other constraints.
>
> My hunch is that in a RESTful system
>
> a) all application state (aka session state) is on the client
> b) all application data resides on the server (as resource state)
>
> The question that drives me is what the significance of resource state
> is (what is its meaning/purpose in a given design? What are the design
> conditions that lead to the creation of new resource state? Etc.) I am
> trying to get away from the "you could do this and you could do that
> but you also could do it that way" kinds of answers to practical REST
> design questions.
I think I see what you are driving at with your distinction between session
state vs resource state, but I don't think the distinction is so clean in
REST. Here's a relevant passage from Roy's thesis
(5.3.3: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3):
The application state is controlled and stored by the user agent and
can be composed
of representations from multiple servers. In addition to freeing the server
from the scalability problems of storing state, this allows the user to
directly manipulate the state (e.g., a Web browser's history), anticipate
changes to that state (e.g., link maps and prefetching of representations),
and jump from one application to another (e.g., bookmarks and URI-entry
dialogs). [emphasis added]
So at least a copy of the resource state (in the form of a representation of
that resource) is intended to be stored and manipulated on the client side.
For example, I believe this 5.3.3 paragraph endorses the following scenario:
1. client receives a representation of the works of art in a museum
exhibit as a list of art works (name of work, name of artist) from a museum
service on some museum server
2. the client extracts the name of an artist (a string) from the list and
submits it to Wikipedia on a different set of servers
3. It takes the representation from Wikipedia and formats it into a popup
for the user's UI visualization of the museum exhibit
In this scenario the client "jumps from one application (museum service) to
another (Wikipedia)", submitting representation data it received from the
first service to the second service. Your description above seems to
prohibit such a scenario ("to continue its interaction with the service on
another machine, it would not only need to take the appropriate URIs with it
... but also other data elements").
It appears that REST endorses the use of "application data" stored on the
client, at least in the case where such data was at some point in the past
received by the client as part of a representation of a server-based
resource.
-- Nick
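The museum/Wikipedia scenario above can be sketched as code. This is a hypothetical Python sketch: both "services" are plain functions standing in for HTTP GETs, and all the data is invented for illustration.

```python
# The user agent composes application state from representations of two
# unrelated services, per section 5.3.3 of the thesis.

def museum_exhibit():
    """Representation from the museum service: a list of art works."""
    return [
        {"work": "Starry Night", "artist": "Vincent van Gogh"},
        {"work": "Guernica", "artist": "Pablo Picasso"},
    ]

def wikipedia_summary(artist):
    """Representation from a second service, keyed by a data element the
    client extracted from the first representation (not by a URI)."""
    summaries = {
        "Vincent van Gogh": "Dutch Post-Impressionist painter.",
        "Pablo Picasso": "Spanish painter and sculptor.",
    }
    return summaries.get(artist, "No article found.")

# The client "jumps from one application to another": it holds the museum
# representation on the client side and uses data from it - an artist's
# name, not a URI - to drive the next interaction.
popups = {w["artist"]: wikipedia_summary(w["artist"]) for w in museum_exhibit()}
```

Note that the artist name the client carries between services is exactly the kind of client-held representation data the thesis passage appears to endorse.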
On Nov 2, 2009, at 10:52 PM, Nick Gall wrote:
But in your scenario, the client could always, just by use of the one
initial URI, pick up the interaction. And this is (IMHO) exactly the
significance of storing the application data as resource state - so
that the client can pick it up again. If the data were not stored on
the server, it would have to be stored on the client, and the ability
to pick up the conversation just on the basis of the URI would be lost.
Jan
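Jan's point about resuming from the URI alone can be made concrete. A hypothetical Python sketch (the store, the URI, and the order data are all invented): because the pending order lives on the server as resource state, a second machine needs nothing but the URI to continue the conversation.

```python
# Server-side resource state standing in for a pending order (hypothetical).
SERVER = {"/orders/7": {"item": "widget", "status": "pending"}}

def dereference(uri):
    """GET: the only thing a resuming client needs is the URI."""
    return SERVER[uri]

# Machine A mails the URI to machine B; no other data elements travel.
mailed_uri = "/orders/7"

# Machine B resumes: the full application data comes back in the
# representation, so no client-side copy of the order was ever required.
resumed = dereference(mailed_uri)
```

If the order instead existed only as a document the client had to keep, the mailed URI would not be enough and the interaction could not be handed off this way.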
Jan:
<snip>
I am trying to get away from the "you could do this and you could do
that but you also could do it that way" kinds of answers to practical
REST design questions.
</snip>
Why do you want to get away from this?
mca
http://amundsen.com/blog/
On Nov 3, 2009, at 12:13 AM, mike amundsen wrote:
> Jan:
>
> <snip>
> I am trying to get away from the "you could do this and you could do
> that but you also could do it that way" kinds of answers to practical
> REST design questions.
> </snip>
>
> Why do you want to get away from this?
Because it does not exactly sell well if you cannot back up the advice you
give by principles. One of the things I admire about the Software
Architecture notion laid out by Perry/Wolf, Garlan/Shaw and Roy [1] is
that it creates a foundation for backing up architectural design
decisions by principles. And I just get this feeling of hand waving
when it comes to answering questions about 'how do I do X'. I don't
mind that there are many solutions (otherwise there would not be any
room for design) but (at least) I often cannot articulate a clear
reasoning why and when to prefer one over the other.
Take the example problem I posted. What is better for order-rejection-
with-suggested-modification? To send 409 and e.g. a UBL-like order-
response document (like you would in the non computerized business
world) or to create an order resource anyhow, set the status to
pending and then let the client update the order until it can be
accepted?
(I had a hunch that the latter is better than the former because it
maintains application data on the server only)
Likewise, several return codes have been suggested (409, 422 for
example). It makes no sense to have different return codes specified
when a group like this one cannot provide a spot-on 'yeah, it is that
one and only that one' response.
And even if there is a single best answer - how do you get
interoperability if it is so easy for different people to have
different opinions about it?
Jan
[1] Apologies for the names missing here
>
> mca
> http://amundsen.com/blog/
>
>
>
>
> On Mon, Nov 2, 2009 at 17:09, Jan Algermissen
> <algermissen1971@...> wrote:
>>
>> On Nov 2, 2009, at 10:52 PM, Nick Gall wrote:
>>
>>>
>>>
>>> On Mon, Nov 2, 2009 at 4:16 PM, Jan Algermissen <algermissen1971@...
>>>> wrote:
>>>
>>>> My hunch is that in a RESTful system
>>>>
>>>> a) all application state (aka session state) is on the client
>>>> b) all application data resides on the server (as resource state)
>>>>
>>>> The question that drives me is what the significance of resource
>>> state
>>>> is (what is its meaning/purpose in a given design? What are the
>>> design
>>>> conditions that lead to the creation of new resource state? Etc.)
>>> I am
>>>> trying to get away from the "you could do this and you could do
>>>> that
>>>> but you also could do it that way" kinds of answers to practical
>>> REST
>>>> design questions.
>>>
>>> I think I see what you are driving at with your distinction between
>>> session state vs resource state, but I don't think the distinction
>>> is so clean in REST. Here's a relevant passage from Roy's thesis
>>> (5.3.3):
>>>
>>> The application state is controlled and stored by the user agent and
>>> can be composed of representations from multiple servers. In
>>> addition to freeing the server from the scalability problems of
>>> storing state, this allows the user to directly manipulate the state
>>> (e.g., a Web browser's history), anticipate changes to that state
>>> (e.g., link maps and prefetching of representations), and jump from
>>> one application to another (e.g., bookmarks and URI-entry dialogs).
>>> [emphasis added]
>>>
>>> So at least a copy of the resource state (in the form of a
>>> representation of that resource) is intended to be stored and
>>> manipulated on the client side. For example, I believe this 5.3.3
>>> paragraph endorses the following scenario:
>>> • client receives a representation of the works of art in a
>>> museum
>>> exhibit as list of art works (name of work, name of artist) from a
>>> museum service on some museum server
>>> • the client extracts the name of an artist (a string) from
>>> the
>>> list and submits it to Wikipedia on a different set of servers
>>> • It takes the representation from Wikipedia and formats it
>>> into a
>>> popup for the user's UI visualization of the museum exhibit
>>> In this scenario the client "jumps from one application (museum
>>> service) to another (Wikipedia)", submitting representation data it
>>> received from the first service to the second service. Your
>>> description above seems to prohibit such a scenario ("to continue
>>> its interaction with the service on another machine, it would not
>>> only need to take the appropriate URIs with it ... but also other
>>> data elements").
>>>
>>
>> But in your scenario, the client could always, just by use of the
>> one
>> initial URI, pick up the interaction. And this is (IMHO) exactly the
>> significance of storing the application data as resource state - so
>> that the client can pick it up again. If the data was not stored in
>> the server it would have to be stored on the client and the ability
>> would be lost to pick up the conversation just on the basis of the
>> URI.
>>
>> Jan
>>
>>
>>
>>> It appears that REST endorses the use of "application data" stored
>>> on the client, at least in the case where such data was at some
>>> point in the past received by the client as part of a representation
>>> of a server-based resource.
>>>
>>> -- Nick
>>>
>>>
>>>
>>>
>>
>> --------------------------------------
>> Jan Algermissen
>>
>> Mail: algermissen@...
>> Blog: http://algermissen.blogspot.com/
>> Home: http://www.jalgermissen.com
>> --------------------------------------
>>
>>
>>
>>
>>
>> ------------------------------------
>>
>> Yahoo! Groups Links
>>
>>
>>
>>
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Nov 2, 2009, at 11:39 AM, mike amundsen wrote:

> Andrew:
>
> good points:
>
> <snip>
> I've always assumed that constraints on a general media type must be
> communicated in hypermedia (just like the URI).
> </snip>
> By "general media types" are you thinking there are "non-general"
> media types that may not require constraints (on inputs, I assume) be
> communicated via hypermedia?
>
> <snip>
> But I wasn't aware of anything in HTTP that restricted the proxy
> behavior this way -- and I can't think of a REST constraint on this.
> </snip>
>
> I have in mind the following from Fielding:
> "REST enables intermediate processing by constraining messages to be
> self-descriptive: interaction is stateless between requests, standard
> methods and media types are used to indicate semantics and exchange
> information, and responses explicitly indicate cacheability." [1]
>
> I have adopted an interpretation of this section that closely follows
> that described by Joe Gregorio:
> "...[T]he reason that RESTful systems can scale much easier ... has to
> do with the amount of information that each message carries and that
> is available to intermediaries without peeking into the body." [2]
>
> I find no direct references to "not peeking into the body" in Fielding
> or RFC 2616.

Er, in terms of REST, that is not a style issue. In HTTP, it is
kind of obvious, so I didn't think to write it down. You would
have to look in the http-wg archives to see all the discussion about
why length-delimited framing is better than MIME-style boundaries.

....Roy
On Mon, Nov 2, 2009 at 7:27 PM, Roy T. Fielding <fielding@...> wrote:

> On Nov 2, 2009, at 11:39 AM, mike amundsen wrote:
>
>> Andrew:
>>
>> good points:
>>
>> <snip>
>> I've always assumed that constraints on a general media type must be
>> communicated in hypermedia (just like the URI).
>> </snip>
>> By "general media types" are you thinking there are "non-general"
>> media types that may not require constraints (on inputs, I assume) be
>> communicated via hypermedia?
>>
>> <snip>
>> But I wasn't aware of anything in HTTP that restricted the proxy
>> behavior this way -- and I can't think of a REST constraint on this.
>> </snip>
>>
>> I have in mind the following from Fielding:
>> "REST enables intermediate processing by constraining messages to be
>> self-descriptive: interaction is stateless between requests, standard
>> methods and media types are used to indicate semantics and exchange
>> information, and responses explicitly indicate cacheability." [1]
>>
>> I have adopted an interpretation of this section that closely follows
>> that described by Joe Gregorio:
>> "...[T]he reason that RESTful systems can scale much easier ... has to
>> do with the amount of information that each message carries and that
>> is available to intermediaries without peeking into the body." [2]
>>
>> I find no direct references to "not peeking into the body" in Fielding
>> or RFC 2616.
>
> Er, in terms of REST, that is not a style issue. In HTTP, it is
> kind of obvious, so I didn't think to write it down. You would
> have to look in the http-wg archives to see all the discussion about
> why length-delimited framing is better than MIME-style boundaries.
>
> ....Roy

Fair enough, but the fact that self-descriptive messages enable
efficient processing by intermediaries is one thing; asserting that
intermediaries "should not be peeking inside the body at all" is
another.

I questioned the assertion because it is not uncommon to see gateway
proxies mucking about in bodies, rewriting URIs, etc. And while this is
often criticized for all the obvious reasons, I wasn't aware of any
well-defined rule that was being violated. SIP contains language like
"The proxy MUST NOT add to, modify, or remove the message body," so
that sort of proxy behavior would be a violation of the protocol. I
assumed that HTTP didn't contain this sort of language because it would
be overly restrictive (e.g. if you must resort to something like URI
rewriting, then you can, as long as you are prepared to deal with the
costs). Is this not the case?

Also, I'm curious as to why this couldn't be a style issue. The closest
thing I've found to architectural style-like constraints for the SIP
architecture is Rosenberg's Architecture and Design Principles for SIP
(http://tools.ietf.org/html/draft-rosenberg-sipping-sip-arch-01). It
describes the "Proxies are for routing" principle, which assigns a
specific role to intermediaries -- similar to the roles assigned by the
client-server constraint. One might argue that the SIP protocol
constraint quoted above is derived from this principle. Is there some
reason that an architectural style couldn't or shouldn't contain a
constraint that implies that intermediaries never modify message
bodies?

Regards,

Andrew
On Nov 2, 2009, at 10:07 PM, Andrew Wahbe wrote:

> Fair enough, but the fact that self-descriptive messages enable
> efficient processing by intermediaries is one thing; and asserting
> that intermediaries "should not be peeking inside the body at all" is
> another.

You lost me there. I thought you were talking about not looking
inside the body for performance reasons. HTTP is designed so that
an intermediary doesn't have to look inside the body (for performance
reasons), but that is not a constraint of REST and thus it is common
to have intermediaries that do want to look inside the body.

> I questioned the assertion because it is not uncommon to see
> gateway proxies mucking about in bodies, rewriting URIs, etc. And
> while this is often criticized for all the obvious reasons, I wasn't
> aware of any well-defined rule that was being violated. SIP contains
> language like "The proxy MUST NOT add to, modify, or remove the
> message body," so that sort of proxy behavior would be a violation of
> the protocol. I assumed that HTTP didn't contain this sort of
> language because it would be overly restrictive (e.g. if you must
> resort to something like URI rewriting, then you can, as long as you
> are prepared to deal with the costs). Is this not the case?

HTTP does not contain such a restriction because transducing proxies
and mash-up gateways are considered features. That is part of the
REST style.

> Also, I'm curious as to why this couldn't be a style issue. The
> closest thing I've found to architectural style-like constraints for
> the SIP architecture is Rosenberg's Architecture and Design
> Principles for SIP
> (http://tools.ietf.org/html/draft-rosenberg-sipping-sip-arch-01).
> It describes the "Proxies are for routing" principle, which assigns a
> specific role to intermediaries -- similar to the roles assigned by
> the client-server constraint. One might argue that the SIP protocol
> constraint quoted above is derived from this principle. Is there some
> reason that an architectural style couldn't or shouldn't contain a
> constraint that implies that intermediaries never modify message
> bodies?

No, that is a common constraint. Not in REST, but certainly in other
styles of interaction and in much of the design around the core
Internet protocols (the end-to-end principle). However, it isn't quite
right to call them intermediaries, at least if you are following my
definitions of software architecture. Those are called connectors. An
intermediary is a component that is allowed to do intermediation,
which in my opinion implies the ability to change the payload.

Sorry for the confusion.

....Roy
Roy T. Fielding wrote:

> On Nov 2, 2009, at 10:07 PM, Andrew Wahbe wrote:
>
>> Fair enough, but the fact that self-descriptive messages enable
>> efficient processing by intermediaries is one thing; and asserting
>> that intermediaries "should not be peeking inside the body at all"
>> is another.
>
> You lost me there. I thought you were talking about not looking
> inside the body for performance reasons. HTTP is designed so that
> an intermediary doesn't have to look inside the body (for performance
> reasons), but that is not a constraint of REST and thus it is common
> to have intermediaries that do want to look inside the body.

I think there is an additional benefit, beyond performance gains,
derived from requiring a system's intermediary/connector controls to
be represented as protocol-level metadata (i.e. HTTP headers), since
this removes a system-specific burden (of self-descriptiveness) from
media types and therefore promotes flexibility in terms of how
resources can be represented over time. Such a system would not
necessarily prevent duplication of this information within specific
media types, if required, although there would be some cost associated
with ensuring this enveloped information within the body is consistent
with the 'authoritative' information in the accompanying headers.

- Mike
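Mike's point -- that a connector can make its decisions from protocol-level metadata alone, with the body left opaque -- can be illustrated with a toy cache check (a hypothetical function, invented here for illustration; real HTTP caching rules in RFC 2616 are considerably more involved):

```python
# Illustrative sketch, not from the thread: an intermediary that decides
# cacheability purely from headers, never parsing the message body.

def is_cacheable(status, headers):
    """Decide cacheability from the status line and headers alone.

    The body never enters the decision -- it stays an opaque payload,
    which is exactly what keeps the connector media-type-agnostic.
    """
    cc = headers.get("Cache-Control", "")
    directives = {d.strip() for d in cc.split(",") if d.strip()}
    if "no-store" in directives or "private" in directives:
        return False
    return status == 200 and (
        "public" in directives
        or any(d.startswith("max-age=") for d in directives)
    )
```

The point of the sketch is only that the function's signature takes no body argument at all: any representation, in any media type, can flow through unchanged.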
2009/11/2 Jan Algermissen <algermissen1971@...>
>
> My hunch is that in a RESTful system
>
> a) all application state (aka session state) is on the client
> b) all application data resides on the server (as resource state)
>

That makes some sense to me, although my hunch would be more like

a) all application state is on the client as a representation of the
last accessed resource(s) state(s)

b) all application data resides on the server, a combination of which,
at some point in time of the application life-cycle, represents a
particular resource state

But then again, if a client can manipulate the data that at some point
in time of the application life-cycle was transferred from a resource,
then it can change or add to that data, so before that data is
submitted back to the server, there is *some* application data that
does not reside on the server...
Jan:
As a quick follow-up on the notion of the "getting away from..." issue:
Since HTTP is a testable spec, and REST is most commonly discussed in
regards to HTTP, it is a short leap to consider REST-fulness to also
be testable; to expect there to be a "right way" to implement REST
over HTTP just as there is a correct way to implement the HTTP
application protocol. But this is not a reasonable goal. In fact, it
can be a hurdle to building workable applications using the style.
mca
http://amundsen.com/blog/
On Mon, Nov 2, 2009 at 18:51, Jan Algermissen <algermissen1971@...> wrote:
>
> On Nov 3, 2009, at 12:13 AM, mike amundsen wrote:
>
>> Jan:
>>
>> <snip>
>> I am trying to get away from the "you could do this and you could do
>> that but you also could do it that way" kinds of answers to practical
>> REST design questions.
>> </snip>
>>
>> Why do you want to get away from this?
>
> Because it does not exactly sell well if you cannot back up advice you give
> by principles. One of the things I admire about the Software Architecture
> notion laid out by Perry/Wolf, Garlan/Shaw and Roy [1] is that it creates a
> foundation for backing up architectural design decisions by principles. And
> I just get this feeling of hand waving when it comes to answering questions
> about 'how do I do X'. I don't mind that there are many solutions (otherwise
> there would not be any room for design) but (at least) I often cannot
> articulate a clear reasoning why and when to prefer one over the other.
>
> Take the example problem I posted. What is better for
> order-rejection-with-suggested-modification? To send 409 and e.g. a UBL-like
> order-response document (like you would in the non computerized business
> world) or to create an order resource anyhow, set the status to pending and
> then let the client update the order until it can be accepted?
>
> (I had a hunch that the latter is better than the former because it
> maintains application data on the server only)
>
> Likewise, several return codes have been suggested (409, 422 for example).
> It makes no sense to have different return codes specified when a group
> like this one cannot provide a spot-on 'yeah, it is that one and only that
> one' response.
>
> And even if there is a single best answer - how do you get interoperability
> if it is so easy for different people to have different opinions about it?
>
> Jan
>
>
> [1] Excuses for the names missing here
>
>
>
>>
>> mca
>> http://amundsen.com/blog/
>>
>>
>>
>>
>> On Mon, Nov 2, 2009 at 17:09, Jan Algermissen <algermissen1971@...>
>> wrote:
>>>
>>> On Nov 2, 2009, at 10:52 PM, Nick Gall wrote:
>>>
>>>>
>>>>
>>>> On Mon, Nov 2, 2009 at 4:16 PM, Jan Algermissen <algermissen1971@...
>>>>>
>>>>> wrote:
>>>>
>>>>> My hunch is that in a RESTful system
>>>>>
>>>>> a) all application state (aka session state) is on the client
>>>>> b) all application data resides on the server (as resource state)
>>>>>
>>>>> The question that drives me is what the significance of resource
>>>>
>>>> state
>>>>>
>>>>> is (what is its meaning/purpose in a given design? What are the
>>>>
>>>> design
>>>>>
>>>>> conditions that lead to the creation of new resource state? Etc.)
>>>>
>>>> I am
>>>>>
>>>>> trying to get away from the "you could do this and you could do that
>>>>> but you also could do it that way" kinds of answers to practical
>>>>
>>>> REST
>>>>>
>>>>> design questions.
>>>>
>>>> I think I see what you are driving at with your distinction between
>>>> session state vs resource state, but I don't think the distinction
>>>> is so clean in REST. Here's a relevant passage from Roy's thesis
>>>> (5.3.3):
>>>>
>>>> The application state is controlled and stored by the user agent and
>>>> can be composed of representations from multiple servers. In
>>>> addition to freeing the server from the scalability problems of
>>>> storing state, this allows the user to directly manipulate the state
>>>> (e.g., a Web browser's history), anticipate changes to that state
>>>> (e.g., link maps and prefetching of representations), and jump from
>>>> one application to another (e.g., bookmarks and URI-entry dialogs).
>>>> [emphasis added]
>>>>
>>>> So at least a copy of the resource state (in the form of a
>>>> representation of that resource) is intended to be stored and
>>>> manipulated on the client side. For example, I believe this 5.3.3
>>>> paragraph endorses the following scenario:
>>>> • client receives a representation of the works of art in a museum
>>>> exhibit as list of art works (name of work, name of artist) from a
>>>> museum service on some museum server
>>>> • the client extracts the name of an artist (a string) from the
>>>> list and submits it to Wikipedia on a different set of servers
>>>> • It takes the representation from Wikipedia and formats it into a
>>>> popup for the user's UI visualization of the museum exhibit
>>>> In this scenario the client "jumps from one application (museum
>>>> service) to another (Wikipedia)", submitting representation data it
>>>> received from the first service to the second service. Your
>>>> description above seems to prohibit such a scenario ("to continue
>>>> its interaction with the service on another machine, it would not
>>>> only need to take the appropriate URIs with it ... but also other
>>>> data elements").
>>>>
>>>
>>> But in your scenario, the client could always, just by use of the one
>>> initial URI, pick up the interaction. And this is (IMHO) exactly the
>>> significance of storing the application data as resource state - so
>>> that the client can pick it up again. If the data was not stored in
>>> the server it would have to be stored on the client and the ability
>>> would be lost to pick up the conversation just on the basis of the URI.
>>>
>>> Jan
>>>
>>>
>>>
>>>> It appears that REST endorses the use of "application data" stored
>>>> on the client, at least in the case where such data was at some
>>>> point in the past received by the client as part of a representation
>>>> of a server-based resource.
>>>>
>>>> -- Nick
>>>>
>>>>
>>>>
>>>>
>>>
>>> --------------------------------------
>>> Jan Algermissen
>>>
>>> Mail: algermissen@...
>>> Blog: http://algermissen.blogspot.com/
>>> Home: http://www.jalgermissen.com
>>> --------------------------------------
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
>
>
>
>
--- In rest-discuss@yahoogroups.com, Jan Algermissen
<algermissen1971@...> wrote:
>
> On Oct 31, 2009, at 12:01 AM, Jan Algermissen wrote:
>
>> Hi,
>>
>> suppose the following media types do exist:
>>
>> - application/procurement-order for orders
>> - application/procurement-orderrejection for order rejections
>>
>> also suppose the client knows there is a resource at /order-processor
>> that accepts orders in application/procurement-order media type.
>>
>> What if the client submits an order and the server wants to reject
>> the order (maybe because the requested items are permanently out
>> of stock)? What return code would the server use and does it make
>> sense to send the order rejection document as the body of the
>> (error-)response?
>
> I am trying to rule out the above approach by deriving from REST's
> constraints. Here is my thinking:
>
> I assume (because I am not yet able to derive it from the REST
> constraints) that there is an implicit constraint in REST that
> demands all application data to be stored on the server.
>
> To put this in other words: a client must be able to perform any of
> the next possible transitions in an application solely based on the
> responses previously received. The client-server collaboration must
> not be designed in a way that requires a client to keep track of its
> own requests.

Well, I don't think it is necessarily the case that the response to
every request *replaces* the application state on the client. That is,
a response can be used to *add to* or otherwise modify the client
state. This is certainly the case for ancillary resources like images
in a web page. But I think it is also valid to do this after already
reaching a steady state. For example, this is what is often happening
in an Ajax application. An HTTP request can retrieve new data that
changes the client state, but the overall client state is defined by a
combination of data retrieved from multiple requests. This does
present the problem of not having a single URI to represent the
current state of the application, but many Ajax apps are using URI
fragments to deal with this. As far as I can tell, this is "ok".

> Applied to my question above, I think that a RESTful solution demands
> that the server creates application data as the basis for subsequent
> interactions and then instructs the client how to proceed through
> the application. (This implies a solution where the order is created
> on the server and marked as 'pending' to provide the application
> data. The client would then alter the order (the application data)
> to 'fix' the (item-out-of-stock-)problem.)

Well, I don't think it is unRESTful for a server to respond to a POST
with a 200 OK rather than a 201 Created or a 303 See Other. This does
create a "transient" state that the client can't return to (via a
bookmark) or communicate by sharing the URI. It is common practice to
avoid doing this, but if the state truly is transient then maybe it's
the right thing? Anyways, I don't believe that REST requires that
there are no transient states, which I think is what you are getting
at. I can't say that I have any references to back this up though --
I'm more going on the fact that I've never seen authoritative sources
say otherwise.

Regards,

Andrew
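The contrast between the two POST responses could be sketched like this (hypothetical handlers with invented names; only the status codes and the presence or absence of a Location header come from the discussion):

```python
# Sketch, not from the thread: a POST answered with 201 Created makes the
# result addressable resource state; a POST answered with a bare 200 OK
# leaves it transient -- no URI to bookmark or share.

RESULTS = {}

def post_addressable(data):
    """201 Created: the result becomes a resource the client can return to."""
    rid = len(RESULTS) + 1
    RESULTS[rid] = data
    return 201, {"Location": f"/results/{rid}"}, data

def post_transient(data):
    """200 OK: the result exists only in this response; nothing persists
    on the server and there is no URI for the state."""
    return 200, {}, {"echo": data}
```

Whether the transient variant is "wrong" is exactly the open question here: nothing in REST's constraints seems to forbid it, but only the 201 variant lets the conversation be resumed from a URI alone.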
On Nov 4, 2009, at 3:20 AM, mike amundsen wrote:
> Jan:
>
> As a quick follow-up on the notion of the "getting away from..."
> issue:
>
> Since HTTP is a testable spec, and REST is most commonly discussed in
> regards to HTTP, it is a short leap to consider REST-fulness to also
> be testable; to expect there to be a "right way" to implement REST
> over HTTP just as there is a correct way to implement the HTTP
> application protocol.
But my question is related to resource design and hypermedia design, not
about how to implement HTTP.
If we view linking as part of the resource design, and hypermedia
as just a means of serializing resource state and resource
relationships (linking), then my question is really only about
resource design.
So, IOW, I am looking for principles that apply to the partitioning
of the application data (resource model) and to the design of the
relationships between the application data partitions.
Probably it makes sense to ask whether the application data
partitioning may extend to the client: is it 'valid' to put
application data on the client that is not also maintained on the
server? I think that is likely to be the essence of the original
question.
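The "client starts with one single URI" discipline from earlier in the thread can be pictured with a toy link-following client (the person/account URIs are borrowed from the example at the top of the thread; the `follow` helper and the canned representations are invented):

```python
# Toy illustration: the client discovers every URI from links in
# representations and never constructs a path itself, so the server is
# free to restructure its URI space at any time.

# Canned representations standing in for server responses.
REPRESENTATIONS = {
    "/": {"links": {"person": "/rest/data/person/101"}},
    "/rest/data/person/101": {
        "firstName": "TONINHO",
        "lastName": "METRALHA",
        "links": {"account": "/rest/data/bank/accounts/010123101"},
    },
    "/rest/data/bank/accounts/010123101": {"balance": 42, "links": {}},
}

def get(uri):
    """Stand-in for an HTTP GET returning a parsed representation."""
    return REPRESENTATIONS[uri]

def follow(entry, *rels):
    """Start at the entry URI and walk the named link relations."""
    doc = get(entry)
    for rel in rels:
        doc = get(doc["links"][rel])
    return doc

# follow("/", "person", "account") reaches the account representation
# without the client ever knowing how account URIs are structured.
```

If the client holds only the entry URI plus link relations as state, it can pick the interaction up again from any machine, which is the significance Jan attaches to keeping application data as resource state.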
Jan
> But this is not a reasonable goal. In fact, it
> can be a hurdle to building workable applications using the style.
>
> mca
> http://amundsen.com/blog/
>
>
>
>
> On Mon, Nov 2, 2009 at 18:51, Jan Algermissen
> <algermissen1971@...> wrote:
>>
>> On Nov 3, 2009, at 12:13 AM, mike amundsen wrote:
>>
>>> Jan:
>>>
>>> <snip>
>>> I am trying to get away from the "you could do this and you could do
>>> that but you also could do it that way" kinds of answers to
>>> practical
>>> REST design questions.
>>> </snip>
>>>
>>> Why do you want to get away from this?
>>
>> Because it does not exactly sell well if you cannot back up advice
>> you give
>> by principles. One of the things I admire about the Software
>> Architecture
>> notion laid out by Perry/Wolf, Garlan/Shaw and Roy [1] is that it
>> creates a
>> foundation for backing up architectural design decisions by
>> principles. And
>> I just get this feeling of hand waving when it comes to answering
>> questions
>> about 'how do I do X'. I don't mind that there are many solutions
>> (otherwise
>> there would not be any room for design) but (at least) I often cannot
>> articulate a clear reasoning why and when to prefer one over the
>> other.
>>
>> Take the example problem I posted. What is better for
>> order-rejection-with-suggested-modification? To send 409 and e.g. a
>> UBL-like
>> order-response document (like you would in the non computerized
>> business
>> world) or to create an order resource anyhow, set the status to
>> pending and
>> then let the client update the order until it can be accepted?
>>
>> (I had a hunch that the latter is better than the former because it
>> maintains application data on the server only)
>>
>> Likewise, several return codes have been suggested (409, 422 for
>> example).
>> It makes no sense to have different return codes specified when a
>> group
>> like this one cannot provide a spot-on 'yeah, it is that one and
>> only that
>> one' response.
>>
>> And even if there is a single best answer - how do you get
>> interoperability
>> if it is so easy for different people to have different opinions
>> about it?
>>
>> Jan
>>
>>
>> [1] Excuses for the names missing here
>>
>>
>>
>>>
>>> mca
>>> http://amundsen.com/blog/
>>>
>>>
>>>
>>>
>>> On Mon, Nov 2, 2009 at 17:09, Jan Algermissen <algermissen1971@...
>>> >
>>> wrote:
>>>>
>>>> On Nov 2, 2009, at 10:52 PM, Nick Gall wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 2, 2009 at 4:16 PM, Jan Algermissen <algermissen1971@...
>>>>>>
>>>>>> wrote:
>>>>>
>>>>>> My hunch is that in a RESTful system
>>>>>>
>>>>>> a) all application state (aka session state) is on the client
>>>>>> b) all application data resides on the server (as resource state)
>>>>>>
>>>>>> The question that drives me is what the significance of resource
>>>>>
>>>>> state
>>>>>>
>>>>>> is (what is its meaning/purpose in a given design? What are the
>>>>>
>>>>> design
>>>>>>
>>>>>> conditions that lead to the creation of new resource state? Etc.)
>>>>>
>>>>> I am
>>>>>>
>>>>>> trying to get away from the "you could do this and you could do
>>>>>> that
>>>>>> but you also could do it that way" kinds of answers to practical
>>>>>
>>>>> REST
>>>>>>
>>>>>> design questions.
>>>>>
>>>>> I think I see what you are driving at with your distinction
>>>>> between
>>>>> session state vs resource state, but I don't think the distinction
>>>>> is so clean in REST. Here's a relevant passage from Roy's thesis
>>>>> (5.3.3):
>>>>>
>>>>> The application state is controlled and stored by the user agent
>>>>> and
>>>>> can be composed of representations from multiple servers. In
>>>>> addition to freeing the server from the scalability problems of
>>>>> storing state, this allows the user to directly manipulate the
>>>>> state
>>>>> (e.g., a Web browser's history), anticipate changes to that state
>>>>> (e.g., link maps and prefetching of representations), and jump
>>>>> from
>>>>> one application to another (e.g., bookmarks and URI-entry
>>>>> dialogs).
>>>>> [emphasis added]
>>>>>
>>>>> So at least a copy of the resource state (in the form of a
>>>>> representation of that resource) is intended to be stored and
>>>>> manipulated on the client side. For example, I believe this 5.3.3
>>>>> paragraph endorses the following scenario:
>>>>> • client receives a representation of the works of art in a
>>>>> museum
>>>>> exhibit as list of art works (name of work, name of artist) from a
>>>>> museum service on some museum server
>>>>> • the client extracts the name of an artist (a string) from
>>>>> the
>>>>> list and submits it to Wikipedia on a different set of servers
>>>>> • It takes the representation from Wikipedia and formats it
>>>>> into a
>>>>> popup for the user's UI visualization of the museum exhibit
>>>>> In this scenario the client "jumps from one application (museum
>>>>> service) to another (Wikipedia)", submitting representation data
>>>>> it
>>>>> received from the first service to the second service. Your
>>>>> description above seems to prohibit such a scenario ("to continue
>>>>> its interaction with the service on another machine, it would not
>>>>> only need to take the appropriate URIs with it ... but also other
>>>>> data elements").
>>>>>
>>>>
>>>> But in your scenario, the client could always, just by use of
>>>> the one
>>>> initial URI, pick up the interaction. And this is (IMHO) exactly
>>>> the
>>>> significance of storing the application data as resource state - so
>>>> that the client can pick it up again. If the data was not stored in
>>>> the server it would have to be stored on the client and the ability
>>>> would be lost to pick up the conversation just on the basis of
>>>> the URI.
>>>>
>>>> Jan
>>>>
>>>>
>>>>
>>>>> It appears that REST endorses the use of "application data" stored
>>>>> on the client, at least in the case where such data was at some
>>>>> point in the past received by the client as part of a
>>>>> representation
>>>>> of a server-based resource.
>>>>>
>>>>> -- Nick
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> --------------------------------------
>>>> Jan Algermissen
>>>>
>>>> Mail: algermissen@acm.org
>>>> Blog: http://algermissen.blogspot.com/
>>>> Home: http://www.jalgermissen.com
>>>> --------------------------------------
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>> --------------------------------------
>> Jan Algermissen
>>
>> Mail: algermissen@...
>> Blog: http://algermissen.blogspot.com/
>> Home: http://www.jalgermissen.com
>> --------------------------------------
>>
>>
>>
>>
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Nov 4, 2009, at 6:00 AM, wahbedahbe wrote:

> --- In rest-discuss@yahoogroups.com, Jan Algermissen
> <algermissen1971@...> wrote:
>>
>> On Oct 31, 2009, at 12:01 AM, Jan Algermissen wrote:
>>
>>> Hi,
>>>
>>> suppose the following media types do exist:
>>>
>>> - application/procurement-order for orders
>>> - application/procurement-orderrejection for order rejections
>>>
>>> also suppose the client knows there is a resource at
>>> /order-processor that accepts orders in the
>>> application/procurement-order media type.
>>>
>>> What if the client submits an order and the server wants to reject
>>> the order (maybe because the requested items are permanently out
>>> of stock)? What return code would the server use and does it make
>>> sense to send the order rejection document as the body of the
>>> (error-)response?
>>
>> I am trying to rule out the above approach by deriving from REST's
>> constraints. Here is my thinking:
>>
>> I assume (because I am not yet able to derive that from the REST
>> constraints) that there is an implicit constraint in REST that
>> demands all application data to be stored on the server.
>>
>> To put this in other words: a client must be able to perform any of
>> the next possible transitions in an application solely based on the
>> responses previously received. The client-server collaboration must
>> not be designed in a way that requires a client to keep track of
>> its own requests.
>
> Well I don't think it is necessarily the case that the response to
> every request *replaces* the application state on the client.

No, certainly not. I hope I did not create the impression I meant
that. The client builds up application state as it proceeds through
the application's state machine.

> That is, a response can be used to *add to* or otherwise modify the
> client state. This is certainly the case for ancillary resources
> like images in a web page. But I think it is also valid to do this
> after already reaching a steady state. For example, this is what is
> often happening in an Ajax application. An HTTP request can retrieve
> new data that changes the client state, but the overall client state
> is defined by a combination of data retrieved from multiple
> requests. This does present the problem of not having a single URI
> to represent the current state of the application, but many Ajax
> apps are using URI fragments to deal with this. As far as I can
> tell, this is "ok".

I am not so sure - as far as bookmarking and the history go, AJAX is
a mess and developers have to use tricks to make bookmarking and
history work. Reminds me of HTML frames...

>> Applied to my question above I think that a RESTful solution
>> demands that the server creates application data as the basis for
>> subsequent interactions and then instructs the client how to
>> proceed through the application. (This implies a solution where the
>> order is created on the server and marked as 'pending' to provide
>> the application data. The client would then alter the order (the
>> application data) to 'fix' the (item-out-of-stock-)problem.)
>
> Well I don't think it is unRESTful for a server to respond to a POST
> with a 200 OK rather than a 201 Created or a 303 See Other. This
> does create a "transient" state that the client can't return to (via
> a bookmark) or communicate by sharing the URI.

Interesting thought. It is the same though for 201 and 303 responses,
or?

> It is common practice to avoid doing this,

Hmm - is it? If the POST is simply a submission without any state
being changed or created on the server - that's fine.

> but if the state truly is transient then maybe it's the right thing?
> Anyways, I don't believe that REST requires that there are no
> transient states, which I think is what you are getting at.

Hmm, could be. Do you have a reference on the notion of 'transient
state'?

Jan

> I can't say that I have any references to back this up though -- I'm
> more going on the fact that I've never seen authoritative sources
> say otherwise.
>
> Regards,
>
> Andrew

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
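The "application state as a combination of data retrieved from multiple requests" idea discussed above can be sketched in a few lines. This is only an illustration; the `ClientState` class and the document shapes are invented for this example, and each retrieved representation *adds to* the client's state rather than replacing it:

```python
# Sketch: client-side application state built up from multiple
# responses, as in the Ajax scenario discussed above. All names and
# document shapes here are hypothetical.

class ClientState:
    """Accumulates representations retrieved from several resources."""

    def __init__(self):
        self.representations = {}

    def apply(self, uri, representation):
        """Merge one retrieved representation into the overall state."""
        self.representations[uri] = representation

# Each response modifies, rather than replaces, the client state:
state = ClientState()
state.apply("/exhibit", {"title": "Museum exhibit"})
state.apply("/exhibit/image1", {"bytes": 512})
```

Note that, as the thread points out, no single URI captures the combined state; that is exactly the bookmarking problem Ajax applications work around with URI fragments.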
Jan,
I've had some success using HTTP's own CRLF delimiter as this
partition; that is to say, leaving the entity-body "8-bit clean" for
native representations in various formats and confining the meta-model
(links, categories, attributes) to the HTTP headers.

See the OCCI core spec and HTTP interface at occi-wg.org, where I'm
also defining an XHTML rendering for a consolidated human &
programmable web interface (semweb with microformats/RDFa etc).
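A client consuming such header-confined metadata mainly needs to parse Link headers. Below is a minimal, simplified sketch of an RFC 5988-style Link header parser (it ignores edge cases such as commas inside quoted strings); the header value shown is invented for illustration and is not taken from the OCCI spec:

```python
# Hedged sketch: parse an HTTP Link header into (uri, params) pairs,
# so link metadata can live in headers while the body stays "8-bit
# clean". Simplified: does not handle commas inside quoted strings.

def parse_link_header(value):
    """Split a Link header value into a list of (uri, params) tuples."""
    links = []
    for part in value.split(","):
        segments = part.strip().split(";")
        # the target URI is wrapped in angle brackets
        uri = segments[0].strip().lstrip("<").rstrip(">")
        params = {}
        for seg in segments[1:]:
            key, _, val = seg.strip().partition("=")
            params[key] = val.strip('"')
        links.append((uri, params))
    return links

# Illustrative header value (hypothetical resources):
header = '</compute/123>; rel="self", </network/7>; rel="related"; title="uplink"'
links = parse_link_header(header)
```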
Sent from my iPhone
On 04/11/2009, at 9:17, Jan Algermissen <algermissen1971@mac.com> wrote:
>
> On Nov 4, 2009, at 3:20 AM, mike amundsen wrote:
>
>> Jan:
>>
>> As a quick follow-up on the notion of the "getting away from..."
>> issue:
>>
>> Since HTTP is a testable spec, and REST is most commonly discussed in
>> regards to HTTP, it is a short leap to consider REST-fulness to also
>> be testable; to expect there to be a "right way" to implement REST
>> over HTTP just as there is a correct way to implement the HTTP
>> application protocol.
>
> But my question is related to resource design and hypermedia design,
> not
> about how to implement HTTP.
>
> If we view linking to be part of the resource design and hypermedia
> to just be a means of serializing resource state and resource
> relationships (linking), then my question is really only about
> resource design.
>
> So, IOW, I am looking for principles that apply to the partitioning
> of the application data (resource model) and to the design of the
> relationships between the application data partitions.
>
> Probably it makes sense to ask if the application data partitioning
> may extend to the client? Is it 'valid' to put application data on
> the client that is not also maintained on the server? I think that
> is likely to be the essence of the original question.
>
> Jan
>
>
>
>> But this is not a reasonable goal. In fact, it
>> can be a hurdle to building workable applications using the style.
>>
>> mca
>> http://amundsen.com/blog/
>>
>>
>>
>>
>> On Mon, Nov 2, 2009 at 18:51, Jan Algermissen
>> <algermissen1971@...> wrote:
>>>
>>> On Nov 3, 2009, at 12:13 AM, mike amundsen wrote:
>>>
>>>> Jan:
>>>>
>>>> <snip>
>>>> I am trying to get away from the "you could do this and you could
>>>> do
>>>> that but you also could do it that way" kinds of answers to
>>>> practical
>>>> REST design questions.
>>>> </snip>
>>>>
>>>> Why do you want to get away from this?
>>>
>>> Because it does not exactly sell well if you cannot back up advice
>>> you give
>>> by principles. One of the things I admire about the Software
>>> Architecture
>>> notion laid out by Perry/Wolf, Garlan/Shaw and Roy [1] is that it
>>> creates a
>>> foundation for backing up architectural design decisions by
>>> principles. And
>>> I just get this feeling of hand waving when it comes to answering
>>> questions
>>> about 'how do I do X'. I don't mind that there are many solutions
>>> (otherwise
>>> there would not be any room for design) but (at least) I often
>>> cannot
>>> articulate a clear reasoning why and when to prefer one over the
>>> other.
>>>
>>> Take the example problem I posted. What is better for
>>> order-rejection-with-suggested-modification? To send 409 and e.g. a
>>> UBL-like
>>> order-response document (like you would in the non computerized
>>> business
>>> world) or to create an order resource anyhow, set the status to
>>> pending and
>>> then let the client update the order until it can be accepted?
>>>
>>> (I had a hunch that the latter is better than the former because it
>>> maintains application data on the server only)
>>>
>>> Likewise, several return codes have been suggested (409, 422 for
>>> example).
>>> It makes no sense to have different return codes specified when a
>>> group
>>> like this one cannot provide a spot-on 'yeah, it is that one and
>>> only that
>>> one' response.
>>>
>>> And even if there is a single best answer - how do you get
>>> interoperability
>>> if it is so easy for different people to have different opinions
>>> about it?
>>>
>>> Jan
>>>
>>>
>>> [1] Apologies for the missing names here
>>>
>>>
>>>
>>>>
>>>> mca
>>>> http://amundsen.com/blog/
>>>>
>>>>
>>>>
>>>>
>>>> On Mon, Nov 2, 2009 at 17:09, Jan Algermissen <algermissen1971@...
>>>>>
>>>> wrote:
>>>>>
>>>>> On Nov 2, 2009, at 10:52 PM, Nick Gall wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Nov 2, 2009 at 4:16 PM, Jan Algermissen <algermissen1971@...
>>>>>>>
>>>>>>> wrote:
>>>>>>
>>>>>>> My hunch is that in a RESTful system
>>>>>>>
>>>>>>> a) all application state (aka session state) is on the client
>>>>>>> b) all application data resides on the server (as resource
>>>>>>> state)
>>>>>>>
>>>>>>> The question that drives me is what the significance of resource
>>>>>>
>>>>>> state
>>>>>>>
>>>>>>> is (what is its meaning/purpose in a given design? What are the
>>>>>>
>>>>>> design
>>>>>>>
>>>>>>> conditions that lead to the creation of new resource state?
>>>>>>> Etc.)
>>>>>>
>>>>>> I am
>>>>>>>
>>>>>>> trying to get away from the "you could do this and you could do
>>>>>>> that
>>>>>>> but you also could do it that way" kinds of answers to practical
>>>>>>
>>>>>> REST
>>>>>>>
>>>>>>> design questions.
>>>>>>
>>>>>> I think I see what you are driving at with your distinction
>>>>>> between
>>>>>> session state vs resource state, but I don't think the
>>>>>> distinction
>>>>>> is so clean in REST. Here's a relevant passage from Roy's thesis
>>>>>> (5.3.3):
>>>>>>
>>>>>> The application state is controlled and stored by the user agent
>>>>>> and
>>>>>> can be composed of representations from multiple servers. In
>>>>>> addition to freeing the server from the scalability problems of
>>>>>> storing state, this allows the user to directly manipulate the
>>>>>> state
>>>>>> (e.g., a Web browser's history), anticipate changes to that state
>>>>>> (e.g., link maps and prefetching of representations), and jump
>>>>>> from
>>>>>> one application to another (e.g., bookmarks and URI-entry
>>>>>> dialogs).
>>>>>> [emphasis added]
>>>>>>
>>>>>> So at least a copy of the resource state (in the form of a
>>>>>> representation of that resource) is intended to be stored and
>>>>>> manipulated on the client side. For example, I believe this 5.3.3
>>>>>> paragraph endorses the following scenario:
>>>>>> • client receives a representation of the works of art in a
>>>>>> museum
>>>>>> exhibit as list of art works (name of work, name of artist)
>>>>>> from a
>>>>>> museum service on some museum server
>>>>>> • the client extracts the name of an artist (a string) from
>>>>>> the
>>>>>> list and submits it to Wikipedia on a different set of servers
>>>>>> • It takes the representation from Wikipedia and formats it
>>>>>> into a
>>>>>> popup for the user's UI visualization of the museum exhibit
>>>>>> In this scenario the client "jumps from one application (museum
>>>>>> service) to another (Wikipedia)", submitting representation data
>>>>>> it
>>>>>> received from the first service to the second service. Your
>>>>>> description above seems to prohibit such a scenario ("to continue
>>>>>> its interaction with the service on another machine, it would not
>>>>>> only need to take the appropriate URIs with it ... but also other
>>>>>> data elements").
>>>>>>
>>>>>
>>>>> But in your scenario, the client could always, just by use of
>>>>> the one
>>>>> initial URI, pick up the interaction. And this is (IMHO) exactly
>>>>> the
>>>>> significance of storing the application data as resource state -
>>>>> so
>>>>> that the client can pick it up again. If the data was not stored
>>>>> in
>>>>> the server it would have to be stored on the client and the
>>>>> ability
>>>>> would be lost to pick up the conversation just on the basis of
>>>>> the URI.
>>>>>
>>>>> Jan
>>>>>
>>>>>
>>>>>
>>>>>> It appears that REST endorses the use of "application data"
>>>>>> stored
>>>>>> on the client, at least in the case where such data was at some
>>>>>> point in the past received by the client as part of a
>>>>>> representation
>>>>>> of a server-based resource.
>>>>>>
>>>>>> -- Nick
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> --------------------------------------
>>>>> Jan Algermissen
>>>>>
>>>>> Mail: algermissen@...
>>>>> Blog: http://algermissen.blogspot.com/
>>>>> Home: http://www.jalgermissen.com
>>>>> --------------------------------------
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>> --------------------------------------
>>> Jan Algermissen
>>>
>>> Mail: algermissen@...
>>> Blog: http://algermissen.blogspot.com/
>>> Home: http://www.jalgermissen.com
>>> --------------------------------------
>>>
>>>
>>>
>>>
>>
>>
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
>
>
>
>
>
On Fri, Oct 23, 2009 at 2:05 PM, Roy T. Fielding <fielding@...> wrote:

> On Oct 23, 2009, at 10:28 AM, Noah Campbell wrote:
>
>> I'm looking for additional references for architectural properties
>> found in section 2.3.4 of Roy's paper? I was curious how Roy came
>> up with his list. I've never done a dissertation so if I'm parsing
>> the paper incorrectly, please let me know.
>
> There wasn't any one reference. There are a lot of references in the
> references list, some of which define what I called a property.
> Usually these are defined in the literature as software qualities
> or system properties.
>
> You might want to check the new book on Software Architecture by
> Taylor (my dissertation committee chair), Medvidovic, and Dashovy:
>
> http://www.softwarearchitecturebook.com/
> http://www.amazon.com/dp/0470167742
>
> though I don't know if they used the same terminology as my diss.
> I am still waiting for my free copy. ;-)

Sadly, they did not. They essentially lump it all into "Adaptability"
with some discussion points that aren't of sufficient granularity to
evaluate in the same way that the framework in the dissertation does.
There is a chapter dedicated to Adaptability that discusses "Styles
that Support Adaptation" but they don't even mention the REST style.

I actually liked Roy's characterization of these things as "desired
properties" better too. The book considers them NFPs, where an NFP is
defined as, "a non-functional property (NFP) of a software system is
a constraint on the manner in which the system implements and
delivers its functionality." Real architectural constraints evoke
these properties but I'm unconvinced that the properties are,
themselves, constraints.

Anyway, Roy, if you're ever near D.C., you can borrow my copy...

--tim
Hi-
Occasionally, demand is heard for service descriptions in
RESTful systems. Clearly, such descriptions are contrary to REST's
evolvability goals because any prescriptive information about the
server limits its desired evolvability.
While this is fine when the client has human or human-like
capabilities to mediate between the overall client's intentions (e.g.
'buying a book') and the actual runtime-discovered state machine of
the application, problems do arise when the communication model is
applied to machine-to-machine interactions, especially when REST is
applied in an enterprise context where budgets and legal issues are
at stake.
The problem is that an overall goal like 'buying a book' must assume
the availability of certain media types, extensions, links, etc. and
certain available state transitions at certain points in the overall
interaction. Without such assumptions it would be impossible to come
up with client side code that performs the overall goal.
With human clients the situation is actually quite the same but less
visible because the human user is sort of in permanent browsing mode,
walking the Web, discovering new things, etc. When underlying
expectations fail (Ooops, where did that 1-Click link go?)
compensating action can be taken. For example: call the hotline or
change the online shop.
One can put some amount of flexibility into a machine client, but it
is inevitable that at some point assumptions end up in a place (source
code, configuration) that will cause runtime failure when the
assumptions about the server turn out to be wrong.
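To make concrete where such assumptions end up, here is a minimal sketch of a hypermedia client that starts from one entry document and discovers the search transition by link relation. The rel name "search", the document shape, and the exception class are all invented for this example; the point is that the expectation "a search link will be announced" lives in client code and fails at runtime when the server stops meeting it:

```python
# Sketch: where a machine client's assumptions about the server live.
# All names and document shapes here are hypothetical.

class AssumptionViolated(Exception):
    """Raised when a transition the client was built to expect is gone."""

def find_link(document, rel):
    """Return the href of the first link with the given relation."""
    for link in document.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    # The assumption 'the server announces a search link' lives right
    # here, in source code -- exactly the coupling discussed above.
    raise AssumptionViolated("no link with rel=%r in current response" % rel)

# While the server keeps announcing the link, the client works:
entry = {"links": [{"rel": "search", "href": "/search"}]}
search_uri = find_link(entry, "search")
```

If the server one day omits the link, `find_link` can only fail; as the post argues, software (unlike a human user) cannot improvise a compensating action.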
This is not so much of a problem in contexts, where budget and legal
issues are of minor importance (e.g. when I do some fun-coding to
interact with Amazon's APIs) but as soon as you pay serious amounts
for the use of an API or when liability issues are involved things are
different.
What is necessary I think is a way to describe services that meets the
following objectives:
1. **Only** describe/define those aspects of the provided
service that constitute an inevitable coupling of
client and server anyway. For example: if a client
is programmed to use a search, the service provider
must assure that the search will remain available
during the agreed upon service lifetime.
2. Use a means of description that is formal enough to
serve as the basis for legal contracts (e.g. SLAs).
3. Make that means of description standard/mainstream
enough to avoid that everybody is forced to reinvent
the wheel[1]
These are the aspects I think should be addressed in such definitions:
1. General client obligations
This section would cover the base set of media types,
link relations, etc. to be understood by the client.
2. General server commitments
The description can be simplified and reduced if the
server makes general commitments such as 'resources
that are known by the client to be collections of items
will always at least be available as application/atom+xml'.
3. General availability of certain state transitions
A search resource for example can be defined to be available
to the client independent of the current application state.
IOW, the search resource will always be 'announced' as
part of the initial service documents or, if none are used
it would mean that the search would be available to the
client from any received response (e.g. via Link header)
4. Availability of certain goals in certain application states
If a client does a search in an online shop there is an
expectation of being able to place an order afterwards.
Such an expectation would be backed up by this section. A
way to view this is as a dependency tree of goals (after
item search can come purchase).
5. Availability of certain media types, link relations, extensions
There must be a means for the server to tell the
client which media types are available[2] and also a
commitment that these (or at least one) will remain in use
for a defined period of time.
[1] This is AFAIU the major concern articulated by Steve Jones in the
2nd paragraph of
<http://tech.groups.yahoo.com/group/service-orientated-architecture/message/13909>
(a discussion that is the origin of this post, because he makes some
valid points there).
[2] Sure, the client could determine this at design time by executing
the service, but on the one hand, this does not provide the designer
with a guaranteed exhaustive set of options (you never know what you
did not see) and on the other hand this approach is not likely to be
helpful when proposing a project to whoever assigns you the budget.
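One hedged way to picture the five aspects above is as a machine-readable (and lawyer-readable) description document. Everything here is invented for illustration; no such format exists, and the field names are simply one possible encoding of the aspects listed above:

```python
# Hypothetical sketch of a service description covering the five
# aspects above. All field names and values are invented.

service_description = {
    # 1. general client obligations
    "client_obligations": {
        "media_types": ["application/atom+xml"],
        "link_relations": ["search", "edit"],
    },
    # 2. general server commitments
    "server_commitments": {
        "collections_available_as": "application/atom+xml",
    },
    # 3. transitions available regardless of application state
    "always_available_transitions": ["search"],
    # 4. goal dependencies (after item search can come purchase)
    "goal_dependencies": {"purchase": ["item-search"]},
    # 5. lifetime commitment for media types in use
    "media_type_guarantee_days": 365,
}

def committed(description, transition):
    """True if the provider is bound to keep this transition available."""
    return transition in description["always_available_transitions"]
```

Such a document would not replace prose contracts, but it gives both the client developer and the service owner a single source stating which couplings are licensed.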
Jan
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
> What is necessary I think is a way to describe services that meets
> the following objectives:
>
> 1. **Only** describe/define those aspects of the provided
>    service that constitute an inevitable coupling of
>    client and server anyway. For example: if a client
>    is programmed to use a search, the service provider
>    must assure that the search will remain available
>    during the agreed upon service lifetime.
>
> 2. Use a means of description that is formal enough to
>    serve as the basis for legal contracts (e.g. SLAs).
>
> 3. Make that means of description standard/mainstream
>    enough to avoid that everybody is forced to reinvent
>    the wheel[1]

Prose meets all these objectives. Plus it is also interoperable
between all parties involved.

Subbu
--- In rest-discuss@yahoogroups.com, Jan Algermissen
<algermissen1971@...> wrote:
>
> Hi-
>
> Occasionally demand is heard for service descriptions for services
> in RESTful systems. [...]
>
> This is not so much of a problem in contexts where budget and legal
> issues are of minor importance (e.g. when I do some fun-coding to
> interact with Amazon's APIs) but as soon as you pay serious amounts
> for the use of an API or when liability issues are involved things
> are different.

There are a lot of assumptions here about the limitations of
machine-to-machine RESTful interaction. I have never seen any proof
of these limitations. It just seems that because the general REST
community can't figure out how to design good media types for
machine-to-machine interaction, the consensus is that it isn't
possible.

The thing is that I work in an industry where a media type for
machine-to-machine, RESTful interaction has been available for years.
Many client implementations are commercially available and all sorts
of applications have been written by 3rd parties *after* the media
type was designed and the clients were built and sold. The clients
are completely decoupled from the services because the services
didn't exist when the clients were written. That is the way the web
is; that is how REST is supposed to work.

The media type I'm talking about is CCXML: http://www.w3.org/TR/ccxml/

The industry I'm talking about is telecom -- where 5 9s are expected,
otherwise you get sued for millions. So I have a hard time buying
into these sorts of arguments when it seems that I've been working
with a counter example for years.

I've been encouraging folks on this list to look at CCXML for a long
time, but as far as I know I haven't convinced anyone to spend the
time. Hey -- maybe CCXML isn't RESTful after all; it certainly has a
different flavor than most of the media types coming from the REST
community. I'd love to get all of your feedback; maybe I'm missing
something. But if it isn't, then perhaps the as yet to be formally
defined style behind CCXML is a good alternative to REST for
machine-to-machine interaction. It certainly seems to have many of
the properties most readers of this list are looking for.

Anyways, I encourage you to take a look at CCXML. I think it will be
worth your time, and I think that the discussions that might ensue
will be very valuable.

Regards,
Andrew
On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote:

> There are a lot of assumptions here about the limitations of machine
> to machine RESTful interaction. I have never seen any proof of these
> limitations. It just seems that because the general REST community
> can't figure out how to design good media types for machine to
> machine interaction, the consensus is that it isn't possible.

I did not say that at all. Of course machine-to-machine RESTful
systems are possible (e.g. AtomPub). I was talking about the
inevitable assumptions the client has to make, which manifest
themselves in code or configuration, and about how to document these
such that

- no unnecessary coupling is created by the documentation
  (as is done by many of the documentations of the so-called REST
  interfaces on the Web)
- the style of documentation will be accepted by e.g.
  the legal department

Jan
On Nov 5, 2009, at 4:31 PM, Subbu Allamaraju wrote:

>> What is necessary I think is a way to describe services that meets
>> the following objectives:
>>
>> 1. **Only** describe/define those aspects of the provided
>>    service that constitute an inevitable coupling of
>>    client and server anyway. For example: if a client
>>    is programmed to use a search, the service provider
>>    must assure that the search will remain available
>>    during the agreed upon service lifetime.
>>
>> 2. Use a means of description that is formal enough to
>>    serve as the basis for legal contracts (e.g. SLAs).
>>
>> 3. Make that means of description standard/mainstream
>>    enough to avoid that everybody is forced to reinvent
>>    the wheel[1]
>
> Prose meets all these objectives. Plus it is also interoperable
> between all parties involved.

Sure, prose is appropriate (I never suggested otherwise). But just
saying 'use prose' does not address 3. above. This was one of the
points made by Steve Jones: given that WS-* has a rigid interface
description language 'built in', while the use of REST means you have
to come up with a means of describing interfaces yourself, this is a
huge selling point for WS-*. Not that having such things as WSDL and
BPEL is desirable from a networked systems point of view, but when a
non-technical person has to make a business decision about WS-* vs
REST, the latter is more likely to 'lose' if you first have to come
up with a style for documenting the interfaces. And even worse would
be the position of 'Nah, you don't need that. Just let everyone write
it up'.

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote:

> I've been encouraging folks on this list to look at CCXML for a long
> time, but as far as I know I haven't convinced anyone to spend the
> time. Hey -- maybe CCXML isn't RESTful after all; it certainly has a
> different flavor than most of the media types coming from the REST
> community. I'd love to get all of your feedback; maybe I'm missing
> something. But if it isn't then perhaps the as yet to be formally
> defined style behind CCXML is a good alternative to REST for machine
> to machine interaction. It certainly seems to have many of the
> properties most readers of this list are looking for.

From the TR: "A CCXML session begins with the execution of a CCXML
document."

Now, my understanding might be wrong because I did not have the time
to put my head into the spec, but the above quote sounds a lot like
the coordination between components in CCXML is achieved by passing
code (executable documents) around. To be RESTful, the coordination
should be achieved by passing representations of state around.

Can you provide us with an example of a typical interaction?

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Fri, Nov 6, 2009 at 2:42 AM, Jan Algermissen
<algermissen1971@...> wrote:

> From the TR: "A CCXML session begins with the execution of a CCXML
> document."
>
> Now, my understanding might be wrong because I did not have the time
> to put my head into the spec, but the above quote sounds a lot like
> the coordination between components in CCXML is achieved by passing
> code (executable documents) around. To be RESTful the coordination
> should be achieved by passing representations of state around.

Passing executable code is one of the features of REST:
5.1.7 Code-On-Demand

That doesn't prove CCXML to be RESTful, but doesn't rule it out,
either.
On Nov 6, 2009, at 1:12 PM, Bob Haugen wrote:

> On Fri, Nov 6, 2009 at 2:42 AM, Jan Algermissen
> <algermissen1971@...> wrote:
>> From the TR: "A CCXML session begins with the execution of a CCXML
>> document."
>>
>> [...] the above quote sounds a lot like the coordination between
>> components in CCXML is achieved by passing code (executable
>> documents) around. To be RESTful the coordination should be
>> achieved by passing representations of state around.
>
> Passing executable code is one of the features of REST:
> 5.1.7 Code-On-Demand

Yes, thought about that, too.

> That doesn't prove CCXML to be RESTful, but doesn't rule it out,
> either.

Right. But it seems to be the primary coordination means of CCXML,
and I guess that would make it 'non-REST' (given that my glimpses at
the spec are enough).

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Hello.

Eoin Woods and Nick Rozanski's book about Software Systems
Architecture sort of defines them as the result of stakeholder
concerns that are not satisfied by a particular function. And they
define "perspectives" as tool chests to help you work with those
quality properties. Here you can see the list of quality properties
for which they offer perspectives:

http://www.viewpoints-and-perspectives.info/index.php?page=persp-intro

But, anyway, there is no standard list of properties. Actually, some
of them are mixed with other ones; for instance, a security concern
may be to avoid DoS attacks, which is also helpful for availability.
It depends on the author.

William Martinez.

--- In rest-discuss@yahoogroups.com, Tim Williams <williamstw@...>
wrote:
>
> On Fri, Oct 23, 2009 at 2:05 PM, Roy T. Fielding <fielding@...>
> wrote:
> > [...]
>
> Sadly, they did not. They essentially lump it all into
> "Adaptability" with some discussion points that aren't of sufficient
> granularity to evaluate in the same way that the framework in the
> dissertation does. There is a chapter dedicated to Adaptability that
> discusses "Styles that Support Adaptation" but they don't even
> mention the REST style.
>
> I actually liked Roy's characterization of these things as "desired
> properties" better too. The book considers them NFPs, where an NFP
> is defined as, "a non-functional property (NFP) of a software system
> is a constraint on the manner in which the system implements and
> delivers its functionality." Real architectural constraints evoke
> these properties but I'm unconvinced that the properties are,
> themselves, constraints.
>
> Anyway, Roy, if you're ever near D.C., you can borrow my copy...
>
> --tim
--- In rest-discuss@yahoogroups.com, Jan Algermissen
<algermissen1971@...> wrote:
>
> On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote:
>
>> There are a lot of assumptions here about the limitations of
>> machine to machine RESTful interaction. [...]
>
> I did not say that at all. Of course machine to machine RESTful
> systems are possible (e.g. AtomPub). I was talking about inevitable
> assumptions the client has to make that manifest itself in code or
> configuration and how to document these such that
>
> - no unnecessary coupling is created by the documentation
>   (as is by many of the documentations of the so called REST
>   interfaces on the Web)
> - the style of documentation will be accepted by e.g.
>   the legal department
>
> Jan

Right, but it's the assumption that these out-of-band contracts are
needed that I'm questioning. They don't seem to be needed in the
human web. Sure, you may need to document the representation formats
-- the media type and the extensions and rels necessary to use the
service. But it's things like guarantees about what "kinds of state
transitions" are available that I question. I'm not sure that the
"inevitable assumptions" you refer to are necessary. Maybe I'm
reading too much into what you mean by that, but it seems to be more
than what normally constitutes the uniform interface. Maybe I need
more clarification on what you think would be in this contract.

Regards,
Andrew
On Nov 6, 2009, at 3:36 PM, wahbedahbe wrote:

> --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
>> On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote:
>>
>>> There are a lot of assumptions here about the limitations of machine to machine RESTful interaction. I have never seen any proof of these limitations. It just seems that because the general REST community can't figure out how to design good media types for machine to machine interaction, the consensus is that it isn't possible.
>>
>> I did not say that at all. Of course machine to machine RESTful systems are possible (e.g. AtomPub). I was talking about inevitable assumptions the client has to make that manifest themselves in code or configuration, and how to document these such that
>>
>> - no unnecessary coupling is created by the documentation (as is by many of the documentations of the so-called REST interfaces on the Web)
>> - the style of documentation will be accepted by e.g. the legal department
>>
>> Jan
>
> Right, but it's the assumption that these out-of-band contracts are needed that I'm questioning.

Suppose you are coding a client for a service that lets you search stuff and then do something with it (e.g. update). Your client code will inevitably contain the 'invocation' of the search (e.g. a GET request to the search resource). And this is based on the expectation that the search resource will be there (== being discoverable). If the service does not provide the search resource anymore, the client will break (if you code it to expect that there is a search resource and suddenly there is none, what else could the code do?). Humans can work on a solution for the problem; software cannot (unless we go into AI of some form).

Technically, this is inevitable and no other, more specialized interface will help you, because if a Web service does not implement some service.search() anymore, the SOAP call will also fail and no WSDL will prevent that. This is just the nature of binding the components at runtime and not at compile time (as you would in a non-networked application).

The problem is at the business level, though, because the WSDL specifies a contract that defines the search method to be there, and if your SOAP call fails, you can take the WSDL and the stack trace, run to your service provider and say: "Where's that method you *promised*?".

With REST, there is (deliberately) no such contract, and the client's expectation that there will be a search resource is based on observation and trust and on some cloud-level knowledge about the overall kind of the interaction.

From the service owner perspective it is also an interesting question how a developer would know whether he could take away the search resource. After all, there is no contract to look at that would make clear what the client expectations really are. IOW, if you are in charge of evolving the service, you should have a pretty clear source that tells you what you can change and what you cannot change. This is rather easy on the media type level, but it is also the combination of hypermedia semantics in use that matters. "Are my clients 'licensed' to assume the presence of that Atom extension or are they not? Well, we never told them we'd never take it away, so we can drop it at any time, right?"

One approach to all this is probably to simply state that a service will never evolve in an incompatible way (e.g. "we'll never remove anything") and if it has to be incompatible, there'll just be a new service.

Now, I am not trying to be enterprisey and ride the 'oh, inside the enetrpsi there are the hard problems' horse, but 'follow your nose' just does not provide the specifics that managers and lawyers (usually rightly so) demand.

What is particularly interesting is that, IMHO, in the usual attempt to escape this situation there is the danger of far too much coupling being put into the descriptive documents and many of REST's advantages lost (see interface docs, for example, that list the URIs of the resources to use, what formats to expect and which HTTP return codes -- and what they mean(!) -- in the service context).

So, balance is really important.

Jan

> They don't seem to be needed in the human web. Sure you may need to document the representation formats -- the media type and the extensions and rels necessary to use the service. But its things like guarantees about what "kinds of state transitions" are available that I question. I'm not sure that the "inevitable assumptions" you refer to are necessary. Maybe I'm reading too much into what you mean by that though, but it seems to be more that what normally constitutes the uniform interface. Maybe I need more clarification on what you think would be in this contract.
>
> Regards,
>
> Andrew

> ------------------------------------
>
> Yahoo! Groups Links

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
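[Editor's note: Jan's search-resource example is easy to make concrete. The sketch below uses an invented JSON link format and invented function names (nothing here comes from the thread); it shows the one assumption that inevitably lands in client code -- that a 'search' link will remain discoverable.]

```python
import json

def find_link(document, rel):
    """Return the href of the first link with the given relation, or None.
    Assumes a hypothetical format where representations carry a "links"
    array of {"rel": ..., "href": ...} objects."""
    for link in document.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

def search_uri(entry_document, terms):
    """Build the search request URI from the entry document, failing
    loudly if the service no longer advertises a search resource."""
    href = find_link(entry_document, "search")
    if href is None:
        # This is Jan's point: the client can *detect* the missing
        # transition, but it cannot invent a replacement for it.
        raise RuntimeError("service no longer offers a 'search' link")
    return href + "?q=" + terms

entry = json.loads(
    '{"links": [{"rel": "search", "href": "http://example.org/search"}]}')
```

A client written this way is coupled only to the media type and the 'search' relation, not to any URI structure; but, as Jan argues, the expectation that the relation stays available still has to be documented somewhere out of band.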
On Nov 6, 2009, at 4:58 PM, Jan Algermissen wrote:

> enetrpsi

WTF???

'enetrpsi' => 'enterprise' of course :-)

Jan

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Thu, Nov 5, 2009 at 5:16 AM, Jan Algermissen <algermissen1971@...> wrote:

> Occasionally demand is heard for service descriptions for services in RESTful systems. Clearly, such descriptions are contrary to REST's evolvability goals, because any prescriptive information about the server limits its desired evolvability.

I'd reframe this and say such descriptions "snapshot" the state of a service's implementation/API (version it, if you will). In theory, the service can still evolve from this base, as long as it doesn't violate the contract as is. So, yes, technically it limits evolvability (you can't go "backwards", for example), but I don't think it necessarily kills any change whatsoever.

> One can put some amount of flexibility into a machine client, but it is inevitable that at some point assumptions end up in a place (source code, configuration) that will cause runtime failure when the assumptions about the server turn out to be wrong.

I think all clients are implicitly rigid. A flexible client is one that is compensating for an imprecise specification. It's hard to imagine any client able to innately adopt new functionality on the fly -- I should say, able to adopt any new functionality that the client isn't aware of on the fly. It's possible that a client can leverage functionality newly provided by the service, but that's driven by the service surfacing functions that the client is already aware of. And I won't explore client "plugins" or anything of that nature; it's not germane.

> This is not so much of a problem in contexts where budget and legal issues are of minor importance (e.g. when I do some fun-coding to interact with Amazon's APIs), but as soon as you pay serious amounts for the use of an API, or when liability issues are involved, things are different.
>
> What is necessary, I think, is a way to describe services that meets the following objectives:
>
> 1.
> **Only** describe/define those aspects of the provided service that constitute an inevitable coupling of client and server anyway. For example: if a client is programmed to use a search, the service provider must assure that the search will remain available during the agreed-upon service lifetime.
>
> 2. Use a means of description that is formal enough to serve as the basis for legal contracts (e.g. SLAs).
>
> 3. Make that means of description standard/mainstream enough to avoid that everybody is forced to reinvent the wheel[1]

Seems to me that Steve was really banging on the "machine enforceability" of the contract that you're talking about (thus his continued references to WS-* and BPEL -- I'm not familiar enough with BPEL to know how it can be as specific as what he was looking for in all of the scenarios he presented). Everything else is lawyers and weasel words, regardless of the format of the specification, whether it's RFC format, post-it notes, napkin drawings, or what. Having recently been doing work with the IHE Technical Specifications (which are robust, but imperfect, and range from crystal clear to dark as mud), my favorite attribute is the fact that all of the lines of the specification are numbered (every 5 lines there's a notation in the margin), which makes pointing out answers to specification questions very easy (see doc XYZ, line 123).

> These are the aspects I think should be addressed in such definitions:
>
> 1. General client obligations
>    This section would cover the base set of media types, link relations, etc. to be understood by the client.
>
> 2. General server commitments
>    The description can be simplified and reduced if the server makes general commitments such as 'resources that are known by the client to be collections of items will always at least be available as application/atom+xml'.
>
> 3.
> General availability of certain state transitions
>    A search resource, for example, can be defined to be available to the client independent of the current application state. IOW, the search resource will always be 'announced' as part of the initial service documents or, if none are used, it would mean that the search would be available to the client from any received response (e.g. via a Link header).
>
> 4. Availability of certain goals in certain application states
>    If a client does a search in an online shop, there is an expectation of being able to place an order afterwards. Such an expectation would be backed up by this section. A way to view this is as a dependency tree of goals (after item search can come purchase).

"General" here is the killer word. There should be nothing general about it. These should all be specific, and documented. There are no "assumptions". That's likely the complaint. For example, look at all the assumptions that surround REST in the first place. "Oh, it's just HTTP, I know HTTP..." and you get... POX over HTTP or something else.

Something like "Search" can be a documented entry point into the system. Otherwise, it's a URI provided in payloads that the client can follow. The specification may well be that a client must hit the EntryPoint resource for the overall system, and follow the "Search" link relation if they want to search.

"Availability of Certain Goals" would be, IMHO, entry points in the system. That is, URIs that are specifically documented and SLA'd to be "always available". But if someone wants to order after a search, then there is likely a defined link in the payload to create such an order from the search results. Otherwise, the client can simply POST to the /order resource with the proper payload, as documented in the specification as an entry point -- a point that can be hit directly, as documented, rather than followed.
To clarify what I mean by that, there has always been a discussion as to the lifespan of a URI. It's easy to argue that the link with the "next" rel on a search result is likely to be a pretty temporal link. The link itself can easily have a very limited lifespan, especially the expectation that it returns anything meaningful. This is because there is little intent that a client would persist this link long term. If you wanted to go to the next group of a search result, you should follow the next-rel link.

But an external entry point is one that is likely to be "hard coded" into a client, is likely to be templated, and not necessarily opaque. Because clients have to have some way to get started, and ideally they don't all have to start at the "home page" and follow rels for every transaction.

> 5. Availability of certain media types, link relations, extensions
>    There must be a means for the server to tell the client which media types are available[2] and also a commitment that these (or at least one) will remain in use for a defined period of time.

This is all part of the SLA. I don't know if a server needs to "publish" this information. I mean, you can have a /sla resource that defines all of these things, I suppose. Does it need to be machine interpretable?

But here's the nut. The conflict is that with a REST system, the application should not be driven by out-of-band information. It should not be making assumptions; it should be working with the system as the service exposes it to the client via request results from the pre-defined entry points. At an extreme view, that's what a specification is: out-of-band information. Out-of-band information such as entry points and link relations. As more and more standardized media types and relations are defined, and used, in theory, the "less" documentation and specification is necessary, other than "Oh, go see spec XYZ, we follow that."
But then you get back to those assumptions about implementing the spec properly. Obviously SOME out-of-band information is necessary. The client needs to know what the relationships mean so it can parse the request properly to find links to actions.

Consider an IRS income tax form. An IRS tax form is a "reasonably" documented form, with often descriptive rule summaries on the fields. However, these field annotations are backed up by the "Form Instructions", which go into more detail about the form. Finally, those instructions are backed by actual IRS law and procedure, which is likely unusable by the layman.

So, of those three components -- the Form (or in our case, the XSD of the datatype), the Instructions, and the Law -- which level of documentation are we talking about here? Is the Law the SLA, the Instructions the spec that gets sent to the developers by the BSA, and the Form the structure the application uses to find where to put data and links, an XSD for example?

Regards,

Will Hartung
(willh@...)
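[Editor's note: purely as an illustration, the five aspects Jan enumerates, plus the commitment period Will mentions, could be collapsed into a small machine-readable description. Every key name below is invented for this sketch; it is not a proposed standard, just one possible shape for such a document.]

```python
# Hypothetical service description covering Jan's aspects 1-5.
service_description = {
    # 1. General client obligations
    "client_must_understand": {
        "media_types": ["application/atom+xml"],
        "link_relations": ["search", "next", "edit"],
    },
    # 2. General server commitments
    "collections_always_available_as": "application/atom+xml",
    # 3. State transitions available from any application state
    "always_available_transitions": ["search"],
    # 4. Goal dependencies (after a search, an order can be placed)
    "goal_dependencies": {"search": ["purchase"]},
    # 5. How long the media-type commitments hold (the SLA period)
    "commitments_guaranteed_until": "2010-12-31",
}

def client_is_compatible(description, supported_media_types):
    """A client minimally satisfies its obligations if it understands
    every media type the description requires of it."""
    required = set(description["client_must_understand"]["media_types"])
    return required <= set(supported_media_types)
```

Whether such a document should itself be served as a resource (Will's hypothetical /sla resource) or live purely in the legal contract is exactly the open question of the thread.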
> Sure prose is appropriate (never suggested otherwise). But just saying 'use prose' does not address 3. above. This was one of the points made by Steve Jones: given that WS-* has a rigid interface description language 'built in' while the use of REST means you have to come up with a means of describing interfaces yourself is a huge selling point for WS-*. Not that having such things as WSDL and BPEL is desirable from a networked systems point of view, but when a non-technical person has to make a business decision about WS-* vs REST, the latter is more likely to 'lose' if you first have to come up with a style for documenting the interfaces. And even worse would be the position of 'Nah, you don't need that. Just let everyone write it up'.

Sorry, but I think it is a flawed assertion to say that "WS-* has a rigid interface description language 'built in'" which somehow makes it better. Just because it has been sold like that does not make it true. I would love to see a developer who wrote a client application by just looking at the WSDL, or a business person who was not asked a single clarification about how some operation or a field is supposed to work.

Subbu
On Fri, Nov 6, 2009 at 10:58 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Nov 6, 2009, at 3:36 PM, wahbedahbe wrote: > >> --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> >> wrote: >>> >>> >>> On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote: >>> >>>> There are a lot of assumptions here about the limitations of machine >>>> to machine RESTful interaction. I have never seen any proof of these >>>> limitations. It just seems that because the general REST community >>>> can't figure out how to design good media types for machine to >>>> machine interaction, the consensus is that it isn't possible. >>> >>> I did not say that at all. Of course machine to machine RESTful >>> systems are possible (e.g. AtomPub). I was talking about inevitable >>> assumptions the client has to make that manifest itself in code or >>> configuration of and how to document these such that >>> >>> - no unnecessary coupling is created by the documentation >>> (as is by many of the documentations of the so called REST >>> interfaces on the Web) >>> - the style of documentation will be accepted by e.g. >>> the legal department >>> >>> Jan >>> >> >> Right, but it's the assumption that these out-of-band contracts are needed >> is what I'm questioning. > > Suppose you are coding a client for service that lets you search stuff and > then do something with it (e.g. update). Your client code will inevitably > contain the 'invocation' of the search (e.g. GET request to search > resource). And this is based on the expectation that the search resource > will be there (== being discoverable). Ok so right out of the gate I have issues with this. "Coding a client for a service" seems unRESTful to me. Firefox is not coded for Google, Facebook or Amazon. It is coded for URIs, HTTP and HTML (and HTML's "friends" CSS, Javascript etc.). Your client will only have code that has a notion of search invocation if it is inherent in the media type (including extensions and relations). 
But that is not necessary for search to work. The client could be performing a search without "knowing" it, because a combination of the media type, the current representation/state, and the client disposition and/or client side events have caused the search link to be followed. This is what happens when I use my browser to do a Google search. I don't see why this can't be the case for media types other than HTML (and it certainly would be for VoiceXML and CCXML). > If the service does not provide the > search resource anymore the client will break (if you code it to expect that > there is a search resource and sudden;y there is none what else could the > code do?). Humans can work on a solution for the problem, software cannot > (unless we go into AI of some form). I'm not sure what is "broken" here. The service seems broken, but I'm not sure if the client is broken -- you should be able to point it at any other service that supports the client's media type(s) and it should still work just fine. Or is the service just changed so that the search step is not required anymore? If the service is still accomplishing a useful goal within the bounds of the client's media type, things still seem ok. If the service is now not doing something useful -- well that is a service implementation issue. If it's still working within the bounds of the media type then it is not a technical issue with the contract between client and service _software_. It is more of an issue between the operator of the client and the operator of the service -- maybe I'm splitting hairs here but it seems to be different than what you are describing. And more importantly doesn't seem to have any differences in the machine-to-machine vs. human-to-machine contexts. i.e. if Amazon stopped selling books, it would trip up a human too. The solution -- go to a different start URI -- applies in both contexts as well. 
And Firefox is not going to figure out what alternate URI to use, just as a machine-driven client will not.

If the service started spitting back representations that did not conform to the client's media type, then I think you can say that the software contract is broken (especially if the client is setting its Accept headers properly). But this again equally trips up machine-driven and human-driven client software.

> Technically, this is inevitable and no other, more specialized interface will help you, because if a Web service does not implement some service.search() anymore the SOAP call will also fail and no WSDL will prevent that. This is just the nature of binding the components at runtime and not at compile time (as you would in a non-networked application).
>
> The problem is at the business level though because the WSDL specifies a contract that defines the search method to be there and if you SOAP call fails, you can take the WSDL and the stack trace, run to your service provider and say: "Where's that method you *promised*?").

Right. So in REST, the software contract is more "client specific", i.e. the client supports a known media type and the service targets that media type and all clients that support it. So the "missing method" equivalent would be somehow not conforming to the media type supported by the client. And if search capability was something that had to be in every document of that media type, then you get the same sort of contract. But often that is not the case (it isn't the case with HTML and lots of other media types anyway).

Instead the service is publishing a URI and saying "somewhere behind this URI, search is going to happen". But that is a contract between the client operator and the service operator about the semantics of the service, not the interface. The WSDL equivalent might be the case where the search method is there but it doesn't provide search semantics (e.g.
it always spits back the same results no matter what the search terms are). > With REST, there is (deliberately) no such contract and the client's > expectation that there will be a search resource is based on observation and > trust and on some cloud-level based knowledge about the overall kind of the > interaction. > > From the service owner perspective it is also an interesting question, how a > developer would know if he could take away the search resource. After all, > there is no contract to look at that would make clear what the client > expectations really are. IOW, if you are in charge of evolving the service, > you should have a pretty clear source that tells you what you can change and > what you cannot change. This is rather easy on the media type level but it > is also the combination of hypermedia sematics in use that matter. "Are my > clients 'licensed' to assume the presence of that Atom extension or are they > not? Well, we never told them we'd never take it away so we can drop it at > any time, right?" > > One approach to all this is probably to simply state that a service will > never evolve in an incompatible way (e.g. "we'll never remove anything") and > if it has to be incompatible, there'll just be a new service. > > Now, I am not trying to be enterprisey and ride the 'oh, inside the enetrpsi > there are the hard problems' horse, but 'follow your nose' just does not > provide the specifics that managers and lawyers (usually rightly so) demand. > > What is particulary interesting is that IMHO there is the danger of, in the > usual attempt to escape this situation, there is far too much coupling put > into the descriptive documents and many of REST's advantages lost (see > interface docs for example that list the URIs of the resources to use, what > formats to expect and which HTTP return codes - and what they mean(!) in the > service context). > > So, balance is really important. 
> > Jan > So here's an observation: In the web (and in the VoiceXML/CCXML world), extension support and in general, media type evolution is driven by clients not services. In the machine-driven-REST world, the trend seems to be the opposite. That means service interfaces are "service specific" (e.g. the media type is service-specific or includes service-specific extensions, namespaces or relations). That means coupling. That means that you don't get the full advantages of REST. If I had to point to a constraint being violated, I'd say it was "Self-Descriptive Messages". I don't see how a message can be self-descriptive if it is in a service-specific format. To me that is a key part of using "standard" media types -- the media type exists outside of the context of your service. But I know a lot of folks have different interpretations of what "standard" means in the context of REST. The differences are subtle but I think the implications are huge. Andrew
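[Editor's note: Andrew's point about Accept headers suggests one check a client can actually automate: treat any response outside the negotiated media types as a broken software contract. A minimal sketch follows; the supported set is illustrative, not from the thread.]

```python
# Media types this hypothetical client understands.
SUPPORTED = {"application/atom+xml", "application/xhtml+xml"}

def accept_header(supported=SUPPORTED):
    """Value the client would send in its Accept request header."""
    return ", ".join(sorted(supported))

def check_content_type(content_type, supported=SUPPORTED):
    """Return the bare media type of a response, or raise if the server
    sent a representation the client never agreed to understand."""
    # Strip parameters such as "; charset=utf-8" before comparing.
    media_type = content_type.split(";")[0].strip().lower()
    if media_type not in supported:
        raise ValueError("contract broken: got %r, client accepts only %r"
                         % (media_type, accept_header(supported)))
    return media_type
```

Note that this check says nothing about *semantics* (a search resource that ignores its search terms still passes); it only automates the part of the contract that is visible in the messages themselves.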
On Nov 6, 2009, at 8:29 PM, Subbu Allamaraju wrote: > >>> >>> >> >> Sure prose is appropriate (never suggested otherwise). But just >> saying 'use prose' does not address 3. above. This was one of the >> points made by Steve Jones: given that WS-* has a rigid interface >> description language 'built in' while the use of REST means you have >> to come up with a means of describing interfaces yourself is a huge >> selling point for WS-*. Not that having such things as WSDL and BPEL >> is desireable from a networked systems point of view but when a non >> technical person has to make a business decision about WS-* vs REST, >> the latter is more likely to 'loose' if you first have to come up >> with a style for documenting the interfaces. And even worse would be >> the position of 'Nah, you don't need that. Just let everyone write >> it up'. > > > Sorry, but I think it is a flawed assertion to say that "WS-* has a > rigid interface description language 'built in'" which somehow makes > it better. Don't get me wrong: I am not saying that it is better. I am just saying that the WSDL serves a (perceived) need of the people in charge of assigning people like us the budget. And I am saying that it is very easy to actually build more coupling into prose definitions of REST services than is necessary. Jan > Just because it has been sold like that does not make it > true. I would love to see a developer who wrote a client application > by just looking at the WSDL, or a business person who was not asked a > single clarification about some operation or a field is supposed to > work. > > Subbu > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Nov 6, 2009, at 8:41 PM, Andrew Wahbe wrote: > On Fri, Nov 6, 2009 at 10:58 AM, Jan Algermissen > <algermissen1971@...> wrote: >> >> On Nov 6, 2009, at 3:36 PM, wahbedahbe wrote: >> >>> --- In rest-discuss@yahoogroups.com, Jan Algermissen >>> <algermissen1971@...> >>> wrote: >>>> >>>> >>>> On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote: >>>> >>>>> There are a lot of assumptions here about the limitations of >>>>> machine >>>>> to machine RESTful interaction. I have never seen any proof of >>>>> these >>>>> limitations. It just seems that because the general REST community >>>>> can't figure out how to design good media types for machine to >>>>> machine interaction, the consensus is that it isn't possible. >>>> >>>> I did not say that at all. Of course machine to machine RESTful >>>> systems are possible (e.g. AtomPub). I was talking about inevitable >>>> assumptions the client has to make that manifest itself in code or >>>> configuration of and how to document these such that >>>> >>>> - no unnecessary coupling is created by the documentation >>>> (as is by many of the documentations of the so called REST >>>> interfaces on the Web) >>>> - the style of documentation will be accepted by e.g. >>>> the legal department >>>> >>>> Jan >>>> >>> >>> Right, but it's the assumption that these out-of-band contracts >>> are needed >>> is what I'm questioning. >> >> Suppose you are coding a client for service that lets you search >> stuff and >> then do something with it (e.g. update). Your client code will >> inevitably >> contain the 'invocation' of the search (e.g. GET request to search >> resource). And this is based on the expectation that the search >> resource >> will be there (== being discoverable). > > Ok so right out of the gate I have issues with this. "Coding a client > for a service" seems unRESTful to me. Firefox is not coded for Google, > Facebook or Amazon. It is coded for URIs, HTTP and HTML (and HTML's > "friends" CSS, Javascript etc.). Ok, yes. 
I should have made the distinction between the 'user agent' and the program that makes use of it. At least this is my mental model: you have a library that implements all the specs that make sense to implement (meaning: not only the ones for this service), and this piece is equivalent to the browser. Then you need some piece of code that uses the user agent to interact with the service (or the Web, for that matter). This piece of code is equivalent to the human being.

Inside this code (or its configuration) one must manifest some expectations (e.g. that the search resource is there, so the user agent can carry out the search and hand the result back to the program). If you want to get rid of that assumption you'd have to come up with a way for the service to actually drive the client program (like a GUI app is driven by the GUI). However, this does not work because the client is a state machine on its own and cannot be entirely driven by the hypermedia received from the server. At some point there must be 'invocations of goals' by the client-side program.

> Your client will only have code that has a notion of search invocation if it is inherent in the media type (including extensions and relations). But that is not necessary for search to work. The client could be performing a search without "knowing" it, because a combination of the media type, the current representation/state, and the client disposition and/or client side events have caused the search link to be followed. This is what happens when I use my browser to do a Google search. I don't see why this can't be the case for media types other than HTML (and it certainly would be for VoiceXML and CCXML).

Yes, great model. But what is driving the whole thing? With the browser it is the human being that drives the interaction because she e.g. wants to buy a book. In the machine world you need a process that initiates a goal, and this goal includes assumptions.
>> If the service does not provide the search resource anymore the client will break (if you code it to expect that there is a search resource and suddenly there is none what else could the code do?). Humans can work on a solution for the problem, software cannot (unless we go into AI of some form).
>
> I'm not sure what is "broken" here. The service seems broken, but I'm not sure if the client is broken -- you should be able to point it at any other service that supports the client's media type(s) and it should still work just fine.

Yeah -- and this is precisely what you cannot utter in a room full of the guys that assign the budget or care about the company being sued. Would you pay for Google's API and then, if part of the service disappears, just go off to some other service on the Web?

> Or is the service just changed so that the search step is not required anymore? If the service is still accomplishing a useful goal within the bounds of the client's media type, things still seem ok.

Yes, this sort of magic can be coded into the user agent component, and this is what enables the evolvability of the components without the need for bringing all the devs into a room all the time.

> If the service is now not doing something useful -- well that is a service implementation issue. If it's still working within the bounds of the media type then it is not a technical issue with the contract between client and service _software_. It is more of an issue between the operator of the client and the operator of the service -- maybe I'm splitting hairs here but it seems to be different than what you are describing. And more importantly doesn't seem to have any differences in the machine-to-machine vs. human-to-machine contexts. i.e. if Amazon stopped selling books, it would trip up a human too. The solution -- go to a different start URI -- applies in both contexts as well.

Right.
But as said above: there are a substantial number of people that won't buy into it, and often rightly so, because when you assign budget to something, it is all about documenting the assumptions that might break. I do not see media types as the means where this can be done -- especially not when a service uses a combination of hypermedia semantics. Or how would you express that a service promises to use a certain Atom extension? You could if you'd define application/myatom and make the extension mandatory, but this obviously breaks orthogonality.

> And Firefox is not going to figure out what alternate URI to use just as a machine-driven client will not.
>
> If the service started spitting back representations that did not conform to the client's media type. Then I think you can say that the software contract is broken (especially if the client is setting it's accept headers properly). But this again equally trips up machine driven and human driven client software.

Yes. So, how does a service say what media types the client may safely expect?

>> Technically, this is inevitable and no other, more specialized interface will help you, because if a Web service does not implement some service.search() anymore the SOAP call will also fail and no WSDL will prevent that. This is just the nature of binding the components at runtime and not at compile time (as you would in a non-networked application).
>>
>> The problem is at the business level though because the WSDL specifies a contract that defines the search method to be there and if you SOAP call fails, you can take the WSDL and the stack trace, run to your service provider and say: "Where's that method you *promised*?").
>
> Right. So in REST, the software contract is more "client specific". i.e. the client supports a known media type and the service targets that media type and all clients that support it.
> So the "missing method" equivalent would be somehow not conforming to the media type supported by the client. And if search capability was something that had to be in every document of that media type then you get the same sort of contract. But often that is not the case (it isn't the case with HTML and lots of other media types anyway).
>
> Instead the service is publishing a URI and saying "somewhere behind this URI, search is going to happen".

So, how does it say that?

> But that is a contract between the client operator and the service operator about the semantics of the service, not the interface.

Well, yes. It is all about how this contract is best established and written.

> The WSDL equivalent might be the case where the search method is there but it doesn't provide search semantics (e.g. it always spits back the same results no matter what the search terms are).

>> With REST, there is (deliberately) no such contract, and the client's expectation that there will be a search resource is based on observation and trust and on some cloud-level knowledge about the overall kind of the interaction.
>>
>> From the service owner's perspective it is also an interesting question how a developer would know whether he could take away the search resource. After all, there is no contract to look at that would make clear what the client expectations really are. IOW, if you are in charge of evolving the service, you should have a pretty clear source that tells you what you can change and what you cannot change. This is rather easy on the media type level, but it is also the combination of hypermedia semantics in use that matters. "Are my clients 'licensed' to assume the presence of that Atom extension or are they not? Well, we never told them we'd never take it away, so we can drop it at any time, right?"
>> One approach to all this is probably to simply state that a service will never evolve in an incompatible way (e.g. "we'll never remove anything") and if it has to be incompatible, there'll just be a new service.
>>
>> Now, I am not trying to be enterprisey and ride the 'oh, inside the enterprise there are the hard problems' horse, but 'follow your nose' just does not provide the specifics that managers and lawyers (usually rightly so) demand.
>>
>> What is particularly interesting is that IMHO, in the usual attempt to escape this situation, there is the danger that far too much coupling is put into the descriptive documents and many of REST's advantages are lost (see interface docs, for example, that list the URIs of the resources to use, what formats to expect and which HTTP return codes - and what they mean(!) in the service context).
>>
>> So, balance is really important.
>>
>> Jan

> So here's an observation: In the web (and in the VoiceXML/CCXML world), extension support and, in general, media type evolution is driven by clients, not services. In the machine-driven-REST world, the trend seems to be the opposite. That means service interfaces are "service specific" (e.g. the media type is service-specific or includes service-specific extensions, namespaces or relations). That means coupling. That means that you don't get the full advantages of REST. If I had to point to a constraint being violated, I'd say it was "Self-Descriptive Messages". I don't see how a message can be self-descriptive if it is in a service-specific format. To me that is a key part of using "standard" media types -- the media type exists outside of the context of your service.

Yes, that is a very, very good way to see it.
When enterprises engage in a REST effort, they should not worry about defining services (but that is what everybody immediately does :-). They should form a central board (like the IETF) and get their media types sorted out - at least enough to get the project rolling. If you define the types as you need them for the services, you won't get the generalization level right. (You should of course let the envisioned services inspire your central effort.)

So, to repeat your very true words: "the media type exists outside of the context of your service."

> But I know a lot of folks have different interpretations of what "standard" means in the context of REST. The differences are subtle but I think the implications are huge.

Yes and yes.

Jan

> Andrew

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> On Nov 6, 2009, at 6:45 AM, wahbedahbe wrote:
>
>> I've been encouraging folks on this list to look at CCXML for a long time, but as far as I know I haven't convinced anyone to spend the time. Hey -- maybe CCXML isn't RESTful after all; it certainly has a different flavor than most of the media types coming from the REST community. I'd love to get all of your feedback; maybe I'm missing something. But if it isn't, then perhaps the as-yet-to-be-formally-defined style behind CCXML is a good alternative to REST for machine-to-machine interaction. It certainly seems to have many of the properties most readers of this list are looking for.
>
> From the TR: "A CCXML session begins with the execution of a CCXML document."
>
> Now, my understanding might be wrong because I did not have the time to put my head into the spec, but the above quote sounds a lot like the coordination between components in CCXML is achieved by passing code (executable documents) around. To be RESTful the coordination should be achieved by passing representations of state around.
>
> Can you provide us with an example of a typical interaction?
>
> Jan

A CCXML document describes a state machine for processing events raised up to the client. For each type of event fired, the document describes the transition -- the next state as well as the "actions" to be taken on the transition. This could be events sent back down to the underlying platform, the execution of some javascript, or it could describe a page transition. The entire service is composed of a set of mini-state-machine documents. A GET or a POST can be used to transition from document to document. Javascript variables can be marshalled into the query string of a GET or the body of a POST, much like an HTML form (though the syntax is very different, as this is not a form abstraction).
Also, there is no PUT or DELETE support, as in (current) HTML. Page transitions are kind of interesting in CCXML because they are broken into two steps. First, a <fetch> tag is executed on a page transition that tells the client to perform the GET or POST. An event is fired when the request is complete and the page is parsed and ready to go. A <goto> tag can then be executed to complete the transition. This model is used to allow the state machine to continue handling events during page transitions. Ancillary script resources can be handled in a similar way with <fetch> and <script> (though <script> can also just use a src attribute, in which case the script is fetched and parsed when the parent ccxml document is first being prepared for execution).

In HTML, a good portion of the javascript processing is focused on handling events. This is the same in CCXML, but here the scripts do not modify a DOM -- the ccxml markup describes the state machine being executed, and a state machine that changes as it executes would likely make most developers' heads explode! The objects exposed to the script are objects controlled by the client -- calls, conferences and dialogs (an automated phone system session, usually implemented in VoiceXML -- http://www.w3.org/TR/voicexml20/). I think perhaps the window object would be a good HTML analog. The events in ccxml mostly describe changes to these objects (though some are related to the document execution, e.g. "your <fetch> completed"). So an event might signify that caller A hung up. The messages/events sent down to the platform on a state transition are primarily used to invoke methods on those objects. You get an event back when your method has completed. E.g. you send a message to join callers A and B together so they can hear each other, and get an event back when the join has completed. This is not implemented as simple javascript calls, as asynchrony is important so that the state machine is never blocked.
You could use asynchronous javascript functions, I suppose, but the event handling is the primary purpose of the document format, so the markup expresses event handling as much as possible.

That's sort of a long intro to CCXML. Sorry about that, but I thought it was necessary to establish a bit of common ground on the format before answering your questions. Some of the simple examples in the spec might be worth a quick read at this point: http://www.w3.org/TR/ccxml/#SimpleExamples

So back to your questions... Is a CCXML document executable? Yes. But I'd argue that an HTML document is too. It's funny, but when people look at HTML through REST-colored glasses they seem to completely miss all of the event handling going on. A huge portion of the content of an HTML document is focused on handling input events. Even if you strip out the javascript and the on* attributes, the markup is still telling the browser how to handle input events. The <a> element tells the browser what to do when the presentation of the enclosed text is "activated" (e.g. clicked). The <form> and <input> elements tell the browser how to present controls with specific interaction semantics. Maybe it's my own background in VoiceXML/CCXML, but I've always thought of markup as "executable" -- a mini-program in a declarative form. The declarative format makes the program semantics more visible and allows tools and spiders to deal with the markup more easily, to provide "secondary" types of document processing -- e.g. what TBL describes as the Principle of Least Power: http://www.w3.org/DesignIssues/Principles.html

To me, "code on demand" means adding non-declarative executable content into the mix to provide functionality beyond what is expressible in declarative form, at the expense of visibility.
So the fact that CCXML, VoiceXML, or even HTML markup can be seen as "executable" does not instantly imply code-on-demand (well, CCXML is unnecessarily tied to Javascript, but you could envision an equivalent language that didn't require it).

Now, can you view a CCXML document as a representation of resource state? I think so. Consider a Google Voice-like service that allows a single phone number to be used to contact you at a number of alternate numbers (your home, office and cell phone) and send the call to voice mail if you don't answer any of them. A resource here might be a single user's settings for the service, specifying the phone numbers to try and in what order, the number of seconds to wait on each number before sending the call to voicemail, etc.

The standard approach most REST practitioners would take here would be to cook up an XML or JSON format for this data and stick it behind a URI. The same XML/JSON would be served to, say, an Ajax interface for editing the data and to the system that actually implemented the call control service that tried to reach you. I consider this a non-self-descriptive message because it's using a service-specific, non-standard format. All clients would be bound to the service because they are bound to its format.

Instead, content negotiation should be used to represent the resource in a format specific to the requesting client. So serve the HTML page to view and edit the user's settings to a web browser, and serve the CCXML state machine that represents those settings to the call control system. The CCXML state machine is a representation of those settings, just as the HTML web page is a representation. Here the clients support their native markup formats and are not bound to the service at all.

If you've got this far in the email, then thanks for reading -- this was a lot longer than I anticipated, but hopefully it gets the idea across.

Regards,
Andrew
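The content-negotiation idea in the last paragraphs can be sketched as a small dispatch on the Accept header: the same settings resource yields HTML for a browser and CCXML for a call-control client. This is a toy sketch, not a real server; the representation bodies are placeholders and a real implementation would honor q-values and return 406 when nothing matches.

```python
# Hypothetical sketch: one resource (a user's call settings), two
# representations chosen by the client's Accept header. The bodies are
# placeholder strings, not real HTML/CCXML.

REPRESENTATIONS = {
    "text/html": lambda s: "<form><!-- edit numbers: %s --></form>"
        % ", ".join(s["numbers"]),
    "application/ccxml+xml": lambda s: "<ccxml><!-- try %s in order, %ss each --></ccxml>"
        % (", ".join(s["numbers"]), s["timeout"]),
}

def represent(settings, accept):
    """Pick a representation of the settings resource for this client.

    Walks the Accept header in order (ignoring q-values for brevity) and
    returns (media_type, body), or (None, None) for 406 Not Acceptable.
    """
    for media_type in accept.split(","):
        media_type = media_type.strip().split(";")[0]
        if media_type in REPRESENTATIONS:
            return media_type, REPRESENTATIONS[media_type](settings)
    return None, None

settings = {"numbers": ["home", "office", "cell"], "timeout": 20}
mt, body = represent(settings, "application/ccxml+xml, text/html;q=0.5")
```

The point of the sketch is the decoupling: neither client knows anything about the service's internal data model, only its own native media type.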
On Nov 6, 2009, at 8:41 PM, Andrew Wahbe wrote:

> So here's an observation: In the web (and in the VoiceXML/CCXML world), extension support and, in general, media type evolution is driven by clients, not services. In the machine-driven-REST world, the trend seems to be the opposite. That means service interfaces are "service specific" (e.g. the media type is service-specific or includes service-specific extensions, namespaces or relations). That means coupling. That means that you don't get the full advantages of REST. If I had to point to a constraint being violated, I'd say it was "Self-Descriptive Messages". I don't see how a message can be self-descriptive if it is in a service-specific format. To me that is a key part of using "standard" media types -- the media type exists outside of the context of your service.

So (ideally and stretching the point):

- services should only use media types understood by all clients
- services should use as many hypermedia 'options' as possible (so if a client does not find a particular link etc. it can infer that the supporting application data does not exist)

Jan

> But I know a lot of folks have different interpretations of what "standard" means in the context of REST. The differences are subtle but I think the implications are huge.
>
> Andrew

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Fri, Nov 6, 2009 at 3:59 PM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Nov 6, 2009, at 8:41 PM, Andrew Wahbe wrote:
>
>> So here's an observation: In the web (and in the VoiceXML/CCXML world), extension support and, in general, media type evolution is driven by clients, not services. In the machine-driven-REST world, the trend seems to be the opposite. That means service interfaces are "service specific" (e.g. the media type is service-specific or includes service-specific extensions, namespaces or relations). That means coupling. That means that you don't get the full advantages of REST. If I had to point to a constraint being violated, I'd say it was "Self-Descriptive Messages". I don't see how a message can be self-descriptive if it is in a service-specific format. To me that is a key part of using "standard" media types -- the media type exists outside of the context of your service.
>
> So (ideally and stretching the point):
>
> - services should only use media types understood by all clients
> - services should use as many hypermedia 'options' as possible (so if a client does not find a particular link etc. it can infer that the supporting application data does not exist)
>
> Jan

Well, not quite. I don't think you can expect "all clients" to understand the same media types. I'd say something more like:

- services should use the media types understood by the clients they are targeting.

On the other point, I'm not sure what you mean by hypermedia 'options'.

Andrew
Jan:
<snip>
- services should only use media types understood by all clients
- services should use as many hypermedia 'options' as possible (so if a client does not find a particular link etc. it can infer that the supporting application data does not exist)
</snip>

Servers can only make promises on any pre-published URIs and on media types (including format, scheme, and semantics such as links and relation values).

mca
http://amundsen.com/blog/

On Fri, Nov 6, 2009 at 15:59, Jan Algermissen <algermissen1971@...> wrote:
>
> On Nov 6, 2009, at 8:41 PM, Andrew Wahbe wrote:
>
>> So here's an observation: In the web (and in the VoiceXML/CCXML world), extension support and, in general, media type evolution is driven by clients, not services. In the machine-driven-REST world, the trend seems to be the opposite. That means service interfaces are "service specific" (e.g. the media type is service-specific or includes service-specific extensions, namespaces or relations). That means coupling. That means that you don't get the full advantages of REST. If I had to point to a constraint being violated, I'd say it was "Self-Descriptive Messages". I don't see how a message can be self-descriptive if it is in a service-specific format. To me that is a key part of using "standard" media types -- the media type exists outside of the context of your service.
>
> So (ideally and stretching the point):
>
> - services should only use media types understood by all clients
> - services should use as many hypermedia 'options' as possible (so if a client does not find a particular link etc. it can infer that the supporting application data does not exist)
>
> Jan
>
>> But I know a lot of folks have different interpretations of what "standard" means in the context of REST. The differences are subtle but I think the implications are huge.
>>
>> Andrew
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@acm.org
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
On Nov 6, 2009, at 10:27 PM, Andrew Wahbe wrote:

> On Fri, Nov 6, 2009 at 3:59 PM, Jan Algermissen <algermissen1971@...> wrote:
>>
>> So (ideally and stretching the point):
>>
>> - services should only use media types understood by all clients
>> - services should use as many hypermedia 'options' as possible (so if a client does not find a particular link etc. it can infer that the supporting application data does not exist)
>>
>> Jan
>
> Well, not quite. I don't think you can expect "all clients" to understand the same media types. I'd say something more like:
> - services should use the media types understood by the clients they are targeting.
>
> On the other point, I'm not sure what you mean by hypermedia 'options'.

I meant that (ideally and stretching the point) services should make use of as much hypermedia as they can. If they know about foo links, and if they can provide a foo link, they should do so. So when clients do not find a foo link they can reasonably 'believe' that the server cannot provide one. (But this is just stretching the point and mumbling.)

Jan

> Andrew

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Nov 6, 2009, at 10:48 PM, Will Hartung wrote:

> On Fri, Nov 6, 2009 at 1:37 PM, Jan Algermissen <algermissen1971@...> wrote:
>> I meant that (ideally and stretching the point) services should make use of as much hypermedia as they can. If they know about foo links, and if they can provide a foo link, they should do so. So when clients do not find a foo link they can reasonably 'believe' that the server cannot provide one.
>
> Where does the client get this expectation that a "foo" link exists or should exist at all?

Yes, exactly. When a service uses Atom and chooses some kind of extension, the client is unable to know this before the interaction because it cannot be specified. The client can only observe it and then trust the server to keep using the extension if it (the client) chooses to implement the extension. This calls for things like 'profiles' to provide a way for the server to communicate that it uses a certain extension (this has come up on the atom lists a couple of times and there even was a draft once).

OTOH, it seems contrary to what REST is aiming at. From a REST POV the understanding really is (IMHO) that the client makes observations, probably 'implements' them, and then trusts the server to 'do its best'. And this is what is fine on the human Web (because we can compensate should such expectations break) and machine clients cannot. I observe the 1-click at Amazon. I like it. I start using it. When it's suddenly gone I just go look at what is there instead. Maybe use that old shopping cart again. Programs cannot do this. The latter is not even a problem, because they can just fail and a human admin can find a solution.
But trying to convince business people to base the entire architecture of an enterprise on this model is an exercise where you risk your credibility as an IT professional :-)

> If the client is expecting a "foo" link and a service isn't providing one, then I'd argue that there is a mismatch in the protocol that the client is using and the one the server is using (whether by design, version incompatibility, or bug doesn't much matter).

Right. But how to express the protocol? That is really the question. If you bundle up hypermedia semantics (media types, extensions, link rels, query params) you usually do not stick them in a single spec, and therefore you do not have a place to define that you use exactly that bundle.

> Simply, when you have this kind of mis-communication, someone is using the "wrong" protocol, or isn't properly following the spec of the agreed-upon/advertised protocol.

See above. Which spec?

> The specification will dictate when and where a service will provide the "foo" link, and where the client can expect to find one.

See above.

Jan

> Regards,
>
> Will Hartung
> (willh@...)

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Fri, Nov 6, 2009 at 3:25 PM, Jan Algermissen <algermissen1971@...> wrote:
> OTOH, it seems contrary to what REST is aiming at. From a REST POV the understanding really is (IMHO) that the client makes observations, probably 'implements' them and then trusts the server to 'do its best'.

When talking machine to machine, I don't think a REST client is any different in terms of rigidity than any other remote protocol. The REST client will do what it's told with the payloads it sends and receives.

To a human, a robust REST service should be able to be "discovered". At this level, REST can be a GUI view to a normal protocol's Command Line view. In theory, with little more than a host name, a human can "crawl" a REST service and discover its features and data types. Each payload tells the user what the valid edges are on the graph that makes up the API. In theory, with robust XSDs, which can be self-documenting, the user can learn how to build those payloads, and what is and is not valid input for them.

In a GUI, you browse Menus, you look at Dialog options, and slowly, over time, you can learn much about the application and its capabilities. REST can be similar. The Common Interface means certain aspects of the API are simply left unsaid -- the operations defined by the Common Interface. Whether a resource supports the Common Interface can itself be discoverable (for example, using OPTIONS with HTTP).

So, in this case, a REST interface can be like Literate Programming. Imagine if each payload were an XML document, with an accompanying XSD, and also an XSLT processing instruction pointing to a template that renders a complete, human-readable HTML description of the payload.

But none of this helps machine clients. It helps people CREATE machine clients, but once created, as we all well know, machine clients are bone stupid and aggravatingly literal.
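The OPTIONS-based discoverability mentioned above can be sketched as a client interpreting the Allow header of an OPTIONS response to learn which uniform-interface methods a resource supports. The header value below is a hard-coded stand-in for a real HTTP response; the helper names are invented.

```python
# Hypothetical sketch: interpreting an OPTIONS response's Allow header to
# discover which methods of the uniform interface a resource supports.
# `allow` here is a stand-in for a header taken from a live response.

def allowed_methods(allow_header):
    """Parse an HTTP Allow header into a set of method names."""
    return {m.strip().upper() for m in allow_header.split(",") if m.strip()}

def supports_edit(allow_header):
    """Does this resource look editable through the uniform interface?"""
    return bool({"PUT", "PATCH", "DELETE"} & allowed_methods(allow_header))

allow = "GET, HEAD, PUT, DELETE, OPTIONS"  # e.g. from an OPTIONS request
```

Note that this only discovers the uniform-interface surface; as the thread keeps pointing out, it says nothing about the application semantics behind those methods.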
If a service changes, and thus breaks an existing machine client, a user can, in theory, rely on the discoverable nature to fix the client and bring it back on line. But obviously if you need to truly rely on this transaction, it would be better to work with a guarantee of some kind from the provider that the service simply will not change underneath the feet of deployed clients (an SLA).

At an automated level, the focus on the link relationships, rather than the URIs themselves, lets the underlying infrastructure change, potentially in even dramatic ways, without impact (we've seen how services like Amazon and Ebay have grown with us as consumers being pretty much none the wiser about the physical deployment aspects). By using backwards-compatible formats, older clients keep working and newer clients get new functionality. The machine web can handle physical changes fairly easily, with a robust client and cooperative back ends. As long as the initial entry points are properly supported, most everything else can be discovered by the client as it does its processing. However, the machine web cannot handle incompatible API changes on its own. It simply can't. The clients are too rigid, as they must be.

Earlier someone mentioned how Firefox doesn't have to be rewritten to use Google, or Amazon, or Ebay. In terms of RENDERING the servers' content, they're correct. The fundamental difference is that Firefox is not USING those services. The human user is. The human is interpreting the resource representations and leveraging that information to perform their task (search, buying, bidding, whatever). The contract between Firefox and Google is the same "I send you URLs, you send me HTML". The HTML is "opaque" to Firefox. It simply does not care what is being returned. Its job is not to care; rather, its job is to execute the payload and present the results to the user.
Now, if Firefox asked Google for "index.html", and Google replied "Here you go, text/html" and then streamed JPEG content, Firefox would blink, pause, go WTF, and finally dump a load of gibberish for the human to interpret, since the JPEG content is most certainly NOT the HTML that was asked for and promised. The machine web is cold, uncaring, and not very tolerant of change. It's particularly stubborn when it's lied to.

I don't think it is appropriate to apply the Human Web to the Machine Web in this sense. REST is no magic bullet; REST offers no "intuition" or "interpretation" to make the machine web less rigid or more forgiving. A client can be, though -- modern web browsers can do amazing guesswork and operate almost at a "do what I mean" level. But they leave much of the hard part of leveraging actual services to the carbon-based lifeform driving the keyboard.

Regards,

Will Hartung
(willh@...)
On Fri, Nov 6, 2009 at 3:17 PM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Nov 6, 2009, at 8:41 PM, Andrew Wahbe wrote:
>>
>> Ok so right out of the gate I have issues with this. "Coding a client for a service" seems unRESTful to me. Firefox is not coded for Google, Facebook or Amazon. It is coded for URIs, HTTP and HTML (and HTML's "friends" CSS, Javascript etc.).
>
> Ok, yes. I should have made the distinction between the 'user agent' and the program that makes use of it. At least this is my mental model: you have a library that implements all the specs that make sense to implement (meaning: not only the ones for this service) and this piece is equivalent to the browser. Then, you need some piece of code that uses the user agent to interact with the service (or the Web for that matter). This piece of code is equivalent to the human being. Inside this code (or its configuration) one must manifest some expectations (e.g. that the search resource is there, so the user agent can carry out the search and hand the result back to the program).

Yes, this terminology is something worth getting consensus on. I was using the terms "client" and "underlying platform" for your "user agent" and "client program" respectively. But I still think there is a client program in the case of a web browser -- it is the window manager in the OS. Take the human being out of the system for a minute and think about how the web works. Then try using that as the model for your own systems.

> If you want to get rid of that assumption you'd have to come up with a way for the service to actually drive the client program (like a GUI app is driven by the GUI). However, this does not work because the client is a state machine on its own and cannot be entirely driven by the hypermedia received from the server. At some point there must be 'invocations of goals' by the client-side program.

See my email that elaborates on CCXML.
The hypermedia document can be seen as a description of a state machine for handling client program events that is executed by the user agent. In response to those events, messages/events can be sent back down to the client or HTTP requests can be placed to the server. Often those requests to the server cause a new state machine to be loaded by the user agent. This is not only a description of the execution of a CCXML browser, but also a VoiceXML browser and an HTML browser. You just have to properly define the user agent and the client program to see it that way. From this perspective, the user agent is a dynamic mediator between the client program's event model and the server's resource model. The currently loaded hypermedia document controls the behavior of the mediator. So if you are using the right media type, the client's "goals" are expressed in terms of the client program's event model and translated into actions on the server's resources by the user agent. > >> Your client will only have code that >> has a notion of search invocation if it is inherent in the media type >> (including extensions and relations). But that is not necessary for >> search to work. The client could be performing a search without >> "knowing" it, because a combination of the media type, the current >> representation/state, and the client disposition and/or client side >> events have caused the search link to be followed. This is what >> happens when I use my browser to do a Google search. I don't see why >> this can't be the case for media types other than HTML (and it >> certainly would be for VoiceXML and CCXML). > > Yes, great model. But what is driving the whole thing? With the browser it > is the human being that drives the interaction because she e.g. wants to buy > a book. In the machine world you need a process that initiates a goal and > this goal includes assumptions. 
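The "hypermedia document as a declarative state machine interpreted by a generic user agent" model described above can be sketched in miniature: a transition table (standing in for a CCXML-like document) maps (state, event) pairs to a next state and an action sent back down to the platform. The states, event names, and actions are all invented for illustration.

```python
# Hypothetical sketch of a user agent interpreting a declarative state
# machine "document". The document and event names are invented; in CCXML
# the document would be markup fetched over HTTP, not a Python dict.

DOCUMENT = {
    "idle": {
        "connection.alerting": ("answering", "accept"),
    },
    "answering": {
        "connection.connected": ("in_call", "play_greeting"),
        "connection.disconnected": ("idle", None),
    },
    "in_call": {
        "connection.disconnected": ("idle", None),
    },
}

def handle(document, state, event):
    """Return (next_state, action) for an event; unhandled events leave
    the machine where it is and trigger no action."""
    return document.get(state, {}).get(event, (state, None))

state, action = handle(DOCUMENT, "idle", "connection.alerting")
```

The interpreter (`handle`) is generic; only the document changes from service to service, which is the decoupling the thread is after.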
>>> If the service does not provide the search resource anymore the client will break (if you code it to expect that there is a search resource and suddenly there is none, what else could the code do?). Humans can work on a solution for the problem; software cannot (unless we go into AI of some form).
>>
>> I'm not sure what is "broken" here. The service seems broken, but I'm not sure if the client is broken -- you should be able to point it at any other service that supports the client's media type(s) and it should still work just fine.
>
> Yeah - and this is precisely what you cannot utter in a room full of the guys who assign the budget or care about the company being sued. Would you pay for Google's API and then, if part of the service disappears, just go off to some other service on the Web?

This isn't a technical problem, is it? The contract you are looking for here is a legal one, isn't it?

<snip>

> Right. But as said above: there are a substantial number of people who won't buy into it, and often rightly so, because assigning budget to something is all about documenting the assumptions that might break. I do not see media types as the means by which this can be done. Especially not when a service uses a combination of hypermedia semantics. Or how would you express that a service promises to use a certain Atom extension? You could if you defined application/myatom and made the extension mandatory, but this obviously breaks orthogonality.

<snip>

So here's an extension of my earlier observation about media types being "client specific" standards. This model tends to produce a situation where there are orders of magnitude more services than clients -- actually, you can argue that it is designed to do that. Compare the number of web browsers to the number of web sites. Also, think of extensions as the evolution of the media type.
A specific combination of extensions can be thought of as a version of the media type (and the versions might branch quite heavily). A version of a client supports a version of the media type. If you put these together, you get the contracts that are quite common in the web today. e.g. "this service is designed to work with IE 8, Firefox 3+ and Safari 4+." Is this ideal? No. But so far the web hasn't been very successful at doing much better than this. Andrew
Hello Jan. Sorry to mix this in. In the WSA world, particularly the SOAP definition, you have headers. The idea is that you may want to have intermediaries or connectors (using Roy's definitions) between the origin and the destination. If you want to encrypt the payload, fine. The headers are still enough information to manage all in-the-middle needs. But then, does that make the message non-self-descriptive? What does self-descriptiveness mean? I don't think it is the idea of a message anyone can read and know exactly what is happening. Let's see: to me, it makes no sense to think that someone who knows nothing about your app, nor anything about your business, could take any message at any time and completely know the whole app history, what the app state is, what is going on, and how it will end. So, we can assume that someone who finds a message may know what it is if they know a little about the business, and may actually know what is up with the app if they know the app. But if the one who finds the message is one of the intended intermediaries, it should know what is going on, and should know how to act on the event. In the SOAP world there are elements to tell intermediaries whether they should process, pass along, or need to understand the message, without knowing the payload. And it makes sense (a little). So, in your example, the fact that William does not know what is happening in the message with form-encoded data does not break any REST rule, if William is not part of your app. And if William is, it may actually need to know something in the HTTP headers in order to process the message, without even looking at the payload. And if William needs to process the payload, it will actually be expecting that format, don't you think? The problem we often find with REST is that it was designed for networked systems with large-grained hypermedia transfer apps. That is a domain, a clearly defined one, with media types created for it. 
Sometimes, you need a new media type, and if you dare to use your own, some people may jump and say that is out-of-band information, forcing you to use types not suitable for your app. Or maybe your app is not suitable for REST. It may be out-of-band in that domain, but not in yours. That is probably a discussion someone may start someday: Do you want to follow REST or the Web Implementation? Interesting. Cheers! William Martinez. --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > One thing that keeps bugging me.... > > Suppose I have an order accepting resource /order-processor-a and the > client has discovered that it accepts application/order+xml (assuming > the type being a standard type). Order submissions would be done with > > (Case A:) > > POST /order-processor-a > Content-Type: application/order+xml > > <order> > <item>A</item> > <item>B</item> > </order> > > Now suppose I had another order processor that accepts submission of > orders in the form of form data, e.g. > > > (Case B:) > > POST /order-processor-b > Content-Type: application/x-www-form-urlencoded > > item=A&item=B > > > Isn't case B violating REST's message self descriptiveness constraint > because the meaning of the message depends on the knowledge that the > recipient is an order processor? IOW, an observer could only figure > out the meaning if it knew the past interactions and not from the > message itself. > > Is application/x-www-form-urlencoded as bad a choice as application/ > xml? In fact, is any general media type (e.g. text/uri-list) a > violation of the message self descriptiveness constraint? > > Thanks, > > Jan >
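Jan's Case B can be made concrete with a few lines of Python. This is only an illustration using the standard library's query-string parser; the point is that the generic media type yields key/value syntax and nothing more.

```python
from urllib.parse import parse_qs

# Case B from Jan's example: the payload of the form-encoded order submission.
body = "item=A&item=B"

# Any observer can decode the *syntax* of an
# application/x-www-form-urlencoded message...
decoded = parse_qs(body)
# decoded == {'item': ['A', 'B']}

# ...but nothing in the media type says these are order items. Knowing that
# the recipient is an order processor is exactly the out-of-band knowledge
# Jan is worried about; application/order+xml would carry that meaning in
# the message itself.
```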
On Nov 6, 2009, at 8:41 PM, Andrew Wahbe wrote:
> the media type exists
> outside of the context of your service.
Picking this up again:
Hypermedia specifications[1] implicitly define goals. They do this by
establishing semantics on resources[2] and expressing client goals in
terms of these semantics. For example, such specifications would
define that a client can 'place an order' (the goal) by POSTing order
data to some resource that it discovered from received hypermedia as
being the order-processor.
All the available hypermedia specifications establish the set of goals
that clients and servers can use during their communication.
Putting this in the context of service descriptions, I think that
services must describe what goals they support and that this on the
one hand provides clients with a means for service discovery and on
the other hand establishes the contract between client and server that
we've been talking about recently.
Makes sense?
Jan
[1] Media types, link relations, extensions,...
[2] depending on how they appear in the hypermedia received by the
client.
IOW their 'linking context'.
I tried to explain this here:
http://algermissen.blogspot.com/2009/09/hypermedia-context.html
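To make the 'place an order' goal above concrete, here is a minimal sketch of a client that pursues the goal by discovering the order processor from received hypermedia instead of hardcoding a URI. The representation shape and the "order-processor" relation name are illustrative assumptions, not taken from any standard media type.

```python
# A minimal sketch of goal-driven link following. The representation
# shape and the "order-processor" relation name are hypothetical
# illustrations, not part of any standard media type.

def find_link(representation, rel):
    """Return the href of the first link with the given relation, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None

def place_order(representation, order_data, http_post):
    """Pursue the 'place an order' goal: discover the order processor
    from received hypermedia and POST the order data to it."""
    target = find_link(representation, "order-processor")
    if target is None:
        raise LookupError("server no longer advertises the order-processor goal")
    return http_post(target, order_data)

# The client starts from a received representation, not a hardcoded URI.
doc = {"links": [{"rel": "order-processor", "href": "/order-processor-a"}]}
sent = []
place_order(doc, {"items": ["A", "B"]}, lambda uri, data: sent.append((uri, data)))
```

Note that the client breaks only when the server stops advertising the goal, not when the URI changes.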
Hi guys, I have to model a REST API supporting complex search operations: that is, I have to submit a kind of query document, and get back a response containing the query result. I was thinking of PUTting the query document (say to /documents/query) and reading back the result response. Is it a proper choice to use PUT for executing a query operation? Any thoughts? Thanks in advance, Cheers, Sergio B. -- Sergio Bossa Software Passionate and Open Source Enthusiast. URL: http://www.linkedin.com/in/sergiob
How complex? Search engines get a lot of mileage out of query parameters. On 11/9/09, Sergio Bossa <sergio.bossa@...> wrote: > Hi guys, > > I have to model a REST API supporting complex search operations: that > is, I have to submit a kind of query document, and get back a response > containing the query result. > I was thinking of PUTting the query document (say to /documents/query) > and reading back the result response. > Is it a proper choice to use PUT for executing a query operation? > Any thoughts? > > Thanks in advance, > Cheers, > > Sergio B. > > -- > Sergio Bossa > Software Passionate and Open Source Enthusiast. > URL: http://www.linkedin.com/in/sergiob > -- Sent from my mobile device
On Nov 9, 2009, at 7:32 AM, Sergio Bossa wrote: > Hi guys, > > I have to model a REST API supporting complex search operations: that > is, I have to submit a kind of query document, and get back a response > containing the query result. I suggest you take a look at OpenSearch.org. Make sure you look at the parameters extension - that should provide you with the descriptive part of your service. The parameters extension supports POST and the submission of application/x-www-form-urlencoded query data. You'd have to translate your query document into parameters I think. OTOH, maybe OpenSearch also supports submission of other media types - have a look. > I was thinking of PUTting the query document (say to /documents/query) > and reading back the result response. No, PUT is wrong. POST would be the right method. PUT means 'replace state of resource with what I send you' and this is not what you want. > Is it a proper choice to use PUT for executing a query operation? No, PUT for a query is not correct. The best is GET and the use of query parameters in the URI, but if you have to submit very complex stuff or if a result resource is created on the server then POST is the way to go. Jan > Any thoughts? > > Thanks in advance, > Cheers, > > Sergio B. > > -- > Sergio Bossa > Software Passionate and Open Source Enthusiast. > URL: http://www.linkedin.com/in/sergiob > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
I personally use POST plus a typical PRG to achieve the same -> POST /query Content-Type: application/x-www-form-urlencoded Param1=blah&Param2&another+thing <- 303 See Other Location: /queries/param1-blah+another-thing/ -> GET /queries.... The important aspect is that this lets you cache query results quite aggressively. Of course it only scales as low or as high as the querystring is simple / complex. Seb > CC: rest-discuss@yahoogroups.com > To: sergio.bossa@... > From: algermissen1971@... > Date: Mon, 9 Nov 2009 15:04:20 +0100 > Subject: Re: [rest-discuss] Complex search API > > > On Nov 9, 2009, at 7:32 AM, Sergio Bossa wrote: > > > Hi guys, > > > > I have to model a REST API supporting complex search operations: that > > is, I have to submit a kind of query document, and get back a response > > containing the query result. > > I suggest you take a look at OpenSearch.org. Make sure you look at the > parameters extension - that should provide you with the descriptive part > of your service. The parameters extension supports POST and the > submission > of application/x-www-form-urlencoded query data. You'd have to translate > your query document into parameters I think. OTOH, maybe OpenSearch also > supports submission of other media types - have a look. > > > > > I was thinking of PUTting the query document (say to /documents/query) > > and reading back the result response. > > No, PUT is wrong. POST would be the right method. PUT means 'replace > state of resource with what I send you' and this is not what you want. > > > Is it a proper choice to use PUT for executing a query operation? > > No, PUT for a query is not correct. The best is GET and the use of > query parameters in the URI, but if you have to submit very complex > stuff or if a result resource is created on the server then POST > is the way to go. > > Jan > > > > > > Any thoughts? > > > > > > > Thanks in advance, > > Cheers, > > > > Sergio B. 
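The canonicalization step implied by Seb's 303 redirect can be sketched like this: map the POSTed form body to one stable result URI, so that equivalent queries share a cache entry. The /queries/ path echoes his example; the sort-the-parameters rule is my assumption.

```python
# A sketch of the redirect step in the POST-then-303 pattern: turn the
# submitted form parameters into one canonical, cache-friendly URI so that
# equivalent queries always redirect to the same result resource.

from urllib.parse import parse_qsl, urlencode

def query_location(form_body: str) -> str:
    """Build the Location for the 303 response from the POSTed form body."""
    params = sorted(parse_qsl(form_body, keep_blank_values=True))
    return "/queries/?" + urlencode(params)

# Two submissions that differ only in parameter order map to the same
# result resource, so intermediaries can cache the subsequent GET.
a = query_location("param1=blah&param2=x")
b = query_location("param2=x&param1=blah")
assert a == b
```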
Sergio: The PRG pattern Seb mentions is also a very nice way to support "stored" queries if that is a possible feature. Using POST can result in creating shared or user-specific queries that can be listed and replayed at some future point: # request POST /query ... <body of search args> # response 201 Created Location: /query/user1/1 # request GET /query/user1/1 # response 200 OK ... # request GET /query/user1/ ... <query-list> <link href=".." rel="self" /> <link href="/query/user1/1" rel="http://example.org/rels/query" ... /> <link href="/query/user1/2" rel="http://example.org/rels/query" ... /> ... </query-list> mca http://amundsen.com/blog/ On Mon, Nov 9, 2009 at 12:42, Sebastien Lambla <seb@...> wrote: > > > I personally use POST plus a typical PRG to achieve the same > > -> > POST /query > Content-Type: application/x-www-form-urlencoded > > Param1=blah&Param2&another+thing > > <- > 303 See Other > Location: /queries/param1-blah+another-thing/ > > -> > GET /queries.... > > > The important aspect is that this lets cache query results quite > agressively. Of course it only scales as low or as high as the querystring > is simple / complex. > > Seb > > > CC: rest-discuss@yahoogroups.com > > To: sergio.bossa@... > > From: algermissen1971@... > > Date: Mon, 9 Nov 2009 15:04:20 +0100 > > Subject: Re: [rest-discuss] Complex search API > > > > > > > On Nov 9, 2009, at 7:32 AM, Sergio Bossa wrote: > > > > > Hi guys, > > > > > > I have to model a REST API supporting complex search operations: that > > > is, I have to submit a kind of query document, and get back a response > > > containing the query result. > > > > I suggest you take a look at OpenSearch.org. Make sure you look at the > > parameters extension - that should provide you with the descriotive part > > of your service. The parameters extension supports POST and the > > submission > > of application/x-www-form-urlencoded query data. You;d have to translate > > your query document into parameters I think. 
Hello Sergio. All the above suggestions are great. I will just add the concept view. You see, PUT is for saying: here I have a representation payload for a particular resource. Please, put it here. And it will try to create the resource or replace it if it already exists. Now, POST is to send something to a particular resource. The resource knows what to do with the payload. For instance, if you post some text to a blog, then the blog resource may add your text as a blog post. In this particular case, POST is the best option. Search may be one or even several resources, to which you post a query document. The search resource may create a new result resource you can then get. The search may be so intelligent that if the query parameters yield an already created resource, then it will not create a new one but return the one already there. Cheers! William Martinez. --- In rest-discuss@yahoogroups.com, Sergio Bossa <sergio.bossa@...> wrote: > > Hi guys, > > I have to model a REST API supporting complex search operations: that > is, I have to submit a kind of query document, and get back a response > containing the query result. > I was thinking of PUTting the query document (say to /documents/query) > and reading back the result response. > Is it a proper choice to use PUT for executing a query operation? > Any thoughts? > > Thanks in advance, > Cheers, > > Sergio B. > > -- > Sergio Bossa > Software Passionate and Open Source Enthusiast. > URL: http://www.linkedin.com/in/sergiob >
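William's "intelligent" search resource -- return the existing result resource when the same query arrives again -- can be sketched with a toy in-memory store. All names here are illustrative, not part of any real API.

```python
# A toy, in-memory version of the search resource described above: POSTing
# a query either creates a new result resource or returns the URI of the
# one already created for equivalent parameters.

class SearchResource:
    def __init__(self):
        self._results = {}   # canonical query -> result resource URI
        self._counter = 0

    def post(self, query_params: dict) -> str:
        """Handle a posted query; return the URI of its result resource."""
        key = tuple(sorted(query_params.items()))
        if key not in self._results:
            self._counter += 1
            self._results[key] = f"/results/{self._counter}"
        return self._results[key]

search = SearchResource()
first = search.post({"term": "rest", "limit": "10"})
again = search.post({"limit": "10", "term": "rest"})
assert first == again  # equivalent queries share one result resource
```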
On Sun, Nov 8, 2009 at 10:32 PM, Sergio Bossa <sergio.bossa@...> wrote: > Hi guys, > > I have to model a REST API supporting complex search operations: that > is, I have to submit a kind of query document, and get back a response > containing the query result. > I was thinking of PUTting the query document (say to /documents/query) > and reading back the result response. > Is it a proper choice to use PUT for executing a query operation? > Any thoughts? I discussed something like this as an answer to a StackOverflow question. http://stackoverflow.com/questions/1296421/rest-complex-applications/1297275#1297275 Regards, Will Hartung (willh@...)
Since the bulk of folks doing REST today are using HTTP, the SPDY announcement out of Google may be of interest. http://dev.chromium.org/spdy/spdy-whitepaper Looking it over, it's not clear to me that from a REST point of view, SPDY has that much to offer, and it has one really big potential Gotcha. It seems that its primary goal is to reduce latency through connection multiplexing. While this is a concern for many Web sites, since they are basically a page with several related resources (notably images, css, and js files) that are all required to be available in order to provide the finished document to be displayed, I don't think that aspect will really affect MtoM (Machine to Machine, not to be confused with WS's MTOM) transactions. Most MtoM interactions tend to be, at the transaction level, synchronous transactions, compared to what can be a largely asynchronous exchange in the case of a web browser rendering a web page. One feature they advocate, that a REST system could well leverage, is Header Compression. Currently, compression only affects the body of an HTTP request, not the headers. And, particularly in the REST world where I think headers are relied upon even more heavily, a "simple request" can be much more than simply "GET /resource". Funny thing is, though, that REST over HTTP could have header compression today between a compatible client and server. It could simply "compress the headers" into a single X-Compressed-Header, with a base64 encoding, or some other contrivance. My point is that I think the header compression, should it be of value, can be done with current clients and servers over HTTP. Clearly, not all of the headers would be compressed, but a majority could be, and it can be made available through Con-neg. Mind, I've put a whole 2 minutes thinking this through. The dark side of SPDY is that it's all over SSL. Which pretty much obliterates caching in the large, something I think many REST advocates hold dear. 
Clients could still cache, but that would be about it. Granted, it's still young, they're just playing with this, and it may or may not gain any traction whatsoever. But I thought it would be interesting to take a quick look at it through REST-colored glasses. At first glance, it's not clear that it would offer a whole lot to a REST MtoM architecture. Regards, Will Hartung (willh@...)
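Will's header-compression "contrivance" could look something like this between a consenting client and server. X-Compressed-Header is his hypothetical name, not a registered header; this is a sketch of the idea, not a proposal.

```python
# A sketch of the contrivance described above: pack most request headers
# into a single X-Compressed-Header value using zlib + base64. This only
# works between a client and server that have both opted in (e.g. via
# content negotiation); the header name is hypothetical.

import base64
import zlib

def compress_headers(headers: dict) -> str:
    """Serialize headers and return a base64-encoded zlib-compressed blob."""
    raw = "\r\n".join(f"{k}: {v}" for k, v in sorted(headers.items()))
    return base64.b64encode(zlib.compress(raw.encode("ascii"))).decode("ascii")

def decompress_headers(value: str) -> dict:
    """Reverse compress_headers on the receiving side."""
    raw = zlib.decompress(base64.b64decode(value)).decode("ascii")
    return dict(line.split(": ", 1) for line in raw.split("\r\n"))

original = {"Accept": "application/atom+xml", "If-None-Match": '"abc123"'}
packed = compress_headers(original)
assert decompress_headers(packed) == original  # lossless round trip
```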
This blog entry curiously relates SPDY with Roy's WAKA... http://www.mnot.net/blog/2009/11/13/flip ______________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota ______________________________________________________ 2009/11/12 Will Hartung <willh@...> > > > Since that bulk of folks doing REST today are using HTTP, the SPDY > announcement out of Google may be of interest. > > http://dev.chromium.org/spdy/spdy-whitepaper > > Looking it over, it's not clear to me that from a REST point of view, > SPDY has that much too offer, and it has one really big potential > Gotcha. > > It seems that it's primary goal is to reduce latency through > connection multiplexing. > > While this is a concern for many Web sites, since they are basically a > page with several related resources (notably images, css, and js > files) that are all required to be available in order to provide the > finished document to be displayed, I don't think that aspect will > really affect MtoM (Machine to Machine, not to be confused with WS's > MTOM) transactions. > > Most MtoM interactions tend to be, at the transaction level, > synchronous transactions, compared to what can be a largely > asynchronous exchange in the case of a web browser rendering a web > page. > > One feature they advocate, that a REST system could well leverage, is > Header Compression. Currently, compression only affects the body of an > HTTP request, not the headers. And, particularly in the REST world > where I think headers are relied upon even more heavily, a "simple > request" can be much more than simply "GET /resource". > > Funny thing is, though, that REST over HTTP could have Header > compression today between a compatible client and server. It could > simply, "compress the headers" in to a single X-Compressed-Header, > with a base64 encoding, or some other contrivance. 
Off topic, but I've just finished reading this book. It's excellent. Bill Roy T. Fielding wrote: > > > On Oct 23, 2009, at 10:28 AM, Noah Campbell wrote: > > > I'm looking for additional references for architectural properties > > found in section 2.3.4 of Roy's paper? I was curious how Roy came > > up with his list. I've never done a dissertation so if I'm parsing > > the paper incorrectly, please let me know. > > There wasn't any one reference. There are a lot of references in the > references list, some of which define what I called a property. > Usually these are defined in the literature as software qualities > or system properties. > > You might want to check the new book on Software Architecture by > Taylor (my dissertation committee chair), Medvidovic, and Dashofy: > > http://www.softwarearchitecturebook.com/ > <http://www.softwarearchitecturebook.com/> > http://www.amazon.com/dp/0470167742 <http://www.amazon.com/dp/0470167742> > > though I don't know if they used the same terminology as my diss. > I am still waiting for my free copy. ;-) > > ....Roy > >
Dear *. I am developing a rest-like architecture for my dissertation. Mobile clients shall communicate back and forth with the server. I wonder if you could kindly point me to some (academic) papers, articles and patterns/best practices. In addition I would be interested in material about WADL and a graphical representation of it - I found one paper but that stops short before the interesting part. Did someone here use various UML diagrams to describe their REST? I would highly appreciate your help. Kind regards S.W.Schilke PS: speaking of academic - you might be interested in the CfP at www.inc2010.org
On Nov 15, 2009, at 8:01 AM, swschilke wrote: > Dear *. > > I am developing a rest-like architecture for my dissertation. Mobile > clients shall communicate back and forth with the server. I wonder > if you could kindly point me out some (academic) papers, articles > and patterns/best practices. http://www.google.de/search?hl=en&q=REST brings up most of what is out there (interestingly Google recognizes 'rest' as the acronym. Not quite sure, but I think a while back that was not the case). > > In addition I would be interested in material about WADL and a > graphical representation of it - I found one paper but that stops > short before the interesting part. Did someone here use various UML > diagramms to describe their REST. What do you refer to with 'their REST'? Jan > > I would highly appreciate your help. > > Kind regards > > S.W.Schilke > > PS: speaking of academic - you might be interested in the CfP at www.inc2010.org > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hi Bill,
Sorry, but this has been sitting in my to send box
for a while.
A while ago I was very interested in transactions and REST, and
there was some discussion on this list that lead to this blog
entry[1].
First, I will address the issue of transactions. 2PC is not
a good algorithm to use on a large scale distributed system
(ie one with lots of unreliable components connected by
unreliable networks). The issue with 2PC is that it is a blocking
protocol. If the transaction manager (TM) fails at the wrong
time (just before sending the commit/abort message) then the
resource managers (RMs) are left holding locks on resources until
the TM is available again.
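The blocking failure mode just described can be shown with a toy simulation: if the TM crashes after collecting "prepared" votes but before announcing the outcome, every RM is left holding its locks. This is only an illustration of the scenario, not a real protocol implementation.

```python
# A toy simulation of 2PC's blocking behavior. If the coordinator (TM)
# fails after the prepare phase but before sending commit/abort, the
# resource managers (RMs) are stuck holding locks until it recovers.

class ResourceManager:
    def __init__(self, name):
        self.name = name
        self.locked = False
        self.outcome = None

    def prepare(self):
        self.locked = True       # locks held until the TM decides
        return "prepared"

    def finish(self, outcome):
        self.outcome = outcome
        self.locked = False      # locks released only on commit/abort

def two_phase_commit(rms, tm_crashes_after_prepare=False):
    votes = [rm.prepare() for rm in rms]
    if tm_crashes_after_prepare:
        return None              # TM fails before sending the outcome
    outcome = "commit" if all(v == "prepared" for v in votes) else "abort"
    for rm in rms:
        rm.finish(outcome)
    return outcome

rms = [ResourceManager("a"), ResourceManager("b")]
two_phase_commit(rms, tm_crashes_after_prepare=True)
# Both RMs are now blocked, holding locks, until the TM recovers.
assert all(rm.locked for rm in rms)
```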
The core of the problem with 2PC is that it is an asynchronous
algorithm, which makes it an easy algorithm to understand
and implement, but as an asynchronous algorithm it cannot
guarantee consensus, which is ultimately required for transactional
behavior. (Distributed consensus is impossible in an asynchronous
system with just one faulty processor, see [1]).
So, for any discussion about REST and transactions, I think you
cannot start or even think about 2PC.
My interest in transactions and REST came about from trying to
solve a particular problem from the Grid Computing space: co-allocation.
The problem was to reserve a number of high end compute resources
for the same time, the resources could be a fiber optic network,
or a supercomputer, or a high-end graphics system. These resources are
expensive and run schedulers to optimize usage, so even making and
canceling a reservation on such a system might be expensive.
I came up with a solution based on Paxos and Paxos Commit.
Paxos is a consensus algorithm for getting a number of processors
to agree in a distributed system; it is a partially asynchronous
algorithm which means it is non-blocking, see [1] for more details
and references. Paxos Commit is Paxos applied to the transaction
problem, again see [1].
For a simple understanding of Paxos Commit you can imagine that
it is the same as 2PC except the TM is replicated, the message
exchanges are very similar.
The approach I used was as follows. The user gets the schedules
from all the resources, then chooses a time when he can use all
the resources. (As soon as he got the schedules they are out of
date, ie we are using optimistic concurrency. The request could
fail because the schedules have changed, the user must simply
retry).
The user sends the request to one of the TM's - the request
includes all the RM's that he wants to use and the time he
wants to use them at. The TM now invokes Paxos with the rest
of the TMs to choose a Transaction Identifier (TID) for the
transaction, the TID is a URI. Using Paxos to choose the TID
means that the request is idempotent, the user can resend to
any TM and be sure that only one transaction will occur.
Once the TID has been chosen the TM splits the request into
sub-requests, and sends them to each RM. We now use Paxos Commit
to complete the transaction.
The approach assumes that RMs and the user can fail at any point,
but the TMs will always be available. The user and RM interactions
with the TMs are very simple, only a slight change to the classic 2PC
approach, all the complex interactions are between the TMs. This
means all the TMs can be presented as a simple service to the
user/RMs, without any need for them to do complex message exchanges.
The system was designed to use HTTP and URIs, but I am not sure
that makes it RESTful. Part of the remit of the design was to
show that you could build complex systems with HTTP, this was
to counter a WS-* argument that HTTP was too simple. [1] has
a reference to papers describing the design more completely.
I think the system is very powerful and would be very useful
in a cloud system, you get transactional behavior in a distributed
system which has high availability and a simple interface.
cheers
Mark Mc Keown
[1] http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html
On Mon, Sep 21, 2009 at 4:27 PM, Bill Burke <bburke@...> wrote:
> Here's my thoughts on the compatibility of Transactions and REST. Maybe
> now you can see where I am coming from.
>
> http://bill.burkecentral.com/2009/09/21/credit-cards-transactions-and-rest/
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
Jan Algermissen wrote: > > > On Nov 15, 2009, at 8:01 AM, swschilke wrote: > > > Dear *. > > > > I am developing a rest-like architecture for my dissertation. Mobile > > clients shall communicate back and forth with the server. I wonder > > if you could kindly point me out some (academic) papers, articles > > and patterns/best practices. > > http://www.google.de/search?hl=en&q=REST > <http://www.google.de/search?hl=en&q=REST> > > brings up most of what is out there (interestingly Google recognizes > 'rest' as the acronym. Not quite sure, but I think a while back that > was not the case). > It surely wasn't the case a few months ago.... There is also "A search engine dedicated to the REST <http://en.wikipedia.org/wiki/REST> architectural style." but I don't know how good it is. http://search.onrest.org/ > > > > In addition I would be interested in material about WADL and a > > graphical representation of it - I found one paper but that stops > > short before the interesting part. Did someone here use various UML > > diagramms to describe their REST. > > What do you refer to with 'their REST'? > > Jan > > > > > I would highly appreciate your help. > > > > Kind regards > > > > S.W.Schilke > > > > PS: speaking of academic - you might be interested in the CfP at > www.inc2010.org > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... <mailto:algermissen%40acm.org> > Blog: http://algermissen.blogspot.com/ <http://algermissen.blogspot.com/> > Home: http://www.jalgermissen.com <http://www.jalgermissen.com> > -------------------------------------- > >
Hi S.W., Here is a paper on how Resource Oriented Architecture & allegedly (REST) maps to Pi-Calculus http://www.computer.org/portal/web/csdl/doi/10.1109/SERVICES.2007.66 There are other papers by the author about REST; you can find them using Google Scholar http://scholar.google.com/scholar?hl=en&q=resource+oriented+architecture+overdick&btnG=Search&as_ylo=&as_vis=0 Google Scholar is a very good way to search for literature. If you want a scientific background in REST then you must read Fielding's dissertation and understand how it is applied in HTTP There are some articles by Steve Vinoski on REST http://scholar.google.com/scholar?hl=en&q=resource+oriented+architecture+steve+vinoski&btnG=Search&as_ylo=&as_vis=0 There is another REST-influenced "Architecture" in the literature: Web Oriented Architecture, I haven't read about it, so I don't know what it is. My PhD is about REST too, more to the Semantic end, however I'm on kind of a crossroads right now and I'm taking time to re-review and understand the bigger picture. And as my supervisor says, go beyond the buzzwords. Regards, Areeb --- In rest-discuss@yahoogroups.com, "swschilke" <steffen.schilke@...> wrote: > > Dear *. > > I am developing a rest-like architecture for my dissertation. Mobile clients shall communicate back and forth with the server. I wonder if you could kindly point me out some (academic) papers, articles and patterns/best practices. > > In addition I would be interested in material about WADL and a graphical representation of it - I found one paper but that stops short before the interesting part. Did someone here use various UML diagramms to describe their REST. > > I would highly appreciate your help. > > Kind regards > > S.W.Schilke > > PS: speaking of academic - you might be interested in the CfP at www.inc2010.org >
William Martinez Pomares wrote: > > > Hello Sergio. > All the above suggestions are great. > I will just add the concept view. > > You see, PUT is for saying: here I have a representation payload for a > particular resource. Please, put it here. And it will try to create the > resource or replace it if it already exists. > > Now, the POST is to send something to a particular resource. The > resource knows what to do with the payload. For instance, if you post > some text to a blog, then the blog resource may add your text as a blog > post. > > In this particular case, POST is the best option. Search may be one or > even several resources, to which you post a query document. The search > resource may create a new result resource you can then get. The search > may be so intelligent, that if the query parameters yield an already > created resource, then it will not create a new one but return the one > already there. > ... There's also SEARCH, which is defined to be safe (as opposed to POST) but allows a request body. On the other hand, the way it's currently specified makes the response body rather WebDAV-specific (which may or may not be a problem). BR, Julian
Hi guys, thanks all for your replies, and sorry for the late response. In the end, I was able to use a standard GET with query parameters. Anyway, the POST + GET suggestion seemed very interesting: I'll consider using it if I need complex queries in the future, even though IMHO it has a drawback for memory-intensive applications, since it forces you to store the query result in some kind of storage to avoid saturating memory. Thanks again, Cheers, Sergio B. -- Sergio Bossa Software Passionate and Open Source Enthusiast. URL: http://www.linkedin.com/in/sergiob
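The POST-a-query, GET-the-result pattern discussed in this thread might look roughly like the following in-memory sketch. This is purely illustrative: the `QueryStore` class and the `/queries` URI space are invented names, not part of any framework mentioned here. It also shows the "so intelligent" variant where an identical query maps back to the already-created result resource.

```python
# Hypothetical sketch of "POST a query, GET the result later".
# QueryStore and the /queries URI layout are invented for illustration.
import hashlib
import json


class QueryStore:
    """Stores query results under a stable id so clients can GET them later."""

    def __init__(self):
        self._results = {}

    def post_query(self, query):
        # Derive a stable id so identical queries map to the same resource,
        # as suggested in the thread.
        canonical = json.dumps(query, sort_keys=True).encode()
        qid = hashlib.sha1(canonical).hexdigest()[:8]
        if qid not in self._results:
            self._results[qid] = {"query": query, "rows": []}  # run the search here
            status = 201  # Created: a fresh result resource
        else:
            status = 303  # See Other: an equivalent result already exists
        return status, {"Location": "/queries/" + qid}

    def get_result(self, qid):
        return self._results.get(qid)
```

The memory-pressure drawback Sergio mentions shows up in `_results`: a real service would evict or expire stored results rather than keep them forever.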
A question that I'm struggling with right now is how to capture intent when you are creating/modifying/removing a resource. Take a simple example of a customer resource with one or more address sub-resources. We can POST an address representation to the customer resource, PUT on the individual address resources to modify them or DELETE an address resource altogether. In particular situations, it may be important to later be able to say why that address was created or why it was modified. For instance, you may want to find out all the customers that have moved 10 miles away and are still regular customers. But if you did a simple PUT on the address you won't know why it was changed - the person might have moved or the previous address might have been incorrect. The only way I can think of to handle sending this piece of metadata when performing the operation is to include a custom HTTP header, like X-Reason or something similar that might be assigned specific codes. It doesn't seem like it makes sense to put it in the body of the PUT because it's not a part of the resource you are creating/changing/deleting. Has anybody come across something like this in the past? Does the custom header sound like a good/bad idea? Thanks, Rich
Richard, On Nov 20, 2009, at 1:07 AM, Richard Wallace wrote: > A question that I'm struggling with right now is how to capture intent > when you are creating/modifying/removing a resource. Take a simple > example of a customer resource with one or more address sub-resources. > We can POST an address representation to the customer resource, PUT on > the individual address resources to modify them or DELETE an address > resource altogether. In a particular situations, it may be important > to later be able to say why that address was created or why was it > modified. For instance, you may want to find out all the customers > that have moved 10 miles away and are still regular customers. But if > you did a simple PUT on the address you won't know why it was changed > - the person might have moved or the previous address might have been > incorrect. If you want to capture that information, it should be part of the representation you send and not be part of the request meta data. It is domain semantics and not protocol semantics, IMHO. I agree that it seems a bit misplaced in the payload of a PUT (though if the information was part of the e.g. address itself, why not?). You might consider using PATCH and make the update comment part of your diff format or you could use POST in combination with 303, e.g. POST /customer/addresses <addressChange> <address>...</address> <reason>....</reason> </addressChange> > > The only way I can think of to handle sending this piece of metadata > when performing the operation is to include a custom HTTP header, like > X-Reason or something similar that might be assigned specific codes. > It doesn't seem like it makes sense to put it in the body of the PUT > because it's not a part of the resource you are > creating/changing/deleting. Has anybody come across something like > this in the past? This seems very well known from revision control system updates, e.g. 
cvs commit MyClass.cpp -m 'fixed bug foo' Maybe WebDAV provides something of this sort already?? > Does the custom header sound like a good/bad idea? > IMHO: NO! Jan > Thanks, > Rich > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Ahh - too late in the day....forgot the 303 response On Nov 20, 2009, at 1:27 AM, Jan Algermissen wrote: > or you could use POST in combination with 303, e.g. > > POST /customers/111/addresses > > <addressChange> > <address>...</address> > <reason>....</reason> > </addressChange> > 303 See Other Location: /customers/111 Which signals that the POST changed the customer resource Jan
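Jan's POST-plus-303 pattern could be handled server-side along these lines. This is a hedged sketch: the handler name and the returned parse result are invented for illustration; only the `<addressChange>` document shape and the 303-to-the-customer response come from the thread.

```python
# Illustrative sketch of handling the <addressChange> POST Jan describes:
# the change document carries both the new address and the reason, and the
# server answers 303 See Other pointing back at the customer resource.
# handle_address_change is a hypothetical name, not from a real framework.
import xml.etree.ElementTree as ET


def handle_address_change(customer_id, body):
    """Apply an <addressChange> document and point the client back at the customer."""
    doc = ET.fromstring(body)
    address = doc.findtext("address")
    reason = doc.findtext("reason")
    # ... persist the new address and log the reason here ...
    headers = {"Location": "/customers/" + customer_id}
    return 303, headers, {"address": address, "reason": reason}
```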
Seconded.
There's a whole chapter on connectors! :D
Cheers
Stu
________________________________
From: Bill de hOra <bill@...>
To: Roy T. Fielding <fielding@...>
Cc: Noah Campbell <noahcampbell@...>; Rest List <rest-discuss@yahoogroups.com>
Sent: Sun, November 15, 2009 8:39:51 AM
Subject: Re: [rest-discuss] Architectural properties for modifiability
Off topic, but I've just finished reading this book. It's excellent.
Bill
Roy T. Fielding wrote:
>
>
> On Oct 23, 2009, at 10:28 AM, Noah Campbell wrote:
>
> > I'm looking for additional references for architectural properties
> > found in section 2.3.4 of Roy's paper? I was curious how Roy came
> > up with his list. I've never done a dissertation so if I'm parsing
> > the paper incorrectly, please let me know.
>
> There wasn't any one reference. There are a lot of references in the
> references list, some of which define what I called a property.
> Usually these are defined in the literature as software qualities
> or system properties.
>
> You might want to check the new book on Software Architecture by
> Taylor (my dissertation committee chair), Medvidovic, and Dashovy:
>
> http://www.softwarearchitecturebook.com/
> <http://www.softwarearchitecturebook.com/>
> http://www.amazon.com/dp/0470167742 <http://www.amazon.com/dp/0470167742>
>
> though I don't know if they used the same terminology as my diss.
> I am still waiting for my free copy. ;-)
>
> ....Roy
>
>
On Nov 19, 2009, at 4:27 PM, Jan Algermissen wrote: > If you want to capture that information, it should be part of the > representation you send and not be part of the request meta data. It > is domain semantics and not protocol semantics, IMHO. > [...] > I agree that it seems a bit misplaced in the payload of a PUT (though > if the information was part of the e.g. address itself, why not?). You > might consider using PATCH and make the update comment part of your > diff format or you could use POST in combination with 303, e.g. > > POST /customer/addresses > > <addressChange> > <address>...</address> > <reason>....</reason> > </addressChange> > Another option is to POST the change, including the reason for it, to the address itself - that makes the target explicit. You can decide whether you want to create a resource for the change itself and return its URI in a Location header. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Jan Algermissen wrote: > > > Richard, > > On Nov 20, 2009, at 1:07 AM, Richard Wallace wrote: > > > A question that I'm struggling with right now is how to capture intent > > when you are creating/modifying/removing a resource. Take a simple > > example of a customer resource with one or more address sub-resources. > > We can POST an address representation to the customer resource, PUT on > > the individual address resources to modify them or DELETE an address > > resource altogether. In a particular situations, it may be important > > to later be able to say why that address was created or why was it > > modified. For instance, you may want to find out all the customers > > that have moved 10 miles away and are still regular customers. But if > > you did a simple PUT on the address you won't know why it was changed > > - the person might have moved or the previous address might have been > > incorrect. > > If you want to capture that information, it should be part of the > representation you send and not be part of the request meta data. It > is domain semantics and not protocol semantics, IMHO. > > I agree that it seems a bit misplaced in the payload of a PUT (though > if the information was part of the e.g. address itself, why not?). You > might consider using PATCH and make the update comment part of your > diff format or you could use POST in combination with 303, e.g. > > POST /customer/addresses > > <addressChange> > <address>...</address> > <reason>....</reason> > </addressChange> > > > > > The only way I can think of to handle sending this piece of metadata > > when performing the operation is to include a custom HTTP header, like > > X-Reason or something similar that might be assigned specific codes. > > It doesn't seem like it makes sense to put it in the body of the PUT > > because it's not a part of the resource you are > > creating/changing/deleting. Has anybody come across something like > > this in the past? 
> > This seems very well known from revision control system updates, e.g. > cvs commit MyClass.cpp -m 'fixed bug foo' > > Maybe WebDAV provides something of this sort already?? > ... In WebDAV Versioning (RFC 3253): make the resource version-controlled, perform (1) CHECKOUT, (2) PUT, (3) PROPPATCH DAV:comment, then (4) CHECKIN. (DAV:comment is a live property used to supply a checkin comment; see <http://greenbytes.de/tech/webdav/rfc3253.html#rfc.section.3.1.1>). BR, Julian
I have a server that exposes some resources. I am supposed to build an API (in C) to expose these resources. The responses are supposed to be in JSON. The idea is that they work cross-platform: .NET, whatever. It appears that putting stuff on top of curl might work. Anyway, I am just looking for examples to get started with this. What is the general idea and/or what works when developing the SDK interface? It seems that as far as an SDK is concerned, things are no less complicated than SOAP? I mean, you still have to support plenty of manipulation for specific objects/resources. The question is how to abstract these resources. Is there a general paradigm or design pattern that works here?
qwertyqaa, On Nov 24, 2009, at 11:42 AM, qwertyqaa wrote: > i have a server that exposes some resources. I am supposed to build > an api (in c ) to expose these resources. The responses are supposed > to be in json. The idea is that they work cross platform, .net, > whatever. What do you mean by 'work cross platform'? Or IOW, how could HTTP+JSON not be cross platform? > > it appears that putting stuff on top of curl might work. curl/libcurl is a user agent/client connector. You need something else for the server side. > anyways, i am just looking for examples to get started to do this. > What is the general idea and/or what works when developing the sdk > interface. What do you mean by 'sdk interface' ? > > It seems that as far as a sdk is concerned things are no less > complicated than SOAP? Umm - you can try and stick your JSON into a SOAP envelope and parse that out on the client side again. Compare that with generating and parsing JSON alone. Especially doing that in C will make the difference more than obvious :-) > I mean, you still have to support plenty of manipulation for > specific objects/resources. I am not sure what you mean, can you explain? > > The question is how to abstract these resources. Is there a general > paradigm or design pattern that works here? > You might want to look at Atom/AtomPub for guidance - among other things it provides a standard way to deal with items and collections of items. Jan > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Tue, Nov 24, 2009 at 4:25 AM, Colin <colin.jack@...> wrote: >> A question that I'm struggling with right now is how to capture >> intent when you are creating/modifying/removing a resource. Take a >> simple example of a customer resource with one or more address sub- >> resources. >> >> We can POST an address representation to the customer resource, PUT >> on the individual address resources to modify them or DELETE an >> address resource altogether. > > > Hi, > > Seb posted about this exact example a while back, you might find it useful: > > http://tech.groups.yahoo.com/group/rest-discuss/message/12656 > > I guess the core issue here is that you are talking about the relationship between a party and an address rather than an operation on the address itself. > > Ta, > > Colin > > Ah, that helps clarify several things for me. Rickard's question is actually the same as my question, I just tried to make it a bit more generic to avoid getting into any CQRS discussions. I'm actually come to the opinion that if the reason for the change is important, it would probably be better to model the change itself and the set of changes that have taken place as a resource. So, for the address case we'd want a collection of address changes, a way to create a new change and a representation of address changes. Then we could POST to the collection with an address change. The server would create the new resource but would also cause the customer resource to be updated. Then the address change representation could contain as much or as little detail about why the address was changed as we need. Thanks, Rich
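Rich's conclusion (model the change itself as a resource, with the collection of changes POST-able and each change carrying its reason) might be sketched roughly like this. The `AddressChanges` class and the URI layout are invented for illustration; the side effect on the customer is the behaviour Rich describes.

```python
# Minimal in-memory sketch of "the address change is itself a resource":
# POSTing a change both records it (with its reason) and updates the
# customer. All names here are hypothetical.
import itertools


class AddressChanges:
    def __init__(self, customers):
        self.customers = customers  # customer_id -> current address
        self.changes = {}           # change_id -> change record
        self._ids = itertools.count(1)

    def post(self, customer_id, new_address, reason):
        change_id = next(self._ids)
        self.changes[change_id] = {
            "customer": customer_id,
            "address": new_address,
            "reason": reason,
        }
        # The new change resource also updates the customer's address.
        self.customers[customer_id] = new_address
        location = "/customers/%s/address-changes/%d" % (customer_id, change_id)
        return 201, {"Location": location}
```

The change record can then carry as much or as little detail about why the address changed as needed, without overloading PUT semantics or custom headers.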
Hello, I am new to REST development. I have a general question: which is the most popular and most widely used framework for RESTful web services? E.g. Jersey, Restlet or Rails. I am not asking for a specific language, but in general. Thanks Dhillon
>>>>> "dhillon" == dhillon sjsu <narpal.dhillon@...> writes:
dhillon> Hello I am new to the REST development. I have general
dhillon> question that which is the most popular and mostly used
dhillon> framework for RESTful web services. e.g. Jerset, Restlet
dhillon> or Rails. I am not asking for a specific language, but in
dhillon> general.
REST and framework don't belong in the same sentence. That's the short
answer.
The longer answer is that you don't need one, nor want one. If your
framework cannot tell you the HTTP method, or doesn't let you query or
specify headers (or makes it hard), it's probably not useful for
REST either.
--
Cheers,
Berend de Boer
On Wed, Nov 25, 2009 at 4:38 PM, <berend@...> wrote: >>>>>> "dhillon" == dhillon sjsu <narpal.dhillon@...> writes: > > dhillon> Hello I am new to the REST development. I have general > dhillon> question that which is the most popular and mostly used > dhillon> framework for RESTful web services. e.g. Jerset, Restlet > dhillon> or Rails. I am not asking for a specific language, but in > dhillon> general. > > REST and framework don't belong in the same sentence. That's the short > answer. ... and the wrong answer. Frameworks are useful even with REST, I have no clue why you'd suggest otherwise. I have no clue which is most popular or most used, just play around with a few and see which one resonates with you the most - Jersey did for me, but that should have zero impact on which one you choose... --tim
On Wed, Nov 25, 2009 at 1:38 PM, <berend@...> wrote: > REST and framework don't belong in the same sentence. That's the short > answer. > > The longer answer is that you don't need one, nor want one. If your > framework cannot tell you the HTTP method, doesn't allow (or makes it > heard) you to query or specify headers, it's probably not useful for > REST either. Oh, I pretty much disagree with this. The modern "REST" frameworks make working with HTTP and non-HTML workloads easier. After writing several of even the most simple services, you quickly discover lots of redundant code that readily can, and has been, captured well by the modern frameworks. That said, I have not used any of them, as my efforts have been simple enough that the value they bring doesn't outweigh the "weight" of learning and leveraging them. But my services have been really, really simple, so I just used raw Java Servlets for my work. What I have learned, though, is that, at least on the client side, the implementation of a REST system is not necessarily the difficult part (thus the attraction). In fact, I'd argue that writing a solid REST CLIENT is more difficult than a REST server, as much of the heavy lifting of the protocol is, in fact, in the client. Of course, the real "hard part" is the payload and endpoints themselves. Since, that's where the "REST"-ness of it all really lies, and the part most folks get wrong. A framework isn't necessary; anyone reasonable will likely factor out the bits they need if they just start writing implementations naturally. But that doesn't mean that the frameworks are useless. All that said, I can't comment on any of the specific frameworks. Servlets work up to a point, and have suited me so far, but are inadequate in the overall big picture (can't handle 100-Continue very well at all, for example). Any reasonable, modern layer above raw CGI will be a win though. Regards, Will Hartung (willh@...)
Dhillon: FWIW, here's my list of "must-haves" when evaluating a programming environment that aims to help developers work in the HTTP space: - Request Dispatcher : routes HTTP requests to the proper code. - URI Handler : parses the details of the URI (scheme, path, query info, etc.) - Mime Parser : handles the details of determining the media type including support for conneg - Request Handler : target of the Request Dispatcher; the fun code goes here - Transformer : converts stored data into the proper representation (and handles incoming representations, too) - HTTP Client : a 'mini HTTP Client' that allows you to make requests to other HTTP servers - Caching : basic support for caching and conditional requests - Authentication : understands various auth models I need all these things when I'm coding for HTTP and the quality of support in any framework determines whether I enjoy or dread working with that library. mca http://amundsen.com/blog/ On Wed, Nov 25, 2009 at 06:28, dhillon_sjsu <narpal.dhillon@...> wrote: > Hello > I am new to the REST development. I have general question that which is the most popular and mostly used framework for RESTful web services. > e.g. Jerset, Restlet or Rails. I am not asking for a specific language, but in general. > > Thanks > Dhillon > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
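The first item on Mike's list, the request dispatcher, can be illustrated with a toy sketch. This is a hypothetical Python fragment, not taken from any framework named in the thread; it only shows the routing idea of mapping (method, path) pairs to handler code.

```python
# Toy request dispatcher: route (method, URI pattern) pairs to handlers.
# Purely illustrative; real frameworks add conneg, caching, auth, etc.
import re


class Dispatcher:
    def __init__(self):
        self.routes = []  # list of (method, compiled pattern, handler)

    def add(self, method, pattern, handler):
        self.routes.append((method, re.compile("^" + pattern + "$"), handler))

    def dispatch(self, method, path):
        for m, rx, handler in self.routes:
            match = rx.match(path)
            if m == method and match:
                # Named groups in the pattern become handler arguments.
                return handler(**match.groupdict())
        return 404, "Not Found"
```

For example, `d.add("GET", r"/products/(?P<pid>\d+)", handler)` routes `GET /products/7` to `handler(pid="7")`, while any unrouted method or path falls through to 404.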
I did a "light" evaluation of some frameworks a year and a half or so ago. I'll try to find the results, but all of them were Java based... None of them, if I remember correctly, met our needs, mainly because they were all based on HTTP and we needed other protocol connectors as well. We finally used some parts of Jersey, basically the "server" stuff that didn't relate directly to HTTP. At that time I liked it because it was a very light framework, but I think that's not the case right now, with the addition of client-side stuff (which in my opinion should be a completely separate framework), lots of WADL stuff whose utility is very arguable, and so on... On the bright side, they have a huge community (their list is very high traffic) and two excellent moderators, Paul and Mark, who are very keen on answering every question. But my evaluation of the others didn't include their community/support, so I guess maybe the others are also like this. I'll post more info if I can. 2009/11/25 dhillon_sjsu <narpal.dhillon@...> > > > Hello > I am new to the REST development. I have general question that which is the > most popular and mostly used framework for RESTful web services. > e.g. Jerset, Restlet or Rails. I am not asking for a specific language, but > in general. > > Thanks > Dhillon > > >
dhillon_sjsu wrote: > Hello > I am new to the REST development. I have general question that which is the most popular and mostly used framework for RESTful web services. > e.g. Jerset, Restlet or Rails. I am not asking for a specific language, but in general. > > Thanks > Dhillon > You're being told there are no REST frameworks because most frameworks are built specifically for RESTful HTTP. It's confusing because most people use 'REST' when they actually mean RESTful HTTP. Most 'standard' MVC frameworks are ok - provided you can play with the routing logic to avoid exposing RPC'ish URIs. I like to treat a controller as a resource, and constrain controller 'actions'/methods to the HTTP verbs. This helps to think in terms of resources and the uniform interface, and it simplifies the routing logic to exactly one controller per URI pattern. I played around with Zend Framework to test this, and was reasonably happy with the result: http://github.com/mikekelly/Resauce Exposing resources is the easy bit though - now you need to establish representations which respect the hypermedia constraint. I guess AtomPub could be viewed as a 'framework' for this. - Mike
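Mike's "one controller per URI pattern, actions constrained to the HTTP verbs" idea can be sketched framework-independently. The `Resource` and `Product` classes below are invented for illustration and are not from Zend Framework or Resauce; the point is only that a controller exposes exactly the uniform-interface verbs and nothing else.

```python
# Sketch of a verb-constrained controller: a resource class whose only
# dispatchable actions are the HTTP verbs. All names are hypothetical.
class Resource:
    """Base class: only the uniform-interface verbs are dispatchable."""
    VERBS = {"GET", "PUT", "POST", "DELETE"}

    def handle(self, method, *args, **kwargs):
        # Reject anything outside the uniform interface, or any verb
        # this particular resource does not implement.
        if method not in self.VERBS or not hasattr(self, method.lower()):
            return 405, "Method Not Allowed"
        return getattr(self, method.lower())(*args, **kwargs)


class Product(Resource):
    """A read-only product resource: only GET is implemented."""
    def get(self, pid):
        return 200, {"id": pid}
```

An RPC-ish action like `Product.purchase` simply has no route here; a purchase would have to become its own resource, which is exactly the constraint doing its job.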
I myself am not a big fan of frameworks; I even wrote elsewhere about what I consider an anti-pattern that I called Framework Oriented Design Architecture, or FODA for short (Portuguese speakers will appreciate the irony...). Basically, what I call FODA is a more or less common practice of choosing a framework (or two or three) and then designing the architecture around the framework, rather than doing the opposite, and then having to "fit" the architecture to what the framework(s) can or cannot do, instead of to the business model it was supposed to fit. The best example of this is the myriad of applications that start by choosing "Spring + Hibernate" without taking into account the limitations of those two frameworks, and then conform to those limitations, which in turn limit the business value of the solution. That being said, it is undoubtedly true that frameworks are very useful for avoiding "plumbing" code and speeding up development. Like Spring Core when correctly used (I'm not so sure about Hibernate...). So of course, IMO, frameworks have their place in REST as in any development style, as long as they do not dictate the overall architecture. So I would say: design your architecture first (simply putting the ideas in your head in a consistent order, defining clearly the ends to which it aims, even sketching some fancy squares and circles and lines on a napkin, not necessarily a "formal" design, though remember you'll need that formality later in the process), and from there look not only at the frameworks that give you what you need, but also at *how* they do it, because you will probably need *some* of the functionality of a framework but will want to avoid committing yourself to the whole stack it provides, or you risk falling into a FODA. For instance, since the beginning we knew that for business reasons we had to support not only HTTP but a few other methods of communicating with our clients/business partners, and we built the design with that in mind. 
Had we chosen a framework first, we would have had to deal with big problems down the road: Jersey gives us what we need but only for HTTP, while Restlet supports some other protocols but is way too "expensive" (not in money, but technologically speaking) for one of the goals of our design, which is to be simple, "light", and extensible. So we ended up using Spring (core, beans, context), Spring Batch, Spring Web/MVC (only for the HTTP connector), big chunks of Jersey, Spring Integration (on hold now), Hibernate (against my will) and a few others like JackRabbit, Funambol, jBPM for very specific things. I hope this helps you in analysing the frameworks. berend@... wrote: > > >>>>> "dhillon" == dhillon sjsu <narpal.dhillon@... > <mailto:narpal.dhillon%40ymail.com>> writes: > > dhillon> Hello I am new to the REST development. I have general > dhillon> question that which is the most popular and mostly used > dhillon> framework for RESTful web services. e.g. Jerset, Restlet > dhillon> or Rails. I am not asking for a specific language, but in > dhillon> general. > > REST and framework don't belong in the same sentence. That's the short > answer. > > The longer answer is that you don't need one, nor want one. If your > framework cannot tell you the HTTP method, doesn't allow (or makes it > heard) you to query or specify headers, it's probably not useful for > REST either. > > -- > Cheers, > > Berend de Boer > >
So OpenRasta is just one point off, we don't have a caching infrastructure yet. :) -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of mike amundsen Sent: 25 November 2009 23:53 To: dhillon_sjsu Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Framework Dhillon: FWIW, here's my list of "must-haves" when evaluating a programming environment that aims to help developers work in the HTTP space: - Request Dispatcher : routes HTTP requests to the proper code. - URI Handler : parses the details of the URI (scheme, path, query info, etc.) - Mime Parser : handles the details of determining the media type including support for conneg - Request Handler : target of the Request Dispatcher; the fun code goes here - Transformer : converts stored data into the proper representation (and handles incoming representations, too) - HTTP Client : a 'mini HTTP Client' that allows you to make requests to other HTTP servers - Caching : basic support for caching and conditional requests - Authentication : understands various auth models I need all these things when I'm coding for HTTP and the quality of support in any framework determines whether I enjoy or dread working with that library. mca http://amundsen.com/blog/ On Wed, Nov 25, 2009 at 06:28, dhillon_sjsu <narpal.dhillon@...> wrote: > Hello > I am new to the REST development. I have general question that which is the most popular and mostly used framework for RESTful web services. > e.g. Jerset, Restlet or Rails. I am not asking for a specific language, but in general. > > Thanks > Dhillon > > > > > ------------------------------------ > > Yahoo! Groups Links > > > > ------------------------------------ Yahoo! Groups Links
There's a lot that can be done within the framework, for automatic management of response codes, generation of ETags and many more scenarios I have planned for the next version. I want to have the time to sit down and do it right rather than ship a half-baked implementation and impact users with bad caching instructions like some vendors I won't name do :) Seb -----Original Message----- From: mca@... [mailto:mca@...] On Behalf Of mike amundsen Sent: 26 November 2009 12:14 To: Sebastien Lambla Cc: dhillon_sjsu; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Framework Seb: support for conditionals is key for a framework, but the bulk of the caching spec can be covered by a separate tool (i.e. squid). mca http://amundsen.com/blog/ On Thu, Nov 26, 2009 at 07:03, Sebastien Lambla <seb@...> wrote: > So OpenRasta is just one point off, we don't have a caching infrastructure > yet. :) > > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On > Behalf Of mike amundsen > Sent: 25 November 2009 23:53 > To: dhillon_sjsu > Cc: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Framework > > Dhillon: > > FWIW, here's my list of "must-haves" when evaluating a programming > environment that aims to help developers work in the HTTP space: > - Request Dispatcher : routes HTTP requests to the proper code. > - URI Handler : parses the details of the URI (scheme, path, query info, > etc.) 
> - Mime Parser : handles the details of determining the media type > including support for conneg > - Request Handler : target of the Request Dispatcher; the fun code goes here > - Transformer : converts stored data into the proper representation > (and handles incoming representations, too) > - HTTP Client : a 'mini HTTP Client' that allows you to make requests > to other HTTP servers > - Caching : basic support for caching and conditional requests > - Authentication : understands various auth models > > I need all these things when I'm coding for HTTP and the quality of > support in any framework determines whether I enjoy or dread working > with that library. > > mca > http://amundsen.com/blog/ > > > > > On Wed, Nov 25, 2009 at 06:28, dhillon_sjsu <narpal.dhillon@...> > wrote: >> Hello >> I am new to the REST development. I have general question that which is > the most popular and mostly used framework for RESTful web services. >> e.g. Jerset, Restlet or Rails. I am not asking for a specific language, > but in general. >> >> Thanks >> Dhillon >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > > >
Dear *, can you kindly point me to some good examples of REST implementations? Preferably well documented ;-) - I've read the O'Reilly book about the Twitter API but I want to see more. Kind regards sws
The Netflix API is pretty good: http://developer.netflix.com/. The Flickr "REST" API works, and is worth checking out but isn't really REST in the official sense: http://www.flickr.com/services/api/. The LinkedIn REST API is new and exciting: http://developer.linkedin.com/community/apis Does this help? -Solomon On Thu, Nov 26, 2009 at 3:45 PM, swschilke <steffen.schilke@...> wrote: > > > Dear *, > > can you kindly point me out some good examples of REST implementations. > Preferable well documented ;-) - I've read the O'Reilly book about the > Twitter API but I want to see more. > > Kind regards > > sws > > >
Dhillon, I haven't used Restlet, but I've used Jersey and Rails a fair bit. Both work well and have different strengths and weaknesses. If I were going to write a full on API (without a UI) I'd probably go with Jersey. It has good separation, and does a lot of the tedious stuff for you (thinking content negotiation, URI construction, etc.). Thanks! Brandon On Wed, Nov 25, 2009 at 5:28 AM, dhillon_sjsu <narpal.dhillon@...> wrote: > Hello > I am new to the REST development. I have general question that which is the most popular and mostly used framework for RESTful web services. > e.g. Jerset, Restlet or Rails. I am not asking for a specific language, but in general. > > Thanks > Dhillon > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Using HTTP cache control headers effectively on a web site can help control the actual requests hitting the application server. Designing the URLs of a website carefully, following REST principles, can improve the caching potential of the website and make effective use of the cache infrastructure of the web. Does anyone know of websites which effectively use REST to improve cacheability? (I am not talking about using CDNs like Akamai, which help serve static content on the site.) www.e4.com is one website I know of which used RESTful URLs and HTTP caching very effectively. Martin Fowler blogged about a technique called SegmentationByFreshness at http://www.martinfowler.com/bliki/SegmentationByFreshness.html. Thanks, Unmesh
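One way to read SegmentationByFreshness is as a mapping from URI segments to cache lifetimes: stable content gets a long `max-age`, volatile or per-user content gets none. The sketch below is illustrative only; the segment map, lifetimes, and path prefixes are made up, not taken from e4.com or Fowler's post.

```python
# Hedged sketch of segmentation-by-freshness: assign Cache-Control
# lifetimes per URI segment. The FRESHNESS table is invented for
# illustration.
FRESHNESS = {
    "/static/": 86400,   # a day: rarely-changing assets
    "/products/": 3600,  # an hour: catalogue pages
    "/cart/": 0,         # never cache per-user state
}


def cache_headers(path):
    """Pick a Cache-Control header from the URI segment the path falls in."""
    for prefix, max_age in FRESHNESS.items():
        if path.startswith(prefix):
            if max_age == 0:
                return {"Cache-Control": "no-store"}
            return {"Cache-Control": "public, max-age=%d" % max_age}
    # Unknown segments: force revalidation rather than guessing.
    return {"Cache-Control": "no-cache"}
```

The design point is that the URL structure itself carries the freshness decision, so shared caches and CDN-free intermediaries can do the right thing without application-specific knowledge.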
On Wed, Nov 25, 2009 at 12:28 PM, dhillon_sjsu <narpal.dhillon@...> wrote: > Hello > I am new to REST development. I have a general question: which is the most popular and most widely used framework for RESTful web services? > e.g. Jersey, Restlet or Rails. I am not asking about a specific language, but in general. > We're pretty low in the popularity rankings (since we don't bother too much), but we wrapped Restlet + Maven2 + Spring into Kauri, which serves as a framework for developing REST-centric internet services and webapps. (And yes, we do expose methods, if you want.) www.kauriproject.org Thanks, Steven. -- Steven Noels http://outerthought.org/ Outerthought Open Source Java & XML stevenn at outerthought.org Makers of the Daisy CMS
Hi, REST is presented as a set of architectural constraints. Most of the 'popular' REST literature (at least at the entry level) concentrates on the definition of resources and resource identifiers. But I think resources and resource identifiers should always be thought of in the context of the other constraints. I was recently having a discussion with some colleagues about RESTful URIs for a website we are working on. We need to display universal products and regionalized products. The URLs are something like www.xyz.com/products www.xyz.com/product/id1 When I said that we should have separate unique URLs for regional products, like www.xyz.com/region1/products www.xyz.com/region1/product/id1, some people said that regions don't need to be part of the URL and that we can get them from user preferences in the HTTP session, keeping the URLs the same. Everyone was forgetting the 'stateless' constraint and almost assuming the existence of the HTTP session object provided by application servers. That's when I thought: if REST were presented as a pattern language, with explicit relations between patterns stating which patterns build context for others, would it be more useful for teaching and for designing websites? Thanks, Unmesh
Hi All,
David Ryan and Esmond Pitt have completed their proposal to the 6lowapp IETF working group for an application protocol for embedded devices.
David has blogged about it here: http://blog.livemedia.com.au/2009/12/argot-submitted-to-ietf.html
I figured this would be of interest to the readers and discussion here.
Notable snippet from David's blog:
The XPL system is a departure from how most application
protocols are designed. Protocols are normally designed and then
the implementation is created from the design. XPL binds the
design and implementation together so that they are
interrelated. This has interesting consequences for versioning
and detailed discovery. A device can be interrogated to
discover the structure of all the data that can be
sent/received to it. Combined with scripting languages and other
methods it would allow clients to be built automatically with
zero code. XPL is the first time that you can implant a formal
protocol description in any device or application down to the
smallest of devices.
A while back I mentioned my interest in fleshing out a part of the REST philosophy that Roy Fielding has shied away from completing himself. Roy has said in the past that a complete description of the REST architecture is effectively not provided in his thesis, because he did not include advice on the trade-offs involved in hypermedia type design. Sometimes, the best way to understand something is in terms of what it is not. What is Coke? Not Pepsi! With that idea, I give you the REST taste test, by giving you the chance to taste a recently proposed "thin server" architecture and tell me if it tastes like REST to you: http://z-bo.tumblr.com/post/246854981/what-is-sofea-what-is-soui Please note that it might be months before I post here again. Life's busy and I have a lot of competing interests. Also, the above blog is merely a rough draft of thoughts. I am sure there is plenty there that could be better explained. I actually wrote this a few weeks ago, in response to my twitchy reaction to reading about SOFEA and SOUI.
Slightly OT, but what are people's experiences implementing custom client-side media type handlers? I mean registering a helper application to handle, say, content of type application/vnd.my-cool-stuff when a representation of this type is returned in response to a browser request. What's the best option for doing this (both the registration and the actual content handling) in a cross-platform way? Thanks, Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Dec 3, 2009, at 2:36 PM, Stefan Tilkov wrote: > Slightly OT, but what are people's experiences implementing custom > client-side media type handlers? I mean registering a helper > application to handle, say, content of type application/vnd.my-cool- > stuff when a representation of this type is returned in response to > a browser request. > > What's the best option for doing this (both the registration and the > actual content handling) in a cross-platform way? Hmm, not sure I understand what you are up to. Do you mean telling the OS where to dispatch, or telling the browser where to dispatch? OTOH, I must be misunderstanding you, since any of these is surely very platform specific. Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On 03.12.2009, at 15:27, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 3, 2009, at 2:36 PM, Stefan Tilkov wrote: > >> Slightly OT, but what are people experiences in implementing custom >> client-side media type handlers? I mean registering a helper >> application to handle, say, content of type application/vnd.my-cool- >> stuff when a representation of this type is returned in response to >> a browser request. >> >> What's the best option to do this (both registration as well as >> actual content handling) in a cross-platform way? >> > > Hmm, not sure I understand what you are up to. Do you mean telling > the OS where to dispatch or telling the browser where to dispatch? Both. > OTH, I must be misunderstanding you since any of these is surely > very platform specific. I know, which is why I'm looking for documentation, tools or libraries that help to address this. Stefan Tilkov - sent from a mobile device - > > Jan > > > >> Thanks, >> Stefan >> >> -- >> Stefan Tilkov, http://www.innoq.com/blog/st/ >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > >
On Dec 3, 2009, at 3:35 PM, Stefan Tilkov wrote: > On 03.12.2009, at 15:27, Jan Algermissen <algermissen1971@...> wrote: >> Hmm, not sure I understand what you are up to. Do you mean telling >> the OS where to dispatch or telling the browser where to dispatch? > Both. >> OTH, I must be misunderstanding you since any of these is surely >> very platform specific. > I know, which is why I'm looking for documentation, tools or libraries > that help to address this. This sounds like a demand for moving the HTTP client connector into the OS's network libs :-) But seriously: I cannot imagine e.g. IE, FF and Safari ever having the same, language-independent API for registering plugins. And since the HTTP client connector is part of the browser, the handler registration would have to be there. Can you reveal the use case behind your question? Jan -------------------------------------- Jan Algermissen Mail: algermissen@...
Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On 03.12.2009, at 15:54, Jan Algermissen wrote: > But seriously: I cannot imagine e.g. IE, FF and Safari ever having the > same, language-independent API for registering plugins. I'm not necessarily talking about plugins. An external application would be OK, too. > And since the > HTTP client connector is part of the browsers the handler registration > would have to be there. > > Can you reveal the use case behind your question? I'm considering refactoring an existing, monolithic Java fat client application into something that can handle individual documents sent as entities in an HTTP response (instead of turning the whole thing into a Web app). As it's Java, it would work on all platforms that matter. I can invent some vnd.* content-type for the document format. But what's the effort required to register this media type on the different platforms? How do the different environments pass the content to the helper app, and is there a library that hides potential differences? On the Mac, there's something called the Launch Services API, which at first glance seems very close: http://developer.apple.com/mac/library/DOCUMENTATION/Carbon/Conceptual/LaunchServicesConcepts/LSCIntro/LSCIntro.html How do other platforms handle this? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Have you looked at JAF? http://java.sun.com/javase/technologies/desktop/javabeans/jaf/downloads/index.html I first used it when it came out in '98 IIRC, but haven't kept up with it. Mark.
It's specific to the browser and the underlying OS. Check out http://tools.ietf.org/html/rfc2183 for more information on signaling the document's disposition. After that it's up to the OS. If you want the document to render inside the browser (like PDF does in Safari, FF, IE) then you need an add-on that understands that content-type or content-disposition. Here's an example of how to do it on Microsoft's platform: http://windowsxp.mvps.org/ie/pdf.htm Why not reuse an existing media type? On Thu, Dec 3, 2009 at 7:19 AM, Stefan Tilkov <stefan.tilkov@...> wrote: > I'm considering to refactor an existing, monolithic Java fat client > application into something that can handle individual documents sent as > entities in an HTTP response [...] How do the different environments pass the > content to the helper app, and is there a library that hides potential > differences? > > Stefan
On 03.12.2009, at 17:46, Mark Baker wrote: > Have you looked at JAF? > > http://java.sun.com/javase/technologies/desktop/javabeans/jaf/downloads/index.html > > I first used it when it came out in '98 IIRC, but haven't kept up with it. > Thanks, but AFAICT, that would only work within a Java environment, i.e. I can register types within my Java program to be invoked once a particular media type shows up. Stefan > Mark. > >
On 03.12.2009, at 21:51, Noah Campbell wrote: > It's specific to the browser and the underlying os. Checkout http://tools.ietf.org/html/rfc2183 for more information on signaling the document extension. After that it's up to the os. > I know. > If you want the document to render specifically in the browser (like PDF does in Safari, FF, IE) then you need to have an addon that understand that's content-type or content-disposition. Here's an example of how to do it on MSFT. http://windowsxp.mvps.org/ie/pdf.htm This is the only kind of information I can find, and it's not what I'm looking for. All of my Google searches show up similar things. What I'm interested in is at least a description explaining how to add media type handlers to different kinds of environments (OS and/or browser). > > Why not reuse an existing media-type? > I fail to see how that would help me, as I'm trying to invoke a particular proprietary handler. Anyway, it seems this is too OT for this list, so I'll stop and maybe report on research results later. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Stefan, (I think it isn't too OT, because it's 'real life, applied REST' :-) On Dec 3, 2009, at 4:19 PM, Stefan Tilkov wrote: > I'm not necessarily talking about plugins. An external application > would be OK, too. Yes, I meant to write handler, not plugin. > I'm considering refactoring an existing, monolithic Java fat client > application into something that can handle individual documents sent > as entities in an HTTP response (instead of turning the whole thing > into a Web app). The place to register the handler (the place to configure the dispatcher) is the client-side HTTP connector your 'something' uses. I like the idea of having that in the 'file handling layer' of the OS by treating URLs like file names. I was thinking about a tool the other day that would display resources as desktop items and allow double-click for GET and drag-and-drop for PUT or POST (depending on which one the resource supports). Moving to the trash would be a DELETE, and 'Get Info' a HEAD. An item that is a collection could open up (like a Finder window) and make its elements likewise accessible. Maybe one would drag a pizza order document of type application/order+xml to the order-processor item on the desktop that represents the URL of your favorite delivery service. (Maybe that provokes some ideas for your use case) > As it's Java, it would work on all platforms that matter. I can > invent some vnd.* content-type for the document format. But what's > the effort required to register this media type on the different > platforms?
How do the different environments pass the content to the > helper app, and is there a library that hides potential differences? I think it might be possible to do this once per OS, but I doubt it would ever be OS-independent (as the connector would have to be part of the OS). > > On the Mac, there's something called the Launch Services API, which > at first glance seems very close: > > http://developer.apple.com/mac/library/DOCUMENTATION/Carbon/Conceptual/LaunchServicesConcepts/LSCIntro/LSCIntro.html > > How do other platforms handle this? No idea, sorry. But I bet Microsoft has something like it. Jan > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
This JSON Schema media type has been submitted as an Internet Draft to the IETF, and I thought this might be of interest to REST advocates, since a substantial portion of the specification is devoted to describing link relations in the JSON documents defined by JSON Schema (intended to provide a more interoperable mechanism for hypertext navigation of JSON in a REST architecture): http://tools.ietf.org/html/draft-zyp-json-schema-01 Any feedback is appreciated. Thanks, -- Kris Zyp SitePen (503) 806-1841 http://sitepen.com
Describing query parameters
One of the recurring questions of REST is how to describe services while
adhering to the "one URL must rule them all" constraint, meaning you
should only publish the root URL of your API and then have everything
else be discoverable from there.
This works very well when you are dealing with simple resource URLs,
like for instance a single movie at http://movies.org/movie/1234.
Everybody seems to agree that you just need to know the returned
content-type a priori to consume such a URL. So far so good - if I happen
to stumble upon a movie URL then my content-type tells me it's a movie,
and I know what to do with it.
The next question is then: how do I discover this URL when I am not
allowed to assume anything about the URL template? The answer seems to
be: you get it from the URL for the collection of all movies, for
instance http://movies.org/movies. This resource could simply return an
XML list of all movies in the system, and you could then select the
entry that corresponds to your intended movie.
Unfortunately our movie collection is way too big for this approach to
work, so we turn to searching or filtering the collection through query
parameters. You can for instance get all the thrillers by filtering with
the "thriller" category: http://movies.org/movies?category=thriller.
But, wait a moment, how do I know what parameters to use? This has
nothing to do with content-types, so publishing some movie vendor
content-type does not help me at all. To me it's quite obvious that my
movie API must somehow describe the available query parameters to the
consumers. But how (and without documenting the actual URL)?
Now we come to my first question: has anyone been using XHTML forms for
describing URL-parameters for REST services?
Here is an example description of my movies search service:
<form action="http://movies.org/movies" method="GET">
<p>Some general introduction ...</p>
<label for="categoryRef">The "category" parameter can filter the
movies collection by category</label>
<input type="text" name="category" id="categoryRef"/>
<input type="submit" value="Search"/>
</form>
XHTML forms seem like a perfect fit to me: 1) you get an executable
specification that the end user can try out, 2) you can include human-
readable prose for the parameter descriptions using <label> tags, 3) you
describe both the action URL and the HTTP method, and 4) you should in
principle be able to auto-generate code proxies for the forms.
This doesn't remove the need for a human to actually study the service
before programming the application, but it makes it easy to do so, and
it makes it possible to resolve the actual service URL at runtime by
looking at the form's action="..." attribute. I don't see any sensible
way of making the parameters themselves machine-deducible from the
description, so they must be hardcoded into the application after
reading the published (online) description.
Describing available services
Now we have a way of describing a single service and its associated
query parameters. But we still have no way of discovering the actual URL
of the service, since this must not be known a priori.
What we need is a list of all the available service URLs (or collection
URLs) in a computer (and human) readable form.
So here is my next question: are there any de-facto standards for a list
of service URLs?
Personally I would suggest using XHTML again with a description list:
<dl>
<dt><a href="http://movies.org/movies" class="movies">Movies</a></dt>
<dd>The "Movies" service lets you search all our available movies.</dd>
</dl>
Now our client can look for the anchor tag marked as class="movies" and
fetch the URL from the href attribute.
Another easy solution could be to use Atom, which essentially is just
a list of URLs and their descriptions.
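Resolving the service URL from such a description list can be sketched the same way - a hypothetical helper (Python stdlib only, names illustrative) that looks up the first anchor marked with the wanted class:

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Find the href of the first <a> carrying the wanted class attribute."""
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted = wanted_class
        self.href = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and self.href is None \
                and self.wanted in (attrs.get("class") or "").split():
            self.href = attrs.get("href")

# The description list from the example:
doc = ('<dl><dt><a href="http://movies.org/movies" class="movies">Movies'
       '</a></dt><dd>The "Movies" service lets you search all our '
       'available movies.</dd></dl>')
finder = LinkFinder("movies")
finder.feed(doc)
```

The class attribute acts as the machine-readable link relation here; a rel attribute or an Atom link element would serve the same purpose.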
Describing actions
This has been covered quite well by Subbu Allamaraju at InfoQ:
http://www.infoq.com/articles/subbu-allamaraju-rest - you simply
include action links in your formats.
So the URL for buying a movie would be embedded in the movie description
together with all sorts of other actions you can perform on the movie:
<movie>
...
<actions>
<link href="http://movies.org/movie/1234/buy" rel="buy-movie"/>
<link href="http://movie-rating.org/rate/1234" rel="rate-movie"/>
</actions>
</movie>
At each of these URLs you would be given an XHTML form describing how to
actually buy or rate a movie.
Putting it all together
So here is what my fictional movie client would do to use the API:
# Request service URLs
GET /
# Response - from this the client derives the "movies" URL.
200 OK
Content-Type: text/xhtml
<html>
<dl>
<dt><a href="http://movies.org/movies" class="movies">Movies</a></dt>
<dd>The "Movies" service lets you search all our available movies.</dd>
</dl>
</html>
# Request the movies service description
GET /movies
# Response - from this form the client derives the action URL to use for
searching movies
200 OK
Content-Type: text/xhtml
<html>
<form action="http://movies.org/movies">
<p>Some general introduction ...</p>
<label for="categoryRef">The "category" parameter can filter the
movies collection by category</label>
<input type="text" name="category" id="categoryRef"/>
<input type="submit" value="Search"/>
</form>
</html>
# Request thrillers
GET /movies?category=thriller
# Response
200 OK
Content-Type: application/vnd.movies.movie-collection+xml
<movies>
<movie href="http://movies.org/movie/1234">Thriller no. 1</movie>
</movies>
# Alternative response (using a micro format)
200 OK
Content-Type: text/xhtml
<html>
<ul>
<li><a href="http://movies.org/movie/1234" class="movie">Thriller no.
1</a></li>
</ul>
</html>
# At last we can request the actual movie
GET /movie/1234
# Response
200 OK
Content-Type: application/vnd.movies.movie+xml
<movie>
...
</movie>
/Jørn Wildt
(somewhat related to the older discussion here:
http://tech.groups.yahoo.com/group/rest-discuss/message/13707)
It's amazing how easy it is to get this stuff wrong. Bugger! I was really trying hard to get it right. After reading a few more blog posts I would like to correct some issues (thanks to Duncan Cragg for his great REST dialogues). Describing actions The example action URLs illustrated some wrong thinking. The links should be to other resources - nouns, not the verbs I used in my examples. Instead of a "buy" and a "rate" URL, it should be "orders" (to which a new order can be posted) and "ratings" (to which a new rating can be posted). So it should be: <movie> ... <actions> <link href="http://movies.org/movie/1234/orders" rel="orders"/> <link href="http://movie-rating.org/ratings/1234" rel="ratings"/> </actions> </movie> Describing content types Another thing I missed, and which I believe lots of other people overlook when designing REST APIs, is that content type descriptions describe not only the returned content type, but also what you can do with it! The last part was not obvious to me - and it explains why it is so hard to grasp that content types are the only descriptions needed: if you only consider content types as a returned type description then you cannot see where the state changes are described. In the movies example this means the returned content type of the /movies collection could be application/vnd.movies+xml with a content type description that says "you can use 'category' as a query parameter to filter movie collections by category". Right? The same idea goes for buying a movie: the content type of /movies/1234/orders is application/vnd.movies.order+xml with a description that says "you can buy a movie by POSTing a new order to any movie order collection". Open questions ... I still find the XHTML forms description of a resource an interesting idea. Does anyone have experience with it?
Consider a movie collection resource with content type application/vnd.movies+xml: I can search this collection by posting a search query (and get redirected to the result) - but how does the service distinguish between posting a search query and posting a new movie? By simply looking at the posted data? Thanks, Jørn
I think it's generally accepted that "entry point" URLs can be templated and rely on external documentation. Mind, everything should, inevitably, be discoverable from a simple GET /, but that's just an entry point like any other entry point. I don't think there is any requirement that there be a single entry point into your service, the root entry. At the same time, however, the more entry points you publish, the more you cede control to the client in terms of activities and operations, because ideally you are committed to those entry points that you publish, and to the structure of those entry points. This is the balance you're trying to achieve. As for query parameters, these are not "discoverable". That's the simple truth, at least not in a machine-to-machine transaction. All data types are essentially a priori information - information that it is assumed "everyone" knows. Now, your search form is a perfectly acceptable way of publishing documentation for a search, but there should be no assumption that a system will be able to "figure it out" and "know what to do" in order to search. That behavior will have to be coded. A programmer could send a request, see the resulting payload and go "oh, I see how they want us to do searches", so in that sense it's "inline" documentation. But you can just as easily post a link to the documentation telling them the same thing. Even then, with your simple form, you'll notice there's no information about the contents of the search parameters, other than that they're strings.
You can call "GET /" and get a human-readable document documenting and describing all of the services, while at the same time exposing those services and providing the proper links that systems can use to consume them. It's nice, but you can see how it can get expensive as well with larger payloads. My only complaint with using XHTML as a content-type is that it's too vague. You may as well use "text/plain". Both are perfectly adequate for humans, but terrible for machines, which is why I prefer the data types to be as specific as possible. You can perhaps rely on microformats and other patterns within the XHTML, letting XHTML be more of a wrapper; then the machines can analyze an XHTML payload and look for embedded formats that they understand. That's a good compromise in this case, but when the client sees "application/xhtml+xml", all it "knows" is that it's XHTML; there's no contract that it actually contains anything relevant to the client in terms of microformats or anything else. Regards, Will Hartung (willh@...)
At the very least, application/xhtml+xml tells you that you can throw an XML parser at it; a whole set of tools will work assuming it is well-formed. With text/html it's not quite so simple, but there do exist parsers, with all their quirks, for dealing with HTML. text/plain would only tell you that sed/awk/grep/etc. will work. -Noah On Mon, Dec 7, 2009 at 11:17 AM, Will Hartung <willh@...> wrote: > My only complaint with using XHTML as a content-type is that it's too > vague. You may as well use "text/plain". Both are perfectly adequate > for humans, but terrible for machines, which is why I prefer the > datatypes be as specific as possible. [...]
How does one model different operations on the same resource when we only have POST (and PUT/DELETE do not fit)? Let's assume we have a User in some system and we want to be able to: 1) Change password 2) Change e-mail 3) Change address For concurrency, versioning and other reasons we want to distinguish these three operations from each other. This means the client must make explicit which operation it performs. It is not allowed to post the whole resource representation, since this increases the risk of versioning conflicts where two clients read the same resource, change different properties, and then post the whole resource back again, overwriting the changes made by the other client. One solution is to switch on the posted content type: if it's a "password" then do one thing, if it's an "e-mail" then do something else, and so on. This, though, seems a bit like using the SOAP "action" header and tunneling everything through POST. Another solution is to have one sub-resource for each operation, like for instance /users/1234/password, /users/1234/email, /users/1234/address - now you know the operation by looking at the resource you are posting to. Are there better solutions out there? Thanks, Jørn
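The sub-resource variant can be sketched in a few lines (hypothetical handler names, Python; not from the thread): the path itself names the operation, so the server can dispatch without inspecting the body or the content type.

```python
# One sub-resource per operation: the URL names the state change requested.
def change_password(user_id, body):
    return 200, "password of %s changed" % user_id

def change_email(user_id, body):
    return 200, "email of %s changed" % user_id

def change_address(user_id, body):
    return 200, "address of %s changed" % user_id

HANDLERS = {
    "password": change_password,
    "email": change_email,
    "address": change_address,
}

def post(path, body):
    """Route POST /users/<id>/<operation> to the matching handler."""
    parts = path.strip("/").split("/")
    if len(parts) == 3 and parts[0] == "users" and parts[2] in HANDLERS:
        return HANDLERS[parts[2]](parts[1], body)
    return 404, "no such resource"
```

Because each operation has its own URI, each can also carry its own representation format and its own concurrency rules, which is exactly the separation the question asks for.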
On Mon, Dec 7, 2009 at 9:50 PM, Jørn Wildt <jw@...> wrote: > > > How do one model different operations on the same ressource when we only > have POST (and PUT/DELETE does not fit)? > > Lets assume we have a User in some system and we want to be able to: > > 1) Change password > 2) Change e-mail > 3) Change address > > For concurrency, versioning and other reasons we want to distingush these > three operations from each other. This means the client must make explicit > which operation it performs. It is not allowed to post the whole ressource > representation since this increases the risc of versioning conflicts where > two clients reads the same ressource, changes different properties, and then > posts the whole ressource back again, overwriting the changes done by > the other client. > > One solution is to switch on the posted content type: if it's a "password" > then do one thing, if it's a "e-mail" then do something else and so on. This > although seems a bit like using the SOAP "action" header and tunneling > everyhting through a POST. > For single field updates, this is probably not optimal, but if you've got different types of state changes that can be initiated by the same resource, each requiring a different set of data, this is a pretty reasonable approach. > > Another solution is to have one sub-ressource for each operation, like for > instance /users/1234/password, /users/1234/email, /users/1234/address - now > you know your operation by looking at the ressource your are posting to. > Many RESTafarians frown at doing "partial updates" (i.e. only update the fields that are actually included in the request body) with a PUT -- I tend towards the pragmatic view and used this in several APIs -- but when you're doing a POST I don't see a reason why it should not make sense. Letting the client change whatever combination of fields they need to in *one* request (and therefore probably a single database transaction) would seem reasonable to me. 
> > Are there better solutions out there? > Another thing you might consider is the PATCH verb, but it is not as commonly used. > > Thanks, Jørn > > Craig > >
Thanks for your input. > Many RESTafarians frown at doing "partial updates" (i.e. only update the > fields that are actually included in the request body) with a PUT Can you say why or point to some online resource with this debate? > For single field updates, this is probably not optimal, but if you've got > different types of state changes that can be initiated by the same resource, > each requiring a different set of data, this is a pretty reasonable > approach. I wasn't really thinking of single field updates, although I can see my examples are such. Your description "different types of state changes that can be initiated by the same resource, each requiring a different set of data" fits my intention quite well. Another example could be a collection where you can POST either a search query or a new member of the collection. /Jørn
On Mon, Dec 7, 2009 at 9:50 PM, Jørn Wildt <jw@...> wrote: > For concurrency, versioning and other reasons we want to distingush these > three operations from each other. This means the client must make explicit > which operation it performs. It is not allowed to post the whole ressource > representation since this increases the risc of versioning conflicts where > two clients reads the same ressource, changes different properties, and then > posts the whole ressource back again, overwriting the changes done by > the other client. > What does the scope of the change have to do with it? A change in the resource is a change in the resource. You can detect the change using an If-Unmodified-Since (or If-Match) header in HTTP, as one example, as a mechanism of optimistic locking. If the PUT fails (due to detection of the change), you have the option of refetching the resource, making your changes again, and resubmitting, or simply choosing to stomp on what happened before. Obviously the latter isn't a particularly good idea. The nice part of PUT/Refetch/Merge/PUT again is that it's pretty much guaranteed to work in all cases. That is, in the end, the data looks like what you would expect it to look like. You could use a more granular system (as Craig mentioned), but the nut there is the problem doesn't go away. You STILL have to (should) detect backend changes happening and handle it appropriately. You STILL should handle the optimistic locking scenario. By doing it on the entire resource, you only have to do this once, rather than with every granular change you wish to make. Obviously the amount of activity on a resource will affect what you want to do. But, in truth, if a resource is especially "hot", where clients are constantly racing to get things accomplished, you may well be better off breaking that resource up into something more granular or rethinking it to eliminate the race conditions rather than relying on any kind of locking/control scheme.
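The PUT/Refetch/Merge/PUT cycle described above can be sketched with a toy in-memory store. HTTP's validators for this are ETag with If-Match (or Last-Modified with If-Unmodified-Since); the `Store` class, its method names, and the MD5-based ETag here are all made up for illustration, not a real server implementation.

```python
# Toy sketch of optimistic locking: a stale conditional PUT fails with
# 412 Precondition Failed instead of silently overwriting another
# client's change.
import hashlib
import json

class Store:
    def __init__(self):
        self.resources = {}  # uri -> dict state

    def etag(self, uri):
        # Derive a validator from the current representation.
        body = json.dumps(self.resources[uri], sort_keys=True)
        return hashlib.md5(body.encode()).hexdigest()

    def get(self, uri):
        return dict(self.resources[uri]), self.etag(uri)

    def put_if_match(self, uri, state, if_match):
        if self.etag(uri) != if_match:
            return 412  # someone else changed the resource in between
        self.resources[uri] = dict(state)
        return 200

store = Store()
store.resources["/users/1234"] = {"email": "a@example.org", "address": "London"}

# Two clients read the same resource.
state_a, etag_a = store.get("/users/1234")
state_b, etag_b = store.get("/users/1234")

# Client A changes the address and wins the race.
state_a["address"] = "Paris"
assert store.put_if_match("/users/1234", state_a, etag_a) == 200

# Client B's stale PUT is rejected rather than losing A's change.
state_b["email"] = "b@example.org"
assert store.put_if_match("/users/1234", state_b, etag_b) == 412

# B refetches, re-applies its change to the fresh state, and retries.
state_b, etag_b = store.get("/users/1234")
state_b["email"] = "b@example.org"
assert store.put_if_match("/users/1234", state_b, etag_b) == 200
```

The end result is the merged state both clients intended, which is the "guaranteed to work" property of the refetch-and-merge cycle.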
> One solution is to switch on the posted content type: if it's a "password" > then do one thing, if it's a "e-mail" then do something else and so on. This > although seems a bit like using the SOAP "action" header and tunneling > everyhting through a POST. > This is RPC, not a resource system. > Another solution is to have one sub-ressource for each operation, like for > instance /users/1234/password, /users/1234/email, /users/1234/address - now > you know your operation by looking at the ressource your are posting to. > But you still have the locking issue anyway, as I mentioned before. The data is finer, more granular, so perhaps the overall impact will be less, but the problem still remains. Regards, Will Hartung (willh@...)
On Mon, Dec 7, 2009 at 11:01 PM, Jorn Wildt <jw@...> wrote: > Another example could be a collection where your can POST either a search > query or a new member of the collection. > You shouldn't be POSTing a query. POST is (typically) creating a new, unnamed resource (that is, the server gets to decide the name). Many simply use a GET for search, and provide options. If you wanted use POST for a search (perhaps your search criteria is simply too long, or some other reason), then you'd likely be better off to make the search query itself a first class resource, perhaps managed through they /queries resource. You can POST the criteria to /queries, and it returns a query identifier, that you can then use later, /shoes?query=http://host.com/queries/1234. Regards, Will Hartung (willh@...)
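The query-as-resource idea above can be sketched as follows; the `/queries` collection, the id scheme, and the filtering logic are hypothetical, but they show the shape of POSTing criteria once and then referring to the stored query from a GET.

```python
# Toy sketch: make the search query a first-class, server-named resource.
import itertools

queries = {}               # query id -> stored criteria
counter = itertools.count(1)

def post_query(criteria):
    """POST /queries -> server assigns the name of the new resource."""
    qid = next(counter)
    queries[qid] = criteria
    return "/queries/%d" % qid

def get_shoes(query_uri, catalogue):
    """GET /shoes?query=... -> evaluate the stored criteria."""
    qid = int(query_uri.rsplit("/", 1)[1])
    criteria = queries[qid]
    return [item for item in catalogue
            if all(item.get(k) == v for k, v in criteria.items())]

uri = post_query({"colour": "red", "size": 42})
catalogue = [{"name": "runner", "colour": "red", "size": 42},
             {"name": "boot", "colour": "black", "size": 42}]
assert get_shoes(uri, catalogue) == [
    {"name": "runner", "colour": "red", "size": 42}]
```

Because the criteria live at their own URI, the subsequent search is a plain GET: cacheable, bookmarkable, and free of the "POSTing a query" overload.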
Jorn,
On Dec 8, 2009, at 8:01 AM, Jorn Wildt wrote:
> Thanks for your input.
>
>> Many RESTafarians frown at doing "partial updates" (i.e. only
>> update the
>> fields that are actually included in the request body) with a PUT
>
> Can you say why or point to some online ressource with this debate?
>
>> For single field updates, this is probably not optimal, but if
>> you've got
>> different types of state changes that can be initiated by the same
>> resource,
>> each requiring a different set of data, this is a pretty reasonable
>> approach.
>
> I wasn't really thinking of single fields updates although I can see
> my examples are such. Your description "different types of state
> changes that can be initiated by the same resource, each requiring a
> different set of data" fits my intention quite well.
For the update scenario you have three choices:
1) PUT the complete new state (e.g. the whole person
representation)
2) PATCH the resource with an appropriate diff
3) POST to an update-processor subresource, e.g.
POST /person/3344/updates and have server return
303 See Other
Location: /person/3344
to tell client that the person resource has changed
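Option 2 above can be sketched with a toy "set these fields" diff. The diff format and the `apply_patch` helper are invented for illustration; a real PATCH body needs a media type that client and server have agreed on, so the server knows how to interpret the diff.

```python
# Toy sketch of PATCH semantics: apply a field-level diff to the
# resource state without sending the whole representation.
def apply_patch(state, diff):
    patched = dict(state)
    patched.update(diff)  # only the listed fields change
    return patched

person = {"firstName": "TONINHO", "lastName": "METRALHA", "city": "Lisbon"}
diff = {"city": "Porto"}
assert apply_patch(person, diff) == {
    "firstName": "TONINHO", "lastName": "METRALHA", "city": "Porto"}
```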
>
> Another example could be a collection where your can POST either a
> search query or a new member of the collection.
>
You need different resources for this because the resource semantics
determine the actual 'meaning' of POST. Doing two things that are
conceptually different would overload this meaning.
Besides - you should use GET for the querying.
HTH,
Jan
> /Jørn
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Hi,
I am doing some research in Web Services and RESTful Web Services. I was wondering - is there a journal for research like this?
Thanks,
Sean.
Will Hartung wrote: > > > > One solution is to switch on the posted content type: if it's a > "password" then do one thing, if it's a "e-mail" then do something > else and so on. This although seems a bit like using the SOAP > "action" header and tunneling everyhting through a POST. > > > This is RPC, not a resource system. Ok - Why do you say that?
I have the same doubt, if it is driven by the content-type, why is RPC? 2009/12/8 Mike Kelly <mike@...> > > > Will Hartung wrote: > > > > > > > > One solution is to switch on the posted content type: if it's a > > "password" then do one thing, if it's a "e-mail" then do something > > else and so on. This although seems a bit like using the SOAP > > "action" header and tunneling everyhting through a POST. > > > > > > This is RPC, not a resource system. > > Ok - Why do you say that? > >
Guilherme Silveira wrote: > > > One solution is to switch on the posted content type: if it's a > > "password" then do one thing, if it's a "e-mail" then do something > > else and so on. This although seems a bit like using the SOAP > > "action" header and tunneling everyhting through a POST. > > > > > > This is RPC, not a resource system. > > Ok - Why do you say that? > > Hello Will, > > If you add control (action) information within anything else but http > headers or verb, you break the uniform interface: the action depends > on something that only your system can understand. > It is not visible to intermediate layers what that request represents. > > Summing up, you lose visibility and you break the uniform interface > > You can create custom proxies that understand this kind of messages, > but why do it if you already have the current ones in the real world > working for you? The proposal was to use content-type to indicate the nature of the POST, so it is equally visible - and this seems a more appropriate use of the uniform interface than partial updates with PUT. If you take the approach of splitting up the user resource into more granular 'sub-resources', yet you continue to expose the main user resource as a composite resource that derives state from these new sub-resources, there is an equal overall loss in visibility: GET /user/1234 ... ... <address uri="/user/1234/address">London, UK</address> ... ... PUT /user/1234/address <address>Paris, France</address> GET /user/1234 ... ... <address uri="/user/1234/address">London, UK</address> [??] ... ... The above lacks visibility since the user resource's (composite) state has been updated "invisibly" as far as the system is concerned. Obviously the solution here is to avoid composite resources and stick to hyperlinks only - most people use composite resources, however, because they 'make things easier' and/or it avoids the overhead that comes with the increased number of HTTP requests required. - Mike
On Dec 8, 2009, at 1:01 PM, Guilherme Silveira wrote:
> Hello Jorn,
>
> As you mentioned, If you use the HTTP verbs in a way they were not
> mean to be used, you break the uniform interface (and create a
> proprietary extension for the app protocol).
> An exposed resource does not need to be a full representation of what
> you have in your database.
>
> PUT /user/{username} ==> will update the user information
> PUT /user//{username}/password ==> will update the user'slogin
> resource
> PUT /user//{username}/contact ==> will update the user's contact
> resource
> PUT /user//{username}/address ==> will update the user's address
> resource
>
> Some stolen comments: "Resources are not storage items (or, at least,
> they aren’t always equivalent to some storage item on the back-end).
> The same resource state can be overlayed by multiple resources, just
> as an XML document can be represented as a sequence of bytes or a tree
> of individually addressable nodes."
>
> Try not to think as URI <= 1 to 1 mapping => database tables. This is
> one of the typical mistakes people would make with hibernate/ejb in
> the java world in its early days.
>
> Note that all invocations are idempotent and lockable if you use the
> corresponding http headers.
By 'lockable' you mean 'concurrency controllable', yes? There are no
http headers for locking.
Jan
>
> Although breaking an internal element into different resource
> representation is fine, I am not sure about opinions on whether there
> can be two ways of POSTING a resource (i.e. POST /user will create the
> user and POST /full_user will create it will its entire
> representation), although I believe its just fine.
>
> Regards
>
> Guilherme Silveira
> Caelum | Ensino e Inovação
> http://www.caelum.com.br/
>
>
> 2009/12/8 Jan Algermissen <algermissen1971@...>
>>
>>
>>
>> Jorn,
>>
>> On Dec 8, 2009, at 8:01 AM, Jorn Wildt wrote:
>>
>>> Thanks for your input.
>>>
>>>> Many RESTafarians frown at doing "partial updates" (i.e. only
>>>> update the
>>>> fields that are actually included in the request body) with a PUT
>>>
>>> Can you say why or point to some online ressource with this debate?
>>>
>>>> For single field updates, this is probably not optimal, but if
>>>> you've got
>>>> different types of state changes that can be initiated by the same
>>>> resource,
>>>> each requiring a different set of data, this is a pretty reasonable
>>>> approach.
>>>
>>> I wasn't really thinking of single fields updates although I can see
>>> my examples are such. Your description "different types of state
>>> changes that can be initiated by the same resource, each requiring a
>>> different set of data" fits my intention quite well.
>>
>> For the update scenario you have three choices:
>>
>> 1) PUT the complete new state (e.g. the whole person
>> representation)
>> 2) PATCH the resource with an appropriate diff
>> 3) POST to an update-processor subresource, e.g.
>> POST /person/3344/updates and have server return
>> 303 See Other
>> Location: /person/3344
>>
>> to tell client that the person resource has changed
>>
>>>
>>> Another example could be a collection where your can POST either a
>>> search query or a new member of the collection.
>>>
>>
>> You need different resources for this because the resource semantics
>> determine the actual 'meaning' of POST. Doing two things that are
>> conceptually different would overload this meaning.
>>
>> Besides - you should use GET for the querying.
>>
>> HTH,
>> Jan
>>
>>> /Jørn
>>>
>>>
>>>
>>
>> --------------------------------------
>> Jan Algermissen
>>
>> Mail: algermissen@...
>> Blog: http://algermissen.blogspot.com/
>> Home: http://www.jalgermissen.com
>> --------------------------------------
>>
>>
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Mon, Dec 7, 2009 at 11:50 PM, Jørn Wildt <jw@...> wrote: > Another solution is to have one sub-ressource for each operation, like for instance /users/1234/password, /users/1234/email, /users/1234/address - now you know your operation by looking at the ressource your are posting to. > That's the one I use all the time. I do not understand the objections to it. Maybe people are thinking of the /users/1234 resource as a database record. It's just a resource. So is /users/1234/password.
> > > One solution is to switch on the posted content type: if it's a > > "password" then do one thing, if it's a "e-mail" then do something > > else and so on. This although seems a bit like using the SOAP > > "action" header and tunneling everyhting through a POST. > > > > > > This is RPC, not a resource system. > > Ok - Why do you say that? > Hello Will, If you add control (action) information within anything else but http headers or verb, you break the uniform interface: the action depends on something that only your system can understand. It is not visible to intermediate layers what that request represents. Summing up, you lose visibility and you break the uniform interface You can create custom proxies that understand this kind of messages, but why do it if you already have the current ones in the real world working for you? Regards > >
On Dec 8, 2009, at 2:10 PM, Bob Haugen wrote: > On Mon, Dec 7, 2009 at 11:50 PM, Jørn Wildt <jw@...> > wrote: >> Another solution is to have one sub-ressource for each operation, >> like for instance /users/1234/password, /users/1234/email, /users/ >> 1234/address - now you know your operation by looking at the >> ressource your are posting to. >> > > That's the one I use all the time. I do not understand the objections > to it. Not an objection, but something to consider: Splitting a resource into sub resources increases the number of relationships that need to be understood by client and server. OTOH, it makes the use of text/plain possible for representing the sub resources (and any complex format that can be avoided is one thing less to maintain). Jan > Maybe people are thinking of the /users/1234 resource as a > database record. It's just a resource. So is /users/1234/password. -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hello Jorn,
As you mentioned, If you use the HTTP verbs in a way they were not
meant to be used, you break the uniform interface (and create a
proprietary extension for the app protocol).
An exposed resource does not need to be a full representation of what
you have in your database.
PUT /user/{username} ==> will update the user information
PUT /user/{username}/password ==> will update the user's login resource
PUT /user/{username}/contact ==> will update the user's contact resource
PUT /user/{username}/address ==> will update the user's address resource
Some stolen comments: "Resources are not storage items (or, at least,
they aren’t always equivalent to some storage item on the back-end).
The same resource state can be overlayed by multiple resources, just
as an XML document can be represented as a sequence of bytes or a tree
of individually addressable nodes."
Try not to think as URI <= 1 to 1 mapping => database tables. This is
one of the typical mistakes people would make with hibernate/ejb in
the java world in its early days.
Note that all invocations are idempotent and lockable if you use the
corresponding http headers.
Although breaking an internal element into different resource
representation is fine, I am not sure about opinions on whether there
can be two ways of POSTING a resource (i.e. POST /user will create the
user and POST /full_user will create it with its entire
representation), although I believe it's just fine.
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
2009/12/8 Jan Algermissen <algermissen1971@...>
>
>
>
> Jorn,
>
> On Dec 8, 2009, at 8:01 AM, Jorn Wildt wrote:
>
> > Thanks for your input.
> >
> >> Many RESTafarians frown at doing "partial updates" (i.e. only
> >> update the
> >> fields that are actually included in the request body) with a PUT
> >
> > Can you say why or point to some online ressource with this debate?
> >
> >> For single field updates, this is probably not optimal, but if
> >> you've got
> >> different types of state changes that can be initiated by the same
> >> resource,
> >> each requiring a different set of data, this is a pretty reasonable
> >> approach.
> >
> > I wasn't really thinking of single fields updates although I can see
> > my examples are such. Your description "different types of state
> > changes that can be initiated by the same resource, each requiring a
> > different set of data" fits my intention quite well.
>
> For the update scenario you have three choices:
>
> 1) PUT the complete new state (e.g. the whole person
> representation)
> 2) PATCH the resource with an appropriate diff
> 3) POST to an update-processor subresource, e.g.
> POST /person/3344/updates and have server return
> 303 See Other
> Location: /person/3344
>
> to tell client that the person resource has changed
>
> >
> > Another example could be a collection where your can POST either a
> > search query or a new member of the collection.
> >
>
> You need different resources for this because the resource semantics
> determine the actual 'meaning' of POST. Doing two things that are
> conceptually different would overload this meaning.
>
> Besides - you should use GET for the querying.
>
> HTH,
> Jan
>
> > /Jørn
> >
> >
> >
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@acm.org
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
>
>
> By 'lockable' you mean 'concurrency controllable', yes? There are no > http headers for locking. Yes, sorry. > Obviously the solution here is to avoid composite resources and stick to > hyperlinks only - most people use composite resources, however, because they > 'make things easier' and/or it avoids the overhead that comes with the > increased number of HTTP requests required. > - Mike Agreed... can you give an example of the hyperlinks helping it? (linking from the base resource - which doesn't contain the composite ones - to other resources that can be PUT to or something else?) Any reading suggestions with similar examples? Thanks Mike, Guilherme
On Tue, Dec 8, 2009 at 7:30 AM, Jan Algermissen <algermissen1971@...> wrote: > Splitting a resource into sub resources increases the amount of > relationships that need to be understood by client and server. The main resource (or some other entry point) can (and usually does) offer hyperlinks to the subresources.
Does anyone know of an existing link relation that should be used to link to an atom feed of a service's "health" (e.g. Amazon status[1])? I've looked at the various registries (atom, mnot, whatwg, etc.) and haven't seen anything appropriate yet. Thanks, --tim [1] - http://status.aws.amazon.com/rss/ElasticMapReduce.rss
"Bob Haugen" <bob.haugen@gmail.com> schrieb: >On Tue, Dec 8, 2009 at 7:30 AM, Jan Algermissen <algermissen1971@mac.com> wrote: >> Splitting a resource into sub resources increases the amount of >> relationships that need to be understood by client and server. > >The main resource (or some other entry point) can (and usually does) >offer hyperlinks to the subresources. As long as you can agree on a limited set of relation types for the subresources, a client only needs to understand those. One kind of relation would be e.g. "property" for the relation of /user/xxx to /user/xxx/name. "Property" will indicate to the client that the related resource represents a single property of the linking resource. I'm not sure that this is the best example, but I hope you get the idea. What remains is the task of defining an ontology of relations for your resources. I wonder if there is something generic that can be used, e.g. in the rdf or owl ecosystems. -billy -- Sent from my Android phone with K-9. Please excuse my brevity.
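The "property" relation idea above can be sketched as a representation that links a resource to its property subresources. The rel name, the URIs, and the JSON-ish link structure are all hypothetical; the point is only that a client needs to understand the one relation type, not each subresource individually.

```python
# Toy sketch: a resource representation advertising its property
# subresources through a single, generic "property" link relation.
def render_user(uid, fields):
    links = [{"rel": "property", "href": "/user/%s/%s" % (uid, name)}
             for name in sorted(fields)]
    return {"uri": "/user/%s" % uid, "links": links}

doc = render_user("1234", {"name", "email"})
assert doc["links"] == [
    {"rel": "property", "href": "/user/1234/email"},
    {"rel": "property", "href": "/user/1234/name"},
]
```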
Does this get through? My two previous messages are lost somewhere ...
(sorry for the noise)
/Jørn
----- Original Message -----
From: Bob Haugen
To: rest-discuss@yahoogroups.com
Sent: Tuesday, December 08, 2009 3:47 PM
Subject: Re: [rest-discuss] Multiple operations on the same ressource
On Tue, Dec 8, 2009 at 7:30 AM, Jan Algermissen <algermissen1971@...> wrote:
> Splitting a resource into sub resources increases the amount of
> relationships that need to be understood by client and server.
The main resource (or some other entry point) can (and usually does)
offer hyperlinks to the subresources.
On Dec 8, 2009, at 7:44 PM, Tim Williams wrote: > Does anyone know of an existing link relation that should be used to > link to an atom feed of a service's "health" No. Probably 'status' (linking from a resource to another resource that represents its status) is a generic enough concept to warrant standardization? Hmm, that reminds me of something I have been wanting to do since 2004 or so: investigating standardization of service management (e.g. ITIL) related concepts (including monitoring systems, trouble ticketing systems etc.). (Your 'status' would be a core concept). If there is anybody interested in discussing this further, let me know. Jan > (e.g. Amazon status[1])? > I've looked at the various registries (atom, mnot, whatwg, etc.) and > haven't seen anything appropriate yet. > > Thanks, > --tim > > [1] - http://status.aws.amazon.com/rss/ElasticMapReduce.rss -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
> > One solution is to switch on the posted content type: if it's a > > "password" then do one thing, if it's a "e-mail" then do something > > else and so on. This although seems a bit like using the SOAP > > "action" header and tunneling everyhting through a POST. > > > > This is RPC, not a resource system. > > Ok - Why do you say that? > > If you add control (action) information within anything else but http > headers or verb, you break the uniform interface: the action depends on Sorry, but it seems to me that there is a slight misunderstanding here: I was talking about switching on content type, which is a known header. I did, though, compare it to SOAP's action header, so the question is: what is the context for your answer - the content type header or the action header? I guess you are referring to the action header? Switching on content type still seems okay to me - even though I do not know if it makes sense when you take the other answers into account. The feedback so far is mostly: create another resource to POST to - don't do different POSTs to the same resource. /Jørn ----- Original Message ----- From: "Guilherme Silveira" <guilherme.silveira@...> To: "Mike Kelly" <mike@...> Cc: "Will Hartung" <willh@...>; "Jørn Wildt" <jw@...>; "rest-discuss" <rest-discuss@yahoogroups.com> Sent: Tuesday, December 08, 2009 1:27 PM Subject: Re: [rest-discuss] Multiple operations on the same resource > > > One solution is to switch on the posted content type: if it's a > > "password" then do one thing, if it's a "e-mail" then do something > > else and so on. This although seems a bit like using the SOAP > > "action" header and tunneling everyhting through a POST. > > > > > > This is RPC, not a resource system. > > Ok - Why do you say that? > Hello Will, If you add control (action) information within anything else but http headers or verb, you break the uniform interface: the action depends on something that only your system can understand.
It is not visible to intermediate layers what that request represents. Summing up, you lose visibility and you break the uniform interface You can create custom proxies that understand this kind of messages, but why do it if you already have the current ones in the real world working for you? Regards > >
On Tue, Dec 8, 2009 at 1:07 PM, Jørn Wildt <jw@...> wrote: >> > One solution is to switch on the posted content type: if it's a >> > "password" then do one thing, if it's a "e-mail" then do something >> > else and so on. This although seems a bit like using the SOAP >> > "action" header and tunneling everyhting through a POST. >> > >> > This is RPC, not a resource system. >> >> Ok - Why do you say that? >> >> If you add control (action) information within anything else but http >> headers or verb, you break the uniform interface: the action depends on > > Sorry, but it seems to me that there is a slight misunderstanding here: I > was talking about switching on content type which is a known header. I did > although compare it to SOAP's action header, so the question is: what is the > context for your answer - the content type header or the action header? I > guess you are referring to the action header? > > Switching on content type still seems okay to me - even though I do not know > if it makes sense when you take the other answers into account. The feed > back so far is mostly: create another ressource to POST to - don't do > different POSTs to the same ressource. No, you're right. I misspoke. It's an interesting idea. The premise is that PUT takes a resource representation and performs the update. It does "muddy" the concept of a PUT at the detail level. But, from a pragmatic level, it's really much like quibbling over the difference between: UPDATE name SET (firstName, middleInitial, lastName) VALUES (:origFirstName, :origMiddleInitial, :newLastName) WHERE id = :id; and, simply: UPDATE name SET (lastName) VALUES (:newLastName) WHERE id = :id; I think to be pedantic, you would use PATCH instead of PUT for this, but that's just because it seems to have found favor (I don't know the origin for PATCH, as it's not one of the HTTP verbs, though WebDAV uses PROPPATCH, so there's likely some inspiration from that).
As for the argument about uniform interface, and that using PUT with a fragment doesn't quite comply with that, I'd probably disagree as well, as the uniform interface (i.e. PUT will take the entire resource representation and do the right thing) still exists; this is just an overloading of it. What would be best is that the availability of a "fragment enabled" PUT is discoverable (perhaps via OPTIONS, or some other negotiation protocol), so that clients can degrade gracefully. So, basically, I think a fragment can work well, but I think you should still be able to send the entire resource as well, using the fragments as an optimization for those clients that support it. Regards, Will Hartung (willh@...)
Can anyone tell the reason for letting "reply" only reply to the posting author on this list? The result is that most people do a "reply to all" which results in two duplicate replies on my mail client most of the time. Can it be changed? Thanks, Jørn
Stefan,
[yet another late reply to this one :-)]
On Sep 2, 2008, at 11:04 AM, Stefan Tilkov wrote:
> On 02.09.2008, at 08:59, Roy T. Fielding wrote:
>
>> On Sep 1, 2008, at 10:41 PM, Stefan Tilkov wrote:
>>> What do you call the concept of "classes" or "types" of resources in
>>> your RESTful designs? E.g. when you decide to turn each "customer"
>>> into its own identifiable resource - http://example.com/customers/
>>> 1234
>>> - what does http://example.com/customers/{id} describe? Both
>>> "resource
>>> class" and "resource type" would work, but don't seem really
>>> convincing.
>>>
>>
>> We call them resources. If they had types, they would be strongly
>> coupled to whatever expected that type.
>>
>> ....Roy
>
> I don't want to suggest there's this kind of coupling (in fact I view
> the lack of it as a strength, and this is why I'm unhappy with
> "type"). What does the template identify? Resource, obviously, but
> also a "group of similarly identified resources"?
>
> It's not really the URI template connection I'm concerned with, I just
> wonder whether there's a better term than "kind of …", as in "The
> first step in designing an application interface should be to identify
> the different kinds of resources you want to expose."
What about thinking about this in terms of "kinds of application
states"? Resources represent application states that the client can
transition to. In M2M interactions the client code is a manifestation
of the assumptions the client (developer) makes about the application
state it reaches at a certain point.
When I code a client for an ordering service, there will be (in some
way) the coded assumption that after submitting an order there will be
an application state that represents the order and (most importantly)
provides the next transitions for 'operations' on that order (order
change, order cancelation).
I understand this in the sense that the client expects a certain "kind
of application state" and this expectation corresponds to the reason
why the server "exposes certain kinds of resources".
Despite the fact that all resources are just resources this client
side expectation about the next available transitions (which is what
the "kind of application state" essentially means) is just not going
to go away.
Jan
>
> Thanks,
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
--------------------------------------
Jan Algermissen
Mail: algermissen@...g
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Jørn Wildt wrote: > Can anyone tell the reason for letting "reply" only reply to the posting > author on this list? http://www.unicom.com/pw/reply-to-harmful.html http://woozle.org/~neale/papers/reply-to-still-harmful.html
On Wed, Dec 9, 2009 at 4:57 AM, Jon Hanna <jon@...> wrote: > Jørn Wildt wrote: >> Can anyone tell the reason for letting "reply" only reply to the posting >> author on this list? > > http://www.unicom.com/pw/reply-to-harmful.html > http://woozle.org/~neale/papers/reply-to-still-harmful.html I don't know much about "munging". I don't know much about IETF mail specifications. But I do know that this apparently purist approach runs counter to my own user expectations and is, therefore, unfriendly. It's different than every other list I subscribe to. Anyway, my +1 for seeing if it can't be changed - even if that's the 'wrong' thing to do:) --tim
There has been a lot of discussion about the right way to implement a REST service, but less focus on how you would actually code a client. I have been looking at Restfulie[1], Subbu Allamaraju[2], and the Starbucks[3] example, and would like to discuss a similar typed approach in C#.
I am experimenting with an actual implementation and would like some feedback before getting too far :-)
Thanks, Jørn
[1] http://github.com/caelum/restfulie
[2] http://www.infoq.com/articles/subbu-allamaraju-rest
[3] http://www.infoq.com/articles/webber-rest-workflow
Service example documentation
In order to discuss a REST client we need a service example. My first use case is a movie shop where we can search for movies in a specific category. To do so the shop has published a single search service URL template: http://movies.org/movies?category={category}.
The shop also publishes three resource mime types:
// Example "application/vnd.movies.movie+xml"
<Movie>
<Self href="http://movies.org/movies/91"/>
<Title>Strange Dawn</Title>
<Category>Thriller</Category>
<Director href="http://movies.org/persons/47"/>
</Movie>
// Example "application/vnd.movies.movie-collection+xml"
<Movies>
<Self href="http://movies.org/movies?category=Thriller"/>
<Movie>
<Title>Strange Dawn</Title>
<Self href="http://movies.org/movies/91"/>
</Movie>
<Movie>...</Movie>
<Movie>...</Movie>
</Movies>
// Example "application/vnd.movies.person+xml"
<Person>
<Self href="http://movies.org/persons/47"/>
<Name>Richard Strangelove</Name>
<Photo href="http://facebook.com/photos/hh31y1"/>
</Person>
Comments
- I have avoided Atom Links since, in my experience, these don't serialize well in the C# standard XML serializer. You could, however, create your own serializer, so this is not an important restriction.
- Notice how the person type has external references :-)
Code example - Searching
The cleanest client usage I can come up with is:
// A link (template). This should be fetched from a configuration file.
Link MoviesSearchLink = new Link("http://movies.org/movies?category={category}");
// Anonymous class with search parameters. Reflection is used to extract values.
// This is about the simplest way to write a "fixed hashmap" in C#
var movieSearchParameter = new { category = "Thriller" };
// Get resource stored at the link endpoint
MovieCollection movies = MoviesSearchLink.Get<MovieCollection>(movieSearchParameter);
// Iterate over all movies and print title
foreach (Movie movie in movies)
Console.WriteLine("Title: " + movie.Title);
Comments:
- A Link is untyped. We do not know what lies at the end of it.
- A link knows how to merge parameters into URL templates.
- The result of GETing a link is typed. The actual type is defined by the returned mime type.
- In order to do something useful with the search we must assume that it returns a MovieCollection. Hence the generic type specifier in the Get<T>() method. This is a priori information which I cannot see how to code without.
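To make the parameter merging concrete, here is a minimal sketch of how a Link could expand its URL template from the public properties of an anonymous parameter object via reflection. The Link class name and the {name} template syntax come from the proposal above; this particular implementation (regex-based substitution) is my own assumption, not part of it.

```csharp
using System;
using System.Text.RegularExpressions;

public class Link
{
    private readonly string template;

    public Link(string template)
    {
        this.template = template;
    }

    // Replace each {name} placeholder with the value of the matching
    // public property on the parameter object, URL-escaping the value.
    public string Expand(object parameters)
    {
        return Regex.Replace(template, @"\{(\w+)\}", match =>
        {
            string name = match.Groups[1].Value;
            var property = parameters.GetType().GetProperty(name);
            if (property == null)
                throw new ArgumentException("Missing template parameter: " + name);
            return Uri.EscapeDataString(property.GetValue(parameters, null).ToString());
        });
    }
}
```

With this sketch, `new Link("http://movies.org/movies?category={category}").Expand(new { category = "Thriller" })` produces the concrete search URL.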
Parsing resources
One piece of magic is how Get<MovieCollection>(params) knows how to convert the bytes returned from the endpoint to a MovieCollection. For this we create a MimeTypeRegistry:
MimeTypeRegistry.Register<MovieCollection, MovieCollectionBuilder>("application/vnd.movies.movie-collection");
which is equal to:
MimeTypeRegistry.Register(typeof(MovieCollection), typeof(MovieCollectionBuilder), "application/vnd.movies.movie-collection");
This means: whenever we must parse a specific mime type, we look up a builder in the registry and use it to parse the returned resource representation.
The typed Get<MovieCollection>(params) method GETs the resource data, instantiates the corresponding builder, verifies that the built object type matches the requested one, and returns the built object.
Comments:
- This is static typing, which RESTafarians seem to shy away from. But the type depends on the returned resource, _not_ the URL. So to my knowledge this is fine.
- It is not required to use the type safe Get<T>(); you could also call Get(), which returns an object. The actual returned type then depends solely on the mime type of the resource, and it is up to the programmer to decide what to do with it.
- I am quite sure you can write some pretty generic XML builders without much overhead.
- This is not limited to XML, you could add image/jpeg and other well known mime types. You just need to supply a proper builder.
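A registry along the lines described could be sketched as below. The Register signatures match the proposal; the IBuilder interface and the Parse method are assumptions I've added to show how a response's Content-Type would drive dispatch.

```csharp
using System;
using System.Collections.Generic;

// A builder turns a raw representation (the response body) into a typed object.
public interface IBuilder
{
    object Build(byte[] body);
}

public static class MimeTypeRegistry
{
    // Maps a mime type to (resource type, builder type).
    private static readonly Dictionary<string, KeyValuePair<Type, Type>> entries =
        new Dictionary<string, KeyValuePair<Type, Type>>();

    public static void Register<TResource, TBuilder>(string mimeType)
        where TBuilder : IBuilder, new()
    {
        Register(typeof(TResource), typeof(TBuilder), mimeType);
    }

    public static void Register(Type resourceType, Type builderType, string mimeType)
    {
        entries[mimeType] = new KeyValuePair<Type, Type>(resourceType, builderType);
    }

    // Look up the builder for a response's Content-Type and parse the body.
    public static object Parse(string mimeType, byte[] body)
    {
        KeyValuePair<Type, Type> entry;
        if (!entries.TryGetValue(mimeType, out entry))
            throw new NotSupportedException("No builder registered for " + mimeType);
        var builder = (IBuilder)Activator.CreateInstance(entry.Value);
        return builder.Build(body);
    }
}
```

Get<T>() would then call Parse with the Content-Type of the response and cast (or fail) against T; the untyped Get() simply returns whatever Parse produced.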
Code example - Getting sub-resources
Now we want to get information about the director of the movie:
// One of the returned self links from the search query
Link movieLink = movies[0].Self;
// Get the actual movie
Movie movie = movieLink.Get<Movie>();
// Get the director
MoviePerson director = movie.Director.Get<MoviePerson>();
Comments:
- There are no hard coded links here.
- The only a priori information we use is the knowledge of the types of the referenced resources. These types are documented in the mime type in which the links are used.
Versioning
Now our wonderful movie shop decides to sell and rate movies. They do their own selling, but use the famous ratings.org service to rate their movies. So the shop creates a new version of the movie mime type:
// Example "application/vnd.movies.movie.v2+xml"
<Movie>
<Self href="http://movies.org/movies/91"/>
<Title>Strange Dawn</Title>
<Category>Thriller</Category>
<Director href="http://movies.org/persons/47"/>
<Orders href="http://movies.org/movies/91/orders"/>
<Ratings href="http://ratings.org/ratings?item=http%3a%2f%2fmovies.org%2fmovies%2f91"/>
</Movie>
In order to serve both old and new clients the shop decides to return the initial movie mime type by default. Newer clients should use the Accept header to indicate that they want the new version. The same goes for the movies collection type.
Our existing client code works happily as it did before.
Code example - A new client
The new client code would look like this:
// A link (template). This should be fetched from a configuration file.
Link MoviesSearchLink = new Link("http://movies.org/movies?category={category}");
// Anonymous class with search parameters. Reflection is used to extract values.
// This is about the simplest way to write a "fixed hashmap" in C#
var movieSearchParameter = new { category = "thriller" };
// Setting up the Accept header
var movieSearchHeaders = new { Accept = "application/vnd.movies.movie-collection.v2" }
// Get resource stored at the link endpoint
MovieCollection movies = MoviesSearchLink.Get<MovieCollection>(movieSearchParameter, movieSearchHeaders);
// Iterate over all movies and print title
foreach (Movie movie in movies)
Console.WriteLine("Title: " + movie.Title);
Code example - Buying movies
Now we have a movie which has an embedded link to its sales orders. To buy a movie we post a new order to the sales order collection:
// One of the returned self links from the search query
Link movieLink = movies[0].Self;
// Get the actual movie
Movie movie = movieLink.Get<Movie>();
// Create a new order request
MovieOrderRequest orderRequest = new MovieOrderRequest(movie.Self, 1 /* quantity */);
// Post the order request to the order collection
// Assume it returns the newly created order
MovieOrder order = movie.Orders.Post(orderRequest);
Comments:
- The POST results in a redirect to the newly created order. The system GETs this new order and returns it. This means we lose the intermediate data returned from the POST.
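The POST-then-GET behavior could be sketched as follows with HttpWebRequest (the HTTP API available in 2009-era .NET): POST the payload, read the Location of the created response, then GET the new resource. The helper name is hypothetical; the proposal's Post() would wrap something like this.

```csharp
using System;
using System.IO;
using System.Net;

public static class RedirectingPost
{
    public static byte[] PostAndFollow(string uri, byte[] payload, string mimeType)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = mimeType;
        request.AllowAutoRedirect = false;        // handle the redirect ourselves
        using (var stream = request.GetRequestStream())
            stream.Write(payload, 0, payload.Length);

        string location;
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // The response body here (the "intermediate data") is discarded,
            // which is exactly the loss the comment above points out.
            location = response.Headers[HttpResponseHeader.Location];
        }
        if (location == null)
            throw new InvalidOperationException("POST did not return a Location header");

        // GET the newly created resource and return its representation.
        var get = (HttpWebRequest)WebRequest.Create(location);
        using (var getResponse = (HttpWebResponse)get.GetResponse())
        using (var source = getResponse.GetResponseStream())
        using (var body = new MemoryStream())
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                body.Write(buffer, 0, read);
            return body.ToArray();
        }
    }
}
```

A framework could avoid the data loss by also running the POST response body through the mime-type registry and handing both objects back to the caller.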
Other verbs
The Link class has built-in support for GET/PUT/POST/DELETE. Other verbs can be executed through a generic "Request" method:
SomeType x = someLink.Request("SOMEVERB", somePayload);
Caching
The Link class and its associated methods should of course respect ETag, If-Modified-Since, and so on. This would require the framework to be initialized with a cache implementation of some kind.
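A conditional GET along these lines could look like this sketch. The cachedETag/cachedDate parameters stand in for whatever cache implementation the framework is initialized with; the class and method names are assumptions for illustration.

```csharp
using System;
using System.IO;
using System.Net;

public static class ConditionalGet
{
    // Returns true with a fresh body, or false when the server answers
    // 304 Not Modified and the cached representation can be reused.
    public static bool TryFetch(string uri, string cachedETag, DateTime cachedDate,
                                out byte[] body)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        if (cachedETag != null)
            request.Headers[HttpRequestHeader.IfNoneMatch] = cachedETag;
        request.IfModifiedSince = cachedDate;     // formats the HTTP date for us
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var source = response.GetResponseStream())
            using (var buffer = new MemoryStream())
            {
                byte[] chunk = new byte[4096];
                int read;
                while ((read = source.Read(chunk, 0, chunk.Length)) > 0)
                    buffer.Write(chunk, 0, read);
                body = buffer.ToArray();          // fresh representation received
                return true;
            }
        }
        catch (WebException ex)
        {
            // HttpWebRequest surfaces 304 as an exception.
            var http = ex.Response as HttpWebResponse;
            if (http != null && http.StatusCode == HttpStatusCode.NotModified)
            {
                body = null;                      // reuse the cached representation
                return false;
            }
            throw;
        }
    }
}
```

The cache would also need to record the ETag and Last-Modified values from each successful response so later requests can replay them.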
Error handling
I would suggest using exceptions for error handling.
Jørn:
This line stands out first: "I have avoided Atom Links since, in my
experience, these don't serialize well in the C# standard XML serializer."
My advice is to be wary of serializers when coding for HTTP. There are so
many variances with incoming responses I think you'll find it a real task to
build apps based on successfully converting incoming response bodies into
code-able objects. Using serializers also tends to lead programmers to
tight-binding between the code and the HTTP response body. This means
changes in the body may blow the serializer code. This is especially true
when working with "generic" media-types such as XML and JSON, etc. since
they have very little semantic value built into them.
That leads me to another bit of advice I'll offer: think about link
semantics from the very start when creating your library. The Web browser
client works because the link semantics of the HTML media-type are
well-defined (and pretty narrow). There are a limited number of link
elements. Some are in-doc links (IMG, LINK, SCRIPT, etc.), some are
navigational links (A, FORM). All, except FORM, are limited to using the GET
method. It's the semantic model of HTML that allows browsers to properly
handle HTTP responses from previously unknown locations and still provide
full functionality - even a decade after the semantics were defined. I
suspect you'll find that building a client to properly locate, identify,
and understand the link semantics of a single media type
(application/vnd.movies.movie+xml) is challenging by itself. Building one
that handles multiple media-types just adds to the fun<g>.
I also encourage you to treat HTTP control data (headers) as top-level
programming objects in your library. Allowing programmers to decorate
requests with control data (content-encoding, media-type, authorization,
cache-control, etc.) and have direct access to the control data on responses
will improve the flexibility of any client/server built w/ your library.
In the big picture, I prefer looking at HTTP programming from the
stand-point of "resource programming." I look for a code library that lets
me define a resource, associate one or more URIs with that resource, handle
multiple representations of the resource (for both requests and response
bodies), and properly decorate requests and responses w/ control data. I
also want to make sure it handles mime-types properly (conneg included),
conditional requests (GET and PUT), and supports flexible authentication
models.
FWIW, I started work on a REST-ful HTTP C# framework a while back [1]. It's
been dormant for quite some time as the current version works well for me,
but there are lots of places it needs work. I've also built an HTTP
utilities library [2] with most all the bits I need for building REST-ful
HTTP apps. It's smaller and lighter than my 'framework' library. I mention
these as some of the code there might be helpful and/or act as a cautionary
tale as you work on your own projects.
mca
http://amundsen.com/blog/
[1] http://exyus.com
[2]
http://code.google.com/p/mikeamundsen/source/browse/#svn/trunk/Amundsen.Utilities
Just a couple of thoughts out loud.
I like the mime-type mapping framework; that would be a nice, generic piece of code all on its own, and it should marshal both ways. I think you will need it in many places, not simply for GETs, but for pushing data as well. Another place it would be valuable: most any HTTP result can have a typed body. So it's easy to envision that when you get some warning or other error message (perhaps a redirect), you can leverage the ability to send more interesting data than what is in the headers alone, and have that data marshaled for you automagically.
I think the Link infrastructure should be aware of things like redirects, and expose them. If you hit a URI that gets a permanent or temporary redirect, it would be nice for the client to honor that at least somewhat transparently. By that I don't mean silently following the redirect; I mean that if it sees you hitting that URI again, it will simply jump straight to the final destination.
For the trivial case, I don't see a reason why you should have to set the Accept header -- the framework should do that for you. It should set the content type properly on the way out, and set the Accept header as well, since it "knows" you want the Movie info.
The common headers should be first class. I shouldn't have to set an "If-Modified" header myself; I should be able to call link.setIfModifiedSince(myDate), so I don't have to marshal the dates myself.
Going back to the mime mapping: that should be controllable at a higher, "global"/framework level, but also at the request level. I can easily see having to map "text/xml" to something different at the request level depending on the link.
So, anyway, just some quick thoughts. Regards, Will Hartung (willh@...)
Mike,
It sounds like your library is quite similar to the one we developed for our application as well. MindTouch Dream [1] is a .NET framework for building portable web-services that can run as a standalone process, windows service, or natively under IIS. It also runs under Linux using Mono. Dream is used by quite a few sites including Mozilla [2], Novell [3], and Washington Post [4]. The license is Apache 2.0 for easy reuse.
It's quite fun to build an entire application with a RESTful interface. :)
- Steve
[1] http://developer.mindtouch.com/Dream
[2] https://developer.mozilla.org/
[3] http://monodevelop.com/
[4] http://whorunsgov.com/
--------------
Steve G. Bjorg
http://mindtouch.com
http://twitter.com/bjorg
irc.freenode.net #mindtouch
On Dec 9, 2009, at 2:58 PM, mike amundsen wrote:
>
> Jørn:
>
> This line stands out first: "I have avoided Atom Links since, in my experience, these don't serialize well in the C# standard XML serializer."
> My advice is to be wary of serializers when coding for HTTP. There are so many variances with incoming responses I think you'll find it a real task to build apps based on successfully converting incoming response bodies into code-able objects. Using serializers also tends to lead programmers to tight-binding between the code and the HTTP response body. This means changes in the body may blow the serializer code. This is especially true when working with "generic" media-types such as XML and JSON, etc. since they have very little semantic value built into them.
>
> That leads me to another bit of advice I'll offer: think about link semantics from the very start when creating your library. The Web browser client works because the link semantics of the HTML media-type are well-defined (and pretty narrow). There are a limited number of link elements. Some are in-doc links (IMG, LINK, SCRIPT, etc.), some are navigational links (A, FORM). All, except FORM, are limited to using the GET method. It's the semantic model of HTML that allows browsers to properly handle HTTP responses from previously unknown locations and still provide full functionality - even a decade after the semantics where defined. I suspect you'll find that building a client to properly locate, identify, and understand the link semantics of a single media type (application/vnd.movies.movie+xml) is challenging by itself. Building one that handles multiple media-types just adds to the fun<g>.
>
> I also encourage you to treat HTTP control data (headers) as top-level programming objects in your library. Allowing programmers to decorate requests with control data (content-encoding, media-type, authorization, cache-control, etc.) and have direct access to the control data on responses will improve the flexibility of any client/server built w/ your library.
>
> In the big picture, I prefer looking at HTTP programming from the stand-point of "resource programming." I look for a code library that lets me define a resource, associate or or more URIs with that resource, handle multiple representations of the resource (for both requests and response bodies), and properly decorate requests and responses w/ control data. I also want to make sure it handles mime-types properly (conneg included), conditional requests (GET and PUT), and supports flexible authentication models.
>
> FWIW, I started work on a REST-ful HTTP C# framework a while back [1]. It's been dormant for quite some time as the current version works well for me, but there are lots of places it needs work. I've also built an HTTP utilities library [2] with most all the bits I need for building REST-ful HTTP apps. It's smaller and lighter than my 'framework' library. I mention these as some of the code there might be helpful and/or act as a cautionary tale as you work on your own projects.
>
> mca
> http://amundsen.com/blog/
>
> [1] http://exyus.com
> [2] http://code.google.com/p/mikeamundsen/source/browse/#svn/trunk/Amundsen.Utilities
>
>
>
> On Wed, Dec 9, 2009 at 17:00, Jørn Wildt <jw@fjeldgruppen.dk> wrote:
>
>
> There has been a lot of discussion about the right way to implement a REST service, but less focus on how you would actually code a client. I have been looking at RESTFulie[1], Subbu Alamarju[2], and the Starbucks[3] example, and would like to discuss a similar typed approach in C#.
>
> I am experimenting with an actual implementation and would like some feedback before getting too far :-)
>
> Thanks, Jørn
>
>
> [1] http://github.com/caelum/restfulie
> [2] http://www.infoq.com/articles/subbu-allamaraju-rest
> [3] http://www.infoq.com/articles/webber-rest-workflow
>
>
> Service example documentation
> In order to discuss a REST client we need a service example. My first use case is a movie shop where we can search for movies in a specific category. To do so the shop has published a single search service URL template: http://movies.org/movies?category={category}.
>
> The shop also publishes three ressource mime types:
>
> // Example "application/vnd.movies.movie+xml"
> <Movie>
> <Self href="http://movies.org/movies/91"/>
> <Title>Strange Dawn</Title>
> <Category>Thriller</Category>
> <Director href="http://movies.org/persons/47"/>
> </Movie>
>
> // Example "application/vnd.movies.movie-collection+xml"
> <Movies>
> <Self href="http://movies.org/movies?category=Thriller"/>
> <Movie>
> <Title>Strange Dawn</Title>
> <Self href="http://movies.org/movies/91"/>
> </Movie>
> <Movie>...</Movie>
> <Movie>...</Movie>
> </Movies>
>
> // Example "application/vnd.movies.person+xml"
> <Person>
> <Self href="http://movies.org/persons/47"/>
> <Name>Richard Strangelove</Name>
> <Photo href="http://facebook.com/photos/hh31y1"/>
> </Person>
>
> Comments
>
> - I have avoided Atom Links since, in my experience, these don't serialize well in the C# standard XML serializer. You could although create your own serializer, so this is not an important restriction.
>
> - Notice how the person type has external references :-)
>
>
> Code example - Searching
> The cleanest client usage I can come up with is:
>
> // A link (template). This should be fetched from a configuration file.
> Link MoviesSearchLink = new Link("http://movies.org/movies?category={category}");
>
> // Anonymous class with search parameters. Reflection is used to extract values.
> // This is about the simplest way to write a "fixed hashmap" in C#
> var movieSearchParameter = new { category = "Thriller" };
>
> // Get ressource stored at the link endpoint
> MovieCollection movies = MoviesSearchLink.Get<MovieCollection>(movieSearchParameter);
>
> // Iterate over all movies and print title
> foreach (Movie movie in movies)
> Console.WriteLine("Title: " + movie.Title);
> Comments:
>
> - A Link is untyped. We do not know what lies at the end of it.
>
> - A link knows how to merge parameters into URL templates.
>
> - The result of GETing a link is typed. The actual type is defined by the returned mime type.
>
> - In order to do something usefull with the search we must assume that it returns a MovieCollection. Hence the generic type specifier in the Get<T>() method. This is apriori information which I cannot see how to code without.
>
>
> Parsing ressources
> One piece of magic is how Get<MovieCollection>(params) knows how to convert the bytes returned from the endpoint to a MovieCollection. For this we create a MimeTypeRegistry:
>
> MimeTypeRegistry.Register<MovieCollection, MovieCollectionBuilder>("application/vnd.movies.movie-collection");
>
> which is equal to:
>
> MimeTypeRegistry.Register(typeof(MovieCollection), typeof(MovieCollectionBuilder), "application/vnd.movies.movie-collection");
>
> This means: when ever we must parse a specific mime type, we look up a builder in the registry and uses this to parse the returned ressource representation.
>
> The typed Get<MovieCollection>(params) method GETs the ressource data, instantiates the corresponding builder, verifies that the built object type matches the requested and returns the built object.
>
> Comments:
>
> - This is static typing which RESTafarians seems to shy away from. But the type depends on the returned ressource, _not_ the URL. So to my knowledge this is fine.
>
> - It is not required to use the type safe Get<T>(), you could also call Get() which returns an object. The actual returned type then depends solely on the mime type of the ressource, and it is up to the programmer to decide what to do with it.
>
> - I am quite sure you can write some pretty generic XML builders without much overhead.
>
> - This is not limited to XML, you could add image/jpeg and other well known mime types. You just need to supply a proper builder.
>
>
> Code example - Getting sub-resources
> Now we want to get information about the director of the movie:
>
> // One of the returned self links from the search query
> Link movieLink = movies[0].Self;
>
> // Get the actual movie
> Movie movie = movieLink.Get<Movie>();
>
> // Get the director
> MoviePerson director = movie.Director.Get<MoviePerson>();
>
> Comments:
>
> - There are no hard coded links here.
>
> - The only a priori information we use is the knowledge of the types of the referenced resources. These types are documented in the mime type in which the links are used.
>
>
> Versioning
> Now our wonderful movie shop decides to sell and rate movies. They do their own selling, but use the famous ratings.org service to rate their movies. So the shop creates a new version of the movie mime type:
>
> // Example "application/vnd.movies.movie.v2+xml"
> <Movie>
> <Self href="http://movies.org/movies/91"/>
> <Title>Strange Dawn</Title>
> <Category>Thriller</Category>
> <Director href="http://movies.org/persons/47"/>
> <Orders href="http://movies.org/movies/91/orders"/>
> <Ratings href="http://ratings.org/ratings?item=http%3a%2f%2fmovies.org%2fmovies%2f91"/>
> </Movie>
>
> In order to service both old and new clients the shop decides to return the initial movie mime type by default. Newer clients should use the Accept header to indicate that they want the new version. The same goes for the movies collection type.
>
> Our existing client code works happily as it did before.
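Outside the hypothetical framework, opting in to the v2 representation is just a matter of setting the Accept header; here is a sketch using plain .NET's HttpWebRequest (the URL and media type are the examples from this thread, with the +xml suffix assumed to match the other types):

```csharp
using System.Net;

public static class MovieClient
{
    // Hypothetical URL and media type from this thread; nothing is
    // sent over the wire until GetResponse() is called.
    public static HttpWebRequest CreateV2SearchRequest(string url)
    {
        var request = (HttpWebRequest) WebRequest.Create(url);
        // Newer clients opt in to the v2 representation; old clients
        // omit this header and keep receiving the initial mime type.
        request.Accept = "application/vnd.movies.movie-collection.v2+xml";
        return request;
    }
}
```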
>
>
> Code example - A new client
> The new client code would look like this:
>
> // A link (template). This should be fetched from a configuration file.
> Link MoviesSearchLink = new Link("http://movies.org/movies?category={category}");
>
> // Anonymous class with search parameters. Reflection is used to extract values.
> // This is about the simplest way to write a "fixed hashmap" in C#
> var movieSearchParameter = new { category = "thriller" };
>
> // Setting up the Accept header
> var movieSearchHeaders = new { Accept = "application/vnd.movies.movie-collection.v2" };
>
> // Get resource stored at the link endpoint
> MovieCollection movies = MoviesSearchLink.Get<MovieCollection>(movieSearchParameter, movieSearchHeaders);
>
> // Iterate over all movies and print title
> foreach (Movie movie in movies)
> Console.WriteLine("Title: " + movie.Title);
>
> Code example - Buying movies
> Now we have a movie which has an embedded link to its sales orders. To buy a movie we post a new order to the sales order collection:
>
> // One of the returned self links from the search query
> Link movieLink = movies[0].Self;
>
> // Get the actual movie
> Movie movie = movieLink.Get<Movie>();
>
> // Create a new order request
> MovieOrderRequest orderRequest = new MovieOrderRequest(movie.Self, 1 /* quantity */);
>
> // Post the order request to the order collection
> // Assume it returns the newly created order
> MovieOrder order = movie.Orders.Post(orderRequest);
>
> Comments:
>
> - The POST results in a redirect to the newly created order. The system GETs this new order and returns it. This means we lose the intermediate data returned from the POST.
>
>
> Other verbs
> The Link class has built-in support for GET/PUT/POST/DELETE. Other verbs can be executed through a generic "Request" method:
>
> SomeType x = someLink.Request("SOMEVERB", somePayload);
>
>
> Caching
> The Link class and its associated methods should of course respect ETag, If-Modified-Since, and so on. This would require the framework to be initialized with a cache implementation of some kind.
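A sketch of the conditional-GET half of such a cache: remember the ETag last seen for each URI and send it as If-None-Match on the next request. Storing the cached bodies and handling 304 Not Modified responses is omitted; the class name is hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Sketch only: remembers the ETag last seen for a URI and attaches
// If-None-Match to the next request for that URI.
public class EtagCache
{
    private readonly Dictionary<string, string> etags = new Dictionary<string, string>();

    // Call this with the ETag header of each successful response.
    public void Remember(string uri, string etag)
    {
        etags[uri] = etag;
    }

    // Build a GET request, conditional if we have an ETag for the URI.
    public HttpWebRequest CreateConditionalGet(string uri)
    {
        var request = (HttpWebRequest) WebRequest.Create(uri);
        string etag;
        if (etags.TryGetValue(uri, out etag))
            request.Headers[HttpRequestHeader.IfNoneMatch] = etag;
        return request;
    }
}
```

A 304 Not Modified response would then be answered from the cached body instead of the wire.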
>
>
> Error handling
> I would suggest using exceptions for error handling.
>
>
>
>
>
>
>
Steve:
I've poked around in your code, then <g>!
mca
http://amundsen.com/blog/
On Wed, Dec 9, 2009 at 19:20, Steve Bjorg <steveb@...> wrote:
> Mike,
>
> It sounds like your library is quite similar to the one we developed for
> our application as well. MindTouch Dream [1] is a .NET framework for
> building portable web-services that can run as a standalone process, Windows
> service, or natively under IIS. It also runs under Linux using Mono. Dream
> is used by quite a few sites including Mozilla [2], Novell [3], and
> Washington Post [4]. The license is Apache 2.0 for easy reuse.
>
> It's quite fun to build an entire application with a RESTful interface. :)
>
> - Steve
>
> [1] http://developer.mindtouch.com/Dream
> [2] https://developer.mozilla.org/
> [3] http://monodevelop.com/
> [4] http://whorunsgov.com/
>
> --------------
> Steve G. Bjorg
> http://mindtouch.com
> http://twitter.com/bjorg
> irc.freenode.net #mindtouch
>
> On Dec 9, 2009, at 2:58 PM, mike amundsen wrote:
>
>
>
> Jørn:
>
> This line stands out first: "I have avoided Atom Links since, in my
> experience, these don't serialize well in the C# standard XML serializer."
> My advice is to be wary of serializers when coding for HTTP. There are so
> many variances with incoming responses I think you'll find it a real task to
> build apps based on successfully converting incoming response bodies into
> code-able objects. Using serializers also tends to lead programmers to
> tight-binding between the code and the HTTP response body. This means
> changes in the body may blow the serializer code. This is especially true
> when working with "generic" media-types such as XML and JSON, etc. since
> they have very little semantic value built into them.
>
> That leads me to another bit of advice I'll offer: think about link
> semantics from the very start when creating your library. The Web browser
> client works because the link semantics of the HTML media-type are
> well-defined (and pretty narrow). There are a limited number of link
> elements. Some are in-doc links (IMG, LINK, SCRIPT, etc.), some are
> navigational links (A, FORM). All, except FORM, are limited to using the GET
> method. It's the semantic model of HTML that allows browsers to properly
> handle HTTP responses from previously unknown locations and still provide
> full functionality - even a decade after the semantics were defined. I
> suspect you'll find that building a client to properly locate, identify,
> and understand the link semantics of a single media type
> (application/vnd.movies.movie+xml) is challenging by itself. Building one
> that handles multiple media-types just adds to the fun<g>.
>
> I also encourage you to treat HTTP control data (headers) as top-level
> programming objects in your library. Allowing programmers to decorate
> requests with control data (content-encoding, media-type, authorization,
> cache-control, etc.) and have direct access to the control data on responses
> will improve the flexibility of any client/server built w/ your library.
>
> In the big picture, I prefer looking at HTTP programming from the
> stand-point of "resource programming." I look for a code library that lets
> me define a resource, associate one or more URIs with that resource, handle
> multiple representations of the resource (for both requests and response
> bodies), and properly decorate requests and responses w/ control data. I
> also want to make sure it handles mime-types properly (conneg included),
> conditional requests (GET and PUT), and supports flexible authentication
> models.
>
> FWIW, I started work on a REST-ful HTTP C# framework a while back [1].
> It's been dormant for quite some time as the current version works well for
> me, but there are lots of places it needs work. I've also built an HTTP
> utilities library [2] with most all the bits I need for building REST-ful
> HTTP apps. It's smaller and lighter than my 'framework' library. I mention
> these as some of the code there might be helpful and/or act as a cautionary
> tale as you work on your own projects.
>
> mca
> http://amundsen.com/blog/
>
> [1] http://exyus.com
> [2]
> http://code.google.com/p/mikeamundsen/source/browse/#svn/trunk/Amundsen.Utilities
>
>
>
> On Wed, Dec 9, 2009 at 17:00, Jørn Wildt <jw@...> wrote:
>
>>
>>
>> There has been a lot of discussion about the right way to implement a REST
>> service, but less focus on how you would actually code a client. I have been
>> looking at RESTFulie[1], Subbu Allamaraju[2], and the Starbucks[3] example,
>> and would like to discuss a similar typed approach in C#.
>>
>> I am experimenting with an actual implementation and would like some
>> feedback before getting too far :-)
>>
>> Thanks, Jørn
>>
>>
>> [1] http://github.com/caelum/restfulie
>> [2] http://www.infoq.com/articles/subbu-allamaraju-rest
>> [3] http://www.infoq.com/articles/webber-rest-workflow
>>
>>
>> *Service example documentation*
>> In order to discuss a REST client we need a service example. My first use
>> case is a movie shop where we can search for movies in a specific category.
>> To do so the shop has published a single search service URL template:
>> http://movies.org/movies?category={category}.
>>
>>
>> The shop also publishes three resource mime types:
>>
>> // Example "application/vnd.movies.movie+xml"
>> <Movie>
>> <Self href="http://movies.org/movies/91"/>
>> <Title>Strange Dawn</Title>
>> <Category>Thriller</Category>
>> <Director href="http://movies.org/persons/47"/>
>> </Movie>
>>
>> // Example "application/vnd.movies.movie-collection+xml"
>> <Movies>
>> <Self href="http://movies.org/movies?category=Thriller"/>
>> <Movie>
>> <Title>Strange Dawn</Title>
>> <Self href="http://movies.org/movies/91"/>
>> </Movie>
>> <Movie>...</Movie>
>> <Movie>...</Movie>
>> </Movies>
>>
>> // Example "application/vnd.movies.person+xml"
>> <Person>
>> <Self href="http://movies.org/persons/47"/>
>> <Name>Richard Strangelove</Name>
>> <Photo href="http://facebook.com/photos/hh31y1"/>
>> </Person>
>>
>> Comments
>>
>> - I have avoided Atom Links since, in my experience, these don't serialize
>> well in the C# standard XML serializer. You could, however, create your own
>> serializer, so this is not an important restriction.
>>
>> - Notice how the person type has external references :-)
>>
>>
>> *Code example - Searching*
>> The cleanest client usage I can come up with is:
>>
>> // A link (template). This should be fetched from a configuration file.
>> Link MoviesSearchLink = new Link("http://movies.org/movies?category={category}");
>>
>> // Anonymous class with search parameters. Reflection is used to extract
>> values.
>> // This is about the simplest way to write a "fixed hashmap" in C#
>> var movieSearchParameter = new { category = "Thriller" };
>>
>> // Get resource stored at the link endpoint
>> MovieCollection movies =
>> MoviesSearchLink.Get<MovieCollection>(movieSearchParameter);
>>
>> // Iterate over all movies and print title
>> foreach (Movie movie in movies)
>> Console.WriteLine("Title: " + movie.Title);
Well, as we're in the middle of citing the .NET solutions that help with REST
architectures, we may as well talk about www.openrasta.com. It's used by many
OSS developers, but as it's not a commercial tool as such, I don't have
links to give you that I have the right to publish :) It's MIT licensed, it
runs on IIS, as a Windows service, as a standalone process, in-memory, and
there's even a branch running in Silverlight.
Judging from Google alone, I'll happily declare we're the most talked-about
OSS REST/HTTP-focused server framework on .NET. Ah, the joys of marketing!
There's a client library in the pipeline that reuses the extensible
pipeline and codecs existing in the server framework, but that won't happen
for a few iterations.
To go back to your point, I second what Mike is saying: define your media
type by studying your link relationships. A lot can happen in your
user-agent when you react to certain link types or certain media types,
making the application very interactive.
And I'll also say what I said before: versioning through the media type
shouldn't be a habit. You only need versioning of your document types when
you're restricting yourself to XML serializers or, worse, adding schema
enforcement through XSD. If you remove those restrictions and simply try to
parse just what is needed from an XML feed for an operation to succeed,
everything will become much, much easier.
Seb
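The "parse just what is needed" approach can be sketched with a plain XmlDocument: read only the elements the current operation requires and ignore the rest, so v1 and v2 movie documents from the thread are handled by the same code. The reader class name is hypothetical:

```csharp
using System;
using System.Xml;

// Sketch of a "loose" reader: pull out only the elements the current
// operation needs and ignore everything else, so documents gaining
// new elements (Orders, Ratings, ...) keep working unchanged.
public static class LooseMovieReader
{
    public static string ReadTitle(string xml)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xml);
        XmlNode title = doc.SelectSingleNode("/Movie/Title");
        return title == null ? null : title.InnerText;
    }
}
```

Because unknown sibling elements are simply ignored, no media-type version bump is needed for additive changes.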
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On
Behalf Of Steve Bjorg
Sent: 10 December 2009 00:20
To: mike amundsen
Cc: Jørn Wildt; rest-discuss
Subject: Re: [rest-discuss] The "purist" C# REST client?
-1. Users (myself included) are used to "Reply" going just to the author of an email and "Reply All" going to everyone on the original. I have always had the converse annoyance of intending to send a private reply on a list configured contrary to this expectation and having that accidentally go out to everyone. I personally find this accidental privacy leak to be a bigger headache than having to delete a duplicate email. I did, however, remove your email address from my response, so hopefully you only get one copy. :-)
Jon
________________________________
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jørn Wildt
Sent: Wednesday, December 09, 2009 12:30 AM
To: rest-discuss
Subject: [rest-discuss] OT: mail list management
Can anyone tell the reason for letting "reply" only reply to the posting author on this list? The result is that most people do a "reply to all" which results in two duplicate replies on my mail client most of the time. Can it be changed?
Thanks, Jørn
> Using serializers also tends to lead programmers to
> tight-binding between the code and the HTTP
> response body. This means
> changes in the body may blow the serializer code.
Yes, I understand this. From a framework point of view it's left to the
programmer to decide what kind of object-builder to register with a certain
mime type. It can be a very specific parser/builder (like the movie example,
or an image file) or a very loose builder (like a generic XML DOM object).
> I suspect you'll find that building a client to properly locate, identify,
> and understand the link semantics of a single media type
> (application/vnd.movies.movie+xml) is challenging by itself.
For a well documented and stable mime type it should not be a problem. But
maybe I have something to learn here :-)
> Building one
> that handles multiple media-types just adds to the fun
That's why the framework allows registration of one handler per mime type,
instead of having one handler for all mime types. That makes coding them
quite a bit easier. But maybe I misunderstand you.
> I also encourage you to treat HTTP control data (headers) as top-level
> programming objects in your library
Will do.
Thanks for the feedback, Jørn
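To make the idea concrete, here is one possible (purely illustrative) shape for such a per-mime-type registry in C#; IResourceBuilder and this MimeTypeRegistry variant are invented names for the sketch, not an existing API:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical builder interface: turns a raw response body into a typed object.
public interface IResourceBuilder
{
    object Build(byte[] body);
}

// Hypothetical registry mapping mime types to builder factories.
public static class MimeTypeRegistry
{
    private static readonly Dictionary<string, Func<IResourceBuilder>> Builders =
        new Dictionary<string, Func<IResourceBuilder>>(StringComparer.OrdinalIgnoreCase);

    public static void Register(string mimeType, Func<IResourceBuilder> factory)
    {
        Builders[mimeType] = factory;
    }

    // Look up the builder for a response's Content-Type and run it.
    public static object Parse(string mimeType, byte[] body)
    {
        Func<IResourceBuilder> factory;
        if (!Builders.TryGetValue(mimeType, out factory))
            throw new InvalidOperationException("No builder registered for " + mimeType);
        return factory().Build(body);
    }
}
```

A very specific builder (the movie example) and a very loose one (a generic XML DOM builder) would both fit behind the same interface.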
----- Original Message -----
From: "mike amundsen" <mamund@...>
To: "Jørn Wildt" <jw@...>
Cc: "rest-discuss" <rest-discuss@yahoogroups.com>
Sent: Wednesday, December 09, 2009 11:58 PM
Subject: Re: [rest-discuss] The "purist" C# REST client?
Jørn:
This line stands out first: "I have avoided Atom Links since, in my
experience, these don't serialize well in the C# standard XML serializer."
My advice is to be wary of serializers when coding for HTTP. There are so
many variances with incoming responses I think you'll find it a real task to
build apps based on successfully converting incoming response bodies into
code-able objects. Using serializers also tends to lead programmers to
tight-binding between the code and the HTTP response body. This means
changes in the body may blow the serializer code. This is especially true
when working with "generic" media-types such as XML and JSON, etc. since
they have very little semantic value built into them.
That leads me to another bit of advice I'll offer: think about link
semantics from the very start when creating your library. The Web browser
client works because the link semantics of the HTML media-type are
well-defined (and pretty narrow). There are a limited number of link
elements. Some are in-doc links (IMG, LINK, SCRIPT, etc.), some are
navigational links (A, FORM). All, except FORM, are limited to using the GET
method. It's the semantic model of HTML that allows browsers to properly
handle HTTP responses from previously unknown locations and still provide
full functionality - even a decade after the semantics were defined. I
suspect you'll find that building a client to properly locate, identify,
and understand the link semantics of a single media type
(application/vnd.movies.movie+xml) is challenging by itself. Building one
that handles multiple media-types just adds to the fun<g>.
I also encourage you to treat HTTP control data (headers) as top-level
programming objects in your library. Allowing programmers to decorate
requests with control data (content-encoding, media-type, authorization,
cache-control, etc.) and have direct access to the control data on responses
will improve the flexibility of any client/server built w/ your library.
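One way to make that concrete, sketched against the stock .NET HttpWebRequest; the RequestOptions type here is invented for the example and not from any existing library:

```csharp
using System;
using System.Net;

// Hypothetical typed bag of HTTP control data, instead of raw header strings.
public class RequestOptions
{
    public string Accept { get; set; }
    public string IfNoneMatch { get; set; }         // ETag validator for conditional GET
    public DateTime? IfModifiedSince { get; set; }  // date validator for conditional GET

    // Copy the typed control data onto a concrete HttpWebRequest.
    public void ApplyTo(HttpWebRequest request)
    {
        if (Accept != null)
            request.Accept = Accept;                // restricted header: must use the property
        if (IfNoneMatch != null)
            request.Headers[HttpRequestHeader.IfNoneMatch] = IfNoneMatch;
        if (IfModifiedSince.HasValue)
            request.IfModifiedSince = IfModifiedSince.Value;
    }
}
```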
In the big picture, I prefer looking at HTTP programming from the
stand-point of "resource programming." I look for a code library that lets
me define a resource, associate one or more URIs with that resource, handle
multiple representations of the resource (for both requests and response
bodies), and properly decorate requests and responses w/ control data. I
also want to make sure it handles mime-types properly (conneg included),
conditional requests (GET and PUT), and supports flexible authentication
models.
FWIW, I started work on a REST-ful HTTP C# framework a while back [1]. It's
been dormant for quite some time as the current version works well for me,
but there are lots of places it needs work. I've also built an HTTP
utilities library [2] with most all the bits I need for building REST-ful
HTTP apps. It's smaller and lighter than my 'framework' library. I mention
these as some of the code there might be helpful and/or act as a cautionary
tale as you work on your own projects.
mca
http://amundsen.com/blog/
[1] http://exyus.com
[2]
http://code.google.com/p/mikeamundsen/source/browse/#svn/trunk/Amundsen.Utilities
On Wed, Dec 9, 2009 at 17:00, Jørn Wildt <jw@...> wrote:
>
>
> There has been a lot of discussion about the right way to implement a REST
> service, but less focus on how you would actually code a client. I have
> been
> looking at Restfulie[1], Subbu Allamaraju[2], and the Starbucks[3] example,
> and would like to discuss a similar typed approach in C#.
>
> I am experimenting with an actual implementation and would like some
> feedback before getting too far :-)
>
> Thanks, Jørn
>
>
> [1] http://github.com/caelum/restfulie
> [2] http://www.infoq.com/articles/subbu-allamaraju-rest
> [3] http://www.infoq.com/articles/webber-rest-workflow
>
>
> *Service example documentation*
> In order to discuss a REST client we need a service example. My first use
> case is a movie shop where we can search for movies in a specific
> category.
> To do so the shop has published a single search service URL template:
> http://movies.org/movies?category={category}.
>
>
> The shop also publishes three resource mime types:
>
> // Example "application/vnd.movies.movie+xml"
> <Movie>
> <Self href="http://movies.org/movies/91"/>
> <Title>Strange Dawn</Title>
> <Category>Thriller</Category>
> <Director href="http://movies.org/persons/47"/>
> </Movie>
>
> // Example "application/vnd.movies.movie-collection+xml"
> <Movies>
> <Self href="http://movies.org/movies?category=Thriller"/>
> <Movie>
> <Title>Strange Dawn</Title>
> <Self href="http://movies.org/movies/91"/>
> </Movie>
> <Movie>...</Movie>
> <Movie>...</Movie>
> </Movies>
>
> // Example "application/vnd.movies.person+xml"
> <Person>
> <Self href="http://movies.org/persons/47"/>
> <Name>Richard Strangelove</Name>
> <Photo href="http://facebook.com/photos/hh31y1"/>
> </Person>
>
> Comments
>
> - I have avoided Atom Links since, in my experience, these don't serialize
> well in the C# standard XML serializer. You could, however, create your own
> serializer, so this is not an important restriction.
>
> - Notice how the person type has external references :-)
>
>
> *Code example - Searching*
> The cleanest client usage I can come up with is:
>
> // A link (template). This should be fetched from a configuration file.
> Link MoviesSearchLink = new Link("
> http://movies.org/movies?category={category}");
>
> // Anonymous class with search parameters. Reflection is used to extract
> values.
> // This is about the simplest way to write a "fixed hashmap" in C#
> var movieSearchParameter = new { category = "Thriller" };
>
> // Get resource stored at the link endpoint
> MovieCollection movies =
> MoviesSearchLink.Get<MovieCollection>(movieSearchParameter);
>
> // Iterate over all movies and print title
> foreach (Movie movie in movies)
> Console.WriteLine("Title: " + movie.Title);
> Comments:
>
> - A Link is untyped. We do not know what lies at the end of it.
>
> - A link knows how to merge parameters into URL templates.
>
> - The result of GETing a link is typed. The actual type is defined by the
> returned mime type.
>
> - In order to do something useful with the search we must assume that it
> returns a MovieCollection. Hence the generic type specifier in the
> Get<T>()
> method. This is a priori information which I cannot see how to code
> without.
>
>
> *Parsing resources*
> One piece of magic is how Get<MovieCollection>(params) knows how to
> convert
> the bytes returned from the endpoint to a MovieCollection. For this we
> create a MimeTypeRegistry:
>
> MimeTypeRegistry.Register<MovieCollection,
> MovieCollectionBuilder>("application/vnd.movies.movie-collection");
>
> which is equal to:
>
> MimeTypeRegistry.Register(typeof(MovieCollection),
> typeof(MovieCollectionBuilder),
> "application/vnd.movies.movie-collection");
>
> This means: whenever we must parse a specific mime type, we look up a
> builder in the registry and use it to parse the returned resource
> representation.
>
> The typed Get<MovieCollection>(params) method GETs the resource data,
> instantiates the corresponding builder, verifies that the built object
> type
> matches the requested type, and returns the built object.
>
> Comments:
>
> - This is static typing, which RESTafarians seem to shy away from. But the
> type depends on the returned resource, _not_ the URL. So to my knowledge
> this is fine.
>
> - It is not required to use the type safe Get<T>(); you could also call
> Get() which returns an object. The actual returned type then depends
> solely
> on the mime type of the resource, and it is up to the programmer to
> decide
> what to do with it.
>
> - I am quite sure you can write some pretty generic XML builders without
> much overhead.
>
> - This is not limited to XML, you could add image/jpeg and other well
> known
> mime types. You just need to supply a proper builder.
>
>
> *Code example - Getting sub-resources*
> Now we want to get information about the director of the movie:
>
> // One of the returned self links from the search query
> Link movieLink = movies[0].Self;
>
> // Get the actual movie
> Movie movie = movieLink.Get<Movie>();
>
> // Get the director
> MoviePerson director = movie.Director.Get<MoviePerson>();
>
> Comments:
>
> - There are no hard coded links here.
>
> - The only a priori information we use is the knowledge of the types of the
> referenced resources. These types are documented in the mime type in
> which
> the links are used.
>
>
> *Versioning*
> Now our wonderful movie shop decides to start selling and rating movies.
> They do their own selling, but use the famous ratings.org service to
> rate their movies. So the shop creates a new version of the movie mime
> type:
>
> // Example "application/vnd.movies.movie.*v2*+xml"
> <Movie>
> <Self href="http://movies.org/movies/91"/>
> <Title>Strange Dawn</Title>
> <Category>Thriller</Category>
> <Director href="http://movies.org/persons/47"/>
> <Orders href="http://movies.org/movies/91/orders"/>
> <Ratings
> href="http://ratings.org/ratings?item=http%3a%2f%2fmovies.org%2fmovies%2f91"/>
> </Movie>
>
> In order to service both old and new clients the shop decides to return
> the
> initial movie mime type by default. Newer clients should use the Accept
> header to indicate that they want the new version. The same goes for the
> movies collection type.
>
> Our existing client code works happily as it did before.
>
>
> *Code example - A new client*
> The new client code would look like this:
>
> // A link (template). This should be fetched from a configuration file.
> Link MoviesSearchLink = new Link("
> http://movies.org/movies?category={category}");
>
> // Anonymous class with search parameters. Reflection is used to extract
> values.
> // This is about the simplest way to write a "fixed hashmap" in C#
> var movieSearchParameter = new { category = "thriller" };
>
> // Setting up the Accept header
> var movieSearchHeaders = new { Accept =
> "application/vnd.movies.movie-collection.v2" }
>
> // Get resource stored at the link endpoint
> MovieCollection movies =
> MoviesSearchLink.Get<MovieCollection>(movieSearchParameter,
> movieSearchHeaders);
>
> // Iterate over all movies and print title
> foreach (Movie movie in movies)
> Console.WriteLine("Title: " + movie.Title);
>
> *Code example - Buying movies*
> Now we have a movie which has an embedded link to its sales orders. To
> buy
> a movie we post a new order to the sales order collection:
>
> // One of the returned self links from the search query
> Link movieLink = movies[0].Self;
>
> // Get the actual movie
> Movie movie = movieLink.Get<Movie>();
>
> // Create a new order request
> MovieOrderRequest orderRequest = new MovieOrderRequest(movie.Self, 1 /*
> quantity */);
>
> // Post the order request to the order collection
> // Assume it returns the newly created order
> MovieOrder order = movie.Orders.Post(orderRequest);
>
> Comments:
>
> - The POST results in a redirect to the newly created order. The system
> GETs
> this new order and returns it. This means we lose the intermediate data
> returned from the POST.
>
>
> *Other verbs*
> The Link class has built-in support for GET/PUT/POST/DELETE. Other
> verbs
> can be executed through a generic "Request" method:
>
> SomeType x = someLink.Request("SOMEVERB", somePayload);
>
>
> *Caching*
> The Link class and its associated methods should of course respect ETag
> and
> If-Modified-Since etc. This would require the framework to be
> initialized with a cache implementation of some kind.
>
>
> *Error handling*
> I would suggest using exceptions for error handling.
>
>
>
>
>
> For the trivial case, I don't see a reason why you should be having to set
> the Accept header -- framework should do that for you. It should set the
> content type on the way out properly, and set the accept header as well,
> since it "knows" you want the Movie info.

That's a good point.

> The common headers should be First Class. I shouldn't have to set an
> "If-Modified" myself, I should be able to link.setIfModifiedSince(myDate),
> so I don't have to marshal the dates myself.

It is my hope that such a thing could be handled by a transparent caching layer. The cache will know what was returned last time, and insert that automatically in the headers. I do, however, not know if this is possible/practical.

> Going back to the mime mapping, that should be controllable at a higher,
> "global"/framework level, but also at the request level.

Good point too.

Thanks, Jørn

----- Original Message -----
From: "Will Hartung" <willh@...>
To: "Jørn Wildt" <jw@...>
Cc: "rest-discuss" <rest-discuss@yahoogroups.com>
Sent: Thursday, December 10, 2009 12:48 AM
Subject: Re: [rest-discuss] The "purist" C# REST client?

> Just a couple of thoughts out loud.
>
> I like the mime-type mapping framework; that would be a nice, generic piece
> of code all on its own, and it should marshal both ways.
>
> I think you will need to use that in many places. Not simply for GETs, but
> for pushing data as well.
>
> But another place that would be valuable is the fact that most any HTTP
> result can have a typed body. So it's easy to envision, when you get some
> warning or other error message (perhaps a redirect), that you can leverage
> the ability to send more interesting data than what is simply in the
> headers, and have that data marshaled for you automagically.
>
> I think the LINK infrastructure should be aware of things like redirects,
> and expose those things.
> If you hit a URI that gets a permanent or temporary redirect, it would be
> nice for the client to honor that at least somewhat transparently. And by
> that I don't mean silently following the redirect, but I mean if it sees
> you hitting it again, it will simply jump straight to the final
> destination.
>
> For the trivial case, I don't see a reason why you should be having to set
> the Accept header -- the framework should do that for you. It should set
> the content type on the way out properly, and set the accept header as
> well, since it "knows" you want the Movie info.
>
> The common headers should be First Class. I shouldn't have to set an
> "If-Modified" myself, I should be able to link.setIfModifiedSince(myDate),
> so I don't have to marshal the dates myself.
>
> Going back to the mime mapping, that should be controllable at a higher,
> "global"/framework level, but also at the request level. I can easily see
> having to map "text/xml" to something different at the request level
> depending on the link.
>
> So, anyway, just some quick thoughts.
>
> Regards,
>
> Will Hartung
> (willh@...)
You're absolutely right. It's been more than a year since Roy Fielding gave us a slap on the wrist about needing more dynamic binding and loose coupling. I don't think that we're there yet, but plenty of people are working to change that. Guys like Jim Webber, frameworks like Restfulie, and the work that plenty of others (including myself) have done will change that.

I've had quite a few interesting conversations in the last month or so, and I'm working on a solution... I'll pass it by this group in the near future.

-Solomon

On Thu, Dec 10, 2009 at 5:31 PM, faisalwaris <faisalwaris@...> wrote:
> I am not sure if any of these are proper REST. I don't know about LinkedIn
> but both Netflix and Flickr use private data structures. I suppose that
> leads to early binding and tight coupling.
>
> --- In rest-discuss@yahoogroups.com, Solomon Duskis <sduskis@...> wrote:
> >
> > The Netflix API is pretty good: http://developer.netflix.com/. The Flickr
> > "REST" API works, and is worth checking out, but isn't really REST in the
> > official sense: http://www.flickr.com/services/api/. The LinkedIn REST API
> > is new and exciting: http://developer.linkedin.com/community/apis
> >
> > Does this help?
> >
> > -Solomon
> >
> > On Thu, Nov 26, 2009 at 3:45 PM, swschilke <steffen.schilke@...> wrote:
> > > Dear *,
> > >
> > > can you kindly point me out some good examples of REST implementations?
> > > Preferably well documented ;-) - I've read the O'Reilly book about the
> > > Twitter API but I want to see more.
> > >
> > > Kind regards
> > >
> > > sws
On 26 Nov 2009, at 20:45, swschilke wrote:

> Dear *,
>
> can you kindly point me out some good examples of REST implementations? Preferably well documented ;-) - I've read the O'Reilly book about the Twitter API but I want to see more.

Try http://dbpedia.org and the growing Linked Data space http://linkeddata.org/

Henry

> Kind regards
>
> sws
These seem like great examples of RDF and semantic web linked data. I definitely will be looking into it. However, I'm not quite sure how that would apply to a business scenario which includes both linked data and operations and workflow.

What's the sweet spot for RDF? What rules of thumb would you use to choose RDF vs. Atom vs. XML/JSON API alternatives?

-Solomon

On Thu, Dec 10, 2009 at 7:29 PM, Story Henry <henry.story@...> wrote:
> On 26 Nov 2009, at 20:45, swschilke wrote:
>
> > Dear *,
> >
> > can you kindly point me out some good examples of REST implementations?
> > Preferably well documented ;-) - I've read the O'Reilly book about the
> > Twitter API but I want to see more.
>
> Try http://dbpedia.org and the growing Linked Data space
> http://linkeddata.org/
>
> Henry
>
> > Kind regards
> >
> > sws
RDF is pretty sweet, especially when you discover Turtle and N3. It definitely made my XML skills sharper. For the enterprise it's a tough sell, because it's unknown and has poor tool support. My 2c.

On Thu, Dec 10, 2009 at 5:41 PM, Solomon Duskis <sduskis@...> wrote:
> These seem like great examples of RDF and semantic web linked data. I
> definitely will be looking into it. However, I'm not quite sure how that
> would apply to a business scenario which includes both linked data and
> operations and workflow.
>
> What's the sweet spot for RDF? What rules of thumb would you use to choose
> RDF vs. Atom vs. XML/JSON API alternatives?
>
> -Solomon
>
> On Thu, Dec 10, 2009 at 7:29 PM, Story Henry <henry.story@...> wrote:
>> On 26 Nov 2009, at 20:45, swschilke wrote:
>>
>> > Dear *,
>> >
>> > can you kindly point me out some good examples of REST implementations?
>> > Preferably well documented ;-) - I've read the O'Reilly book about the
>> > Twitter API but I want to see more.
>>
>> Try http://dbpedia.org and the growing Linked Data space
>> http://linkeddata.org/
>>
>> Henry
>>
>> > Kind regards
>> >
>> > sws
Our platform API for managing RDF storage is RESTful; see http://n2.talis.com/wiki/API_Site_Map for the docs.

On Fri, Dec 11, 2009 at 12:29 AM, Story Henry <henry.story@...> wrote:
> On 26 Nov 2009, at 20:45, swschilke wrote:
>
>> Dear *,
>>
>> can you kindly point me out some good examples of REST implementations? Preferable well documented ;-) - I've read the O'Reilly book about the Twitter API but I want to see more.
>
> Try http://dbpedia.org and the growing Linked Data space http://linkeddata.org/
>
> Henry
>
>> Kind regards
>>
>> sws
Jørn Wildt wrote:
>> The common headers should be First Class. I shouldn't have to set an
>> "If-Modified" myself, I should be able to link.setIfModifiedSince(myDate),
>> so I don't have to marshal the dates myself.
>
> It is my hope that such a thing could be handled by a transparent caching
> layer. The cache will know what was returned last time, and insert that
> automatically in the headers. I do, however, not know if this is
> possible/practical.

I made a stab at wrapping the WebRequest object with a CachedWebRequest object that applied such functionality and then presented the same interface known to all C# coders. I never took it very far, but was happy enough that it could certainly be done (all I wanted to know at the time).

Allowing your framework to have its means of obtaining a WebRequest overridden may be all you need to allow for such cache handling as a separate component.
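A minimal sketch of that decorator idea, assuming a hypothetical IResponseCache that remembers the last ETag per URI (none of these names exist in the .NET framework itself):

```csharp
using System.Net;

// Hypothetical cache interface: remembers validators from earlier responses.
public interface IResponseCache
{
    string GetETag(string uri);  // last ETag seen for this URI, or null
}

// Hypothetical factory: creates requests with cache validators already
// attached, so callers keep using the familiar HttpWebRequest interface.
public static class CachedRequestFactory
{
    public static HttpWebRequest Create(string uri, IResponseCache cache)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        string etag = cache.GetETag(uri);
        if (etag != null)
            request.Headers[HttpRequestHeader.IfNoneMatch] = etag;
        return request;
    }
}
```

Letting a framework's means of obtaining a WebRequest be overridden with something like this keeps the cache handling in a separate, swappable component.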
Jon Hanna wrote: > Allowing your framework to have it's means of obtaining a WebRequest > overridden, may be all you need to allow for such cache handling as a > separate component. Oh wait. Now I remember wanting to have caching at the level just above this (object caching) based on HTTP caching declarations. I never did entirely decide whether that would be a good idea or not.
My experience was that binding caching too tightly to the request object (burying it, really) was not a good idea. When I'm coding my HTTP apps, I want control over this, not auto-magic stuff. Also, sometimes I can count on an existing cache infrastructure between my app (client or server) and the WWW. When that happens, I need very little caching work done within the library itself. IMO, the really important item in this area is proper support for conditional requests. Supporting GET is pretty straight-forward, PUT gets messy. mca http://amundsen.com/blog/ On Fri, Dec 11, 2009 at 10:34, Jon Hanna <jon@...> wrote: > Jon Hanna wrote: >> Allowing your framework to have it's means of obtaining a WebRequest >> overridden, may be all you need to allow for such cache handling as a >> separate component. > > Oh wait. Now I remember wanting to have caching at the level just above > this (object caching) based on HTTP caching declarations. I never did > entirely decide whether that would be a good idea or not.
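The transparent caching layer Jørn and Jon describe can be sketched in a few lines: a wrapper that remembers each URI's validators and injects the conditional headers itself. This is a Python stand-in for the C# WebRequest wrapper idea; all names here are hypothetical, not any real library's API.

```python
class ConditionalCache:
    """Toy caching layer: remembers validators per URI and adds
    conditional request headers automatically (hypothetical names)."""

    def __init__(self, transport):
        # transport: callable (uri, headers) -> (status, resp_headers, body)
        self._transport = transport
        self._store = {}  # uri -> (etag, last_modified, body)

    def get(self, uri):
        headers = {}
        cached = self._store.get(uri)
        if cached:
            etag, last_modified, _ = cached
            if etag:
                headers["If-None-Match"] = etag
            if last_modified:
                headers["If-Modified-Since"] = last_modified
        status, resp_headers, body = self._transport(uri, headers)
        if status == 304 and cached:
            return cached[2]  # 304 Not Modified: serve the stored copy
        self._store[uri] = (resp_headers.get("ETag"),
                            resp_headers.get("Last-Modified"), body)
        return body
```

The point of the sketch is Jørn's: the application code just calls get(); the cache, not the caller, marshals the dates.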
Moore, Jonathan (CIM) wrote: > Users (myself included) are used to "Reply" going just to the author of > an email and "Reply All" going to everyone on the original. And of course if the original reply-to is not the same as the from there is no way to opt to mail just the author. > I have > always had the converse annoyance of intending to send a private reply > on a list configured contrary to this expectation and having that > accidentally go out to everyone. I once did this with a mail which when taken out of context of previous private mails, gave the strong impression that I was terminally ill. This was still far from the most embarrassing such mail I've seen sent to a list. The one downside to not munging though is that we each get the delivery failure notifications. Can somebody clean up the member list?
On Wed, Dec 9, 2009 at 12:30 AM, Jørn Wildt <jw@...> wrote: > Can anyone tell the reason for letting "reply" only reply to the posting > author on this list? The result is that most people do a "reply to all" > which results in two duplicate replies on my mail client most of the time. > Can it be changed? I think so, but there's a good reason most list management servers aren't set up this way anymore (see Jon's links), and I have no interest in relearning those (awkward/embarrassing/occasionally-career-limiting) lessons here. Mark.
Kris,
1: The 6.1.1.2. rel section has schema that looks like JSON instance
data I tend to use. What would the schema for this instance data look
like -
"links":[
{"rel":"inbox", "href":"http://example.org/ib.php?id=1"},
{"rel":"outbox", "href":"http://example.org/ob.php?id=3453"},
{"rel":"replies", "href":"http://example.org/rp.php?id=3453"},
{"rel":"homepage", "href":"http://example.org/home.php?id=3453"}
]
2: Can the rel "6.1.1.2. rel" support multiple values like HTML?
3: "However, we define these relation here ". I don't think a schema
document should define values for "rel". That would be the job of
another RFC
Bill
Kris Zyp wrote:
>
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> This JSON Schema media type has been submitted as an Internet Draft to
> the IETF, and I thought this might be of interest to REST advocates
> since a substantial portion of the specification is devoted to
> describing link relations in the JSON documents defined by JSON schema
> (intended to provide a more interoperable mechanism for hypertext
> navigation of JSON in a REST architecture):
> http://tools.ietf.org/html/draft-zyp-json-schema-01
> <http://tools.ietf.org/html/draft-zyp-json-schema-01>
>
> Any feedback is appreciated.
>
> Thanks,
>
> - --
> Kris Zyp
> SitePen
> (503) 806-1841
> http://sitepen.com <http://sitepen.com>
>
> - --
> Kris Zyp
> SitePen
> (503) 806-1841
> http://sitepen.com <http://sitepen.com>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.9 (MingW32)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
> <http://enigmail.mozdev.org/>
>
> iEYEARECAAYFAksZ6wwACgkQ9VpNnHc4zAyNSQCcCYl9pcTz/xU9MtpCZ47O5zsX
> AUwAoKoZ8YAhHCBfc/GJs9C4aEQAapKh
> =soAw
> -----END PGP SIGNATURE-----
>
>
On Fri, Dec 11, 2009 at 7:45 AM, mike amundsen <mamund@...> wrote: > My experience was that binding caching too tightly to the request > object (burying it, really) was not a good idea. When I'm coding my > HTTP apps, I want control over this, not auto-magic stuff. Also, > sometimes I can count on an existing cache infrastructure between my > app (client or server) and the WWW. When that happens, I need very > little caching work done within the library itself. > > IMO, the really important item in this area is proper support for > conditional requests. Supporting GET is pretty straight-forward, PUT > gets messy. Observationally, it seems to me that the "Hard" part about REST via HTTP is at the protocol layer, especially since a lot of the operational burden is on the client. On the server, the logic is reasonably straightforward, especially since the transaction is stateless. But the client has to handle the task of behaving properly, and it is NOT stateless. A simple GET isn't necessarily so simple when you consider all of the possible result cases and caching. Similarly, POST and PUT have their struggles as well, such as redirects, conditional operations, and even things like 100-Continue. Amplifying this is the fact that most folks use so little of the HTTP protocol, and suddenly the "ubiquitous", "oh it's just HTTP" simplicity becomes "Oh, you mean THAT HTTP, this is harder than I thought". Regards, Will Hartung (willh@...)
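Will's point about the client's burden can be made concrete with a sketch of the case analysis behind a "simple" GET. This is illustrative and deliberately non-exhaustive; the function and action names are invented for the sketch.

```python
def interpret_get(status, headers, body, cached_body=None):
    """Non-exhaustive case analysis a client must perform after a GET.
    Returns an (action, payload) pair; action names are made up here."""
    if status == 200:
        return ("use", body)                      # fresh representation
    if status == 304:
        if cached_body is None:
            return ("error", "304 without a cached copy")
        return ("use", cached_body)               # revalidated: reuse cache
    if status in (301, 302, 303, 307):
        return ("follow", headers.get("Location"))  # redirect: new request
    if 400 <= status < 500:
        return ("client-error", status)           # e.g. 404, 406, 410...
    if 500 <= status < 600:
        return ("retry-or-fail", status)          # server-side trouble
    return ("error", status)
```

Even this toy omits caching freshness, Vary, authentication challenges, and 100-Continue, which is rather the point.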
Jon Hanna wrote: > > The one downside to not munging though is that we each get the > delivery failure notifications. Can somebody clean up the member list? > +1 -Eric
Jan Algermissen wrote: > > 3) POST to an update-processor subresource, e.g. > POST /person/3344/updates and have server return > 303 See Other > Location: /person/3344 > What would be the response to a GET request for /person/3344/updates ? -Eric
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Bill de hOra wrote:
> Kris,
>
> 1: The 6.1.1.2. rel section has schema that looks like JSON
> instance data I tend to use. What would the schema for this
> instance data look like -
>
>
> "links":[ {"rel":"inbox", "href":"http://example.org/ib.php?id=1"},
> {"rel":"outbox", "href":"http://example.org/ob.php?id=3453"},
> {"rel":"replies", "href":"http://example.org/rp.php?id=3453"},
> {"rel":"homepage", "href":"http://example.org/home.php?id=3453"} ]
>
Hmm, the schema doesn't have a meta-definition for indicating instance
properties that specify relations. Perhaps that should be added (for
situations when instances have heterogeneous sets of relations).
However, the intent was that you could write a schema with
"links":[
{"rel":"inbox", "href":"http://example.org/ib.php?id={user_id}"},
{"rel":"outbox", "href":"http://example.org/ob.php?id={mailbox_id}"},
{"rel":"replies", "href":"http://example.org/rp.php?id={mailbox_id}"},
{"rel":"homepage", "href":"http://example.org/home.php?id={mailbox_id}"}
]
And then all the instances could be written succinctly:
{
"user_id": 1,
"mailbox_id": 3454,
.. rest of the data ..
}
And then using the schema, the instance could be interpreted as having
the appropriate "inbox", "outbox", "replies", and "homepage" relations.
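As a sketch only (the helper is made up and is not part of the JSON Schema draft), resolving such templated hrefs against an instance might look like:

```python
import re

def resolve_links(schema_links, instance):
    """Expand {property} placeholders in each link's href using the
    instance's property values (hypothetical helper, illustration only)."""
    resolved = []
    for link in schema_links:
        href = re.sub(r"\{(\w+)\}",
                      lambda m: str(instance[m.group(1)]),  # look up property
                      link["href"])
        resolved.append({"rel": link["rel"], "href": href})
    return resolved
```

Given the schema links above and an instance like {"user_id": 1, "mailbox_id": 3453}, this would yield the concrete "inbox", "outbox", etc. hrefs.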
Another option if you want a different set of relations per instance is
that you could lean on implied relations based on the JSON hierarchy
of the instance, and then define a "full" representation property that
could be used for sub-objects. A schema could be:
{
"additionalProperties": {
"links":[
{
"rel":"full",
"href":"{href}"
}
]
}
}
And your instance could be:
{
... some data ...
"inbox":{"href":"http://example.org/ib.php?id=1"},
"outbox": {"href":"http://example.org/ob.php?id=3453"},
"replies": {"href":"http://example.org/rp.php?id=3453"},
"homepage": {"href":"http://example.org/home.php?id=3453"}
}
I realize that some might not like implying relationships by JSON
hierarchy, but it does fit well with JSON, since JSON is structured as
an edge-labeled graph, like relational graphs (vs. XML, which is a
node-labeled graph, from whose structure it doesn't make sense to
imply relations).
> 2: Can the rel "6.1.1.2. rel" support multiple values like HTML?
The property itself is currently single-valued. We could allow for an
array of relations, but it seems like it would be easy enough to
define multiple rel's for a given URI just by defining multiple links:
"links":[
{"rel":"relationA", "href":"http://example.org/{id}"},
{"rel":"relationB", "href":"http://example.org/{id}"},
...
> 3: "However, we define these relation here ". I don't think a
> schema document should define values for "rel". That would be the
> job of another RFC
I could break that out into a separate draft; I hadn't realized that
media type RFCs weren't supposed to define relations. Is it
permissible to make non-normative recommendations of relations in a
media type specification?
Thanks for the suggestions,
- --
Kris Zyp
SitePen
(503) 806-1841
http://sitepen.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/
iEYEARECAAYFAksi23wACgkQ9VpNnHc4zAy64wCgnxrzn+qbCMk7vFyyNcPhgSf5
QM0An1uKNdTJgE/ehyD67GpxzcguXXFz
=CySr
-----END PGP SIGNATURE-----
On Dec 11, 2009, at 3:53 PM, Kris Zyp wrote: >> 3: "However, we define these relation here ". I don't think a >> schema document should define values for "rel". That would be the >> job of another RFC > > I could break that out into a separate draft, I hadn't realized that > media type RFCs weren't supposed to define relations. Is it > permissible to make non-normative recommendations of relations in a > media type specification? See http://tools.ietf.org/html/draft-nottingham-http-link-header-06#section-6.2 which establishes a process for registering new link relation types. Subbu
Can anyone help me on which http response code should be used if the client sends some content with an unaccepted "content-type" to a resource? i.e. he does a PUT to /cities with content-type="vnd/something_else_but_city+xml". 406 does not seem to fit because it is not related to the "Accept" header, but the "Content-type" one. Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/
i return 406 whenever the content-type is not supported, for all methods. mca http://amundsen.com/blog/ On Sat, Dec 12, 2009 at 16:36, Guilherme Silveira <guilherme.silveira@...> wrote: > Can anyone help me on which http response code should be used if the > client send some content with an unaccepted "content-type" to a > resource? i.e. he does a PUT to /cities with > content-type="vnd/something_else_but_city+xml". > > 406 does not seem to fit because it is not related to the "Accept" > header, but the "Content-type" one. > > Regards > > Guilherme Silveira > Caelum | Ensino e Inovação > http://www.caelum.com.br/
Guilherme, 415 Unsupported Media Type Jan On Dec 12, 2009, at 10:36 PM, Guilherme Silveira wrote: > Can anyone help me on which http response code should be used if the > client send some content with an unaccepted "content-type" to a > resource? i.e. he does a PUT to /cities with > content-type="vnd/something_else_but_city+xml". > > 406 does not seem to fit because it is not related to the "Accept" > header, but the "Content-type" one. > > Regards > > Guilherme Silveira > Caelum | Ensino e Inovação > http://www.caelum.com.br/ -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 12, 2009, at 11:05 PM, mike amundsen wrote: > i return 406 whenever the content-type is not supported, for all > methods. > 406 is used when the server cannot send any of the content-types accepted (Accept header) by the client. Jan > mca > http://amundsen.com/blog/ > > On Sat, Dec 12, 2009 at 16:36, Guilherme Silveira > <guilherme.silveira@...m.br> wrote: >> Can anyone help me on which http response code should be used if the >> client send some content with an unaccepted "content-type" to a >> resource? i.e. he does a PUT to /cities with >> content-type="vnd/something_else_but_city+xml". >> >> 406 does not seem to fit because it is not related to the "Accept" >> header, but the "Content-type" one. >> >> Regards >> >> Guilherme Silveira >> Caelum | Ensino e Inovação >> http://www.caelum.com.br/ -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
!Doh! thanks Jan. i don't know what i was doing. mca http://amundsen.com/blog/ On Sat, Dec 12, 2009 at 17:06, Jan Algermissen <algermissen1971@mac.com> wrote: > Guilherme, > > 415 Unsupported Media Type > > Jan > > On Dec 12, 2009, at 10:36 PM, Guilherme Silveira wrote: > >> Can anyone help me on which http response code should be used if the >> client send some content with an unaccepted "content-type" to a >> resource? i.e. he does a PUT to /cities with >> content-type="vnd/something_else_but_city+xml". >> >> 406 does not seem to fit because it is not related to the "Accept" >> header, but the "Content-type" one. >> >> Regards >> >> Guilherme Silveira >> Caelum | Ensino e Inovação >> http://www.caelum.com.br/ > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@acm.org > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > --------------------------------------
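The thread's outcome can be summarized as a small decision table: 415 when the server cannot read what the client sent (Content-Type), 406 when it cannot write anything the client will accept (Accept). A hedged sketch, with invented function and parameter names:

```python
def negotiate(request_content_type, accept_header, readable_types, producible_types):
    """Toy decision table for the status codes discussed above.
    readable_types: media types the server can parse in a request body.
    producible_types: media types the server can emit in a response."""
    if request_content_type not in readable_types:
        return 415  # Unsupported Media Type: we can't read the body
    # crude Accept parsing: drop parameters like q-values (illustration only)
    acceptable = [t.strip().split(";")[0] for t in accept_header.split(",")]
    if "*/*" not in acceptable and not any(t in acceptable for t in producible_types):
        return 406  # Not Acceptable: we can't produce what the client reads
    return 200
```

Real Accept handling (q-values, type/* ranges) is more involved; this only shows which header drives which code.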
On 7 Dec 2009, at 18:59, Jørn Wildt wrote: > > Consider a movie collection ressource with content type application/ > vnd.movies+xml: I can search this collection by posting a search > query (and get redirected to the result) - but how do the service > destinguish between posting a search query and posting a new movie? > By simply looking at the posted data? A search query would normally be a GET request on the collection, rather than a POST, unless you are creating new query resource. But in general, you can only tell what type of resource is being created by its media type. You cant generally use the contents of the resource, as this might get changed. So if you want different sorts of object to be created you had better give them different media types, like a search query type, or the movie type, or a purchase order or whatever. Justin
On 8 Dec 2009, at 21:38, Will Hartung wrote: > I think to be pedantic, you would use PATCH instead of PUT for this, > but that's just because it seems to have found favor (I don't know the > origin for PATCH, as it's not one of HTTP verbs, though WebDAV uses > PROPPATCH, so there's likely some inspiration from that). > PATCH just got approved by the IETF, https://datatracker.ietf.org/drafts/draft-dusseault-http-patch/ There is some history in the document - it was in some original drafts. short blog post: http://blog.technologyofcontent.com/2009/12/smart-resources-or-why-you-should-care-about-http-patch/ Justin
On Sat, Dec 12, 2009 at 5:08 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 12, 2009, at 11:05 PM, mike amundsen wrote: > > > i return 406 whenever the content-type is not supported, for all > > methods. > > > > 406 is used when the server cannot send any of the content-types > accepted (Accept header) by the client. > I'd like to extend this question to be, what if the "POSTed" content was correct but the acceptable formats indicated in the Accept header aren't supported? In other words, client posts some data that's valid but in their POST, they have an Accept header that can't be supported by the server. It's a contrived test case I suppose, but should the original POST succeed? what should be returned? Thanks, --tim
Hello Tim,
> I'd like to extend this question to be, what if the "POSTed" content
> was correct but the acceptable formats indicated in the Accept header
> aren't supported?
It seems that 406 is the response for that:
"The resource identified by the request is only capable of generating
response entities which have content characteristics not acceptable
according to the accept headers sent in the request.
Unless it was a HEAD request, the response SHOULD include an entity
containing a list of available entity characteristics and location(s)
from which the user or user agent can choose the one most appropriate.
The entity format is specified by the media type given in the
Content-Type header field. Depending upon the format and the
capabilities of the user agent, selection of the most appropriate
choice MAY be performed automatically. However, this specification
does not define any standard for such automatic selection.
Note: HTTP/1.1 servers are allowed to return responses which are
not acceptable according to the accept headers sent in the
request. In some cases, this may even be preferable to sending a
406 response. User agents are encouraged to inspect the headers of
an incoming response to determine if it is acceptable.
If the response could be unacceptable, a user agent SHOULD temporarily
stop receipt of more data and query the user for a decision on further
actions."
Regards
> In other words, client posts some data that's valid
> but in their POST, they have an Accept header that can't be supported
> by the server. It's a contrived test case I suppose, but should the
> original POST succeed? what should be returned?
>
> Thanks,
> --tim
>
I think there was some consensus here that it might be ok to allow the POST but send back an unacceptable content type, with the notion that the client will need to check that it can handle the response anyway. To be completely conservative, you could send a 406 and not let the POST occur. Another option here would be to send a 201 (Created) or 204 (No Content) with a Content-Location pointing to the new resource and no response body; this way the server can indicate success without sending a response body the client won't be able to handle. The client can go do content negotiation with the new resource itself in the usual way if desired. Jon ........ Jon Moore Comcast Interactive Media -----Original Message----- From: rest-discuss@yahoogroups.com on behalf of Tim Williams Sent: Mon 12/14/2009 7:50 AM To: Jan Algermissen Cc: mike amundsen; Guilherme Silveira; rest-discuss Subject: Re: [rest-discuss] invalid content type On Sat, Dec 12, 2009 at 5:08 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 12, 2009, at 11:05 PM, mike amundsen wrote: > > > i return 406 whenever the content-type is not supported, for all > > methods. > > > > 406 is used when the server cannot send any of the content-types > accepted (Accept header) by the client. > I'd like to extend this question to be, what if the "POSTed" content was correct but the acceptable formats indicated in the Accept header aren't supported? In other words, client posts some data that's valid but in their POST, they have an Accept header that can't be supported by the server. It's a contrived test case I suppose, but should the original POST succeed? what should be returned? Thanks, --tim
On Dec 14, 2009, at 2:12 PM, Moore, Jonathan (CIM) wrote: > I think there was some consensus here that it might be ok to allow > the POST but send back an unacceptable content type, with the notion > that the client will need to check that it can handle the response > anyway. Yes, I'd do that, too. The important information is that the POST succeeded and where the new resource is (for 201). Jan > > To be completely conservative, you could send a 406 and not let the > POST occur. > > Another option here would be to send a 201 (Created) or 204 (No > Content) with a Content-Location pointing to the new resource and no > response body; this way the server can indicate success without > sending a response body the client won't be able to handle. The > client can go do content negotiation with the new resource itself in > the usual way if desired. > > Jon > ........ > Jon Moore > Comcast Interactive Media > > > > -----Original Message----- > From: rest-discuss@yahoogroups.com on behalf of Tim Williams > Sent: Mon 12/14/2009 7:50 AM > To: Jan Algermissen > Cc: mike amundsen; Guilherme Silveira; rest-discuss > Subject: Re: [rest-discuss] invalid content type > > On Sat, Dec 12, 2009 at 5:08 PM, Jan Algermissen > <algermissen1971@...> wrote: >> >> On Dec 12, 2009, at 11:05 PM, mike amundsen wrote: >> >>> i return 406 whenever the content-type is not supported, for all >>> methods. >>> >> >> 406 is used when the server cannot send any of the content-types >> accepted (Accept header) by the client. >> > > I'd like to extend this question to be, what if the "POSTed" content > was correct but the acceptable formats indicated in the Accept header > aren't supported? In other words, client posts some data that's valid > but in their POST, they have an Accept header that can't be supported > by the server. It's a contrived test case I suppose, but should the > original POST succeed? what should be returned? 
> > Thanks, > --tim > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 14, 2009, at 2:12 PM, Moore, Jonathan (CIM) wrote: > 204 (No Content) with a Content-Location Content-Location without a response entity does not make sense. (Though RFC 2616 does not forbid it explicitly) Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Sun, Dec 13, 2009 at 7:29 AM, Justin Cormack <justin@...> wrote: > short blog post: > http://blog.technologyofcontent.com/2009/12/smart-resources-or-why-you-should-care-about-http-patch/ Thanks for the link Justin. I think the note about server side patching via XSLT or Javascript code is quite novel. It basically turns PATCH into the analog of an UPDATE SQL statement, where the logic is executed server side. Most UPDATE statements are actually quite simple. But some are not (e.g. UPDATE mytable SET (value) = select thing from othertable WHERE id = 1). Something to think about IMHO. Regards, Will Hartung (willh@...)
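PATCH's semantics are defined by whatever media type the patch document uses. As one illustration of Will's UPDATE analogy, here is a minimal merge-style patch applier the server could run against a stored JSON resource. This is a sketch of one possible patch format, not any standardized one: nested objects merge, null deletes a member, anything else replaces.

```python
def apply_merge_patch(target, patch):
    """Apply a minimal merge-style patch: dict values merge recursively,
    None deletes a key, any other value replaces. Illustration only;
    real PATCH semantics depend on the patch document's media type."""
    if not isinstance(patch, dict):
        return patch  # scalar or list patch replaces the target outright
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)          # null means "remove this member"
        else:
            result[key] = apply_merge_patch(result.get(key), value)
    return result
```

The server-side placement is the point: the client ships a small diff, and the logic that computes the new state executes next to the data, like an UPDATE.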
Stefan Tilkov wrote: > > Slightly OT, but what are people experiences in implementing custom > client-side media type handlers? I mean registering a helper > application to handle, say, content of type > application/vnd.my-cool-stuff when a representation of this type is > returned in response to a browser request. > The same as all my other experience developing to REST's Uniform Interface - I got used to not doing it that way. Instead of using RPC-via-POST to implement my own application methods, I stick with GET-HEAD-PUT-POST-PATCH-DELETE-OPTIONS as per: " REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability. " http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_1 The key concept being the use of _standard_ methods and media types. In a nutshell, my advice on creating a mime type for your REST system is, don't! Back up, and figure out how to do it with existing standard methods and media types -- this is the requirement for serendipitous re-use. When I find myself defining new media types or HTTP request methods to solve a problem, I consider that I've gone off the rails. If I insist to myself that I am right, this does call for a new method and/or mime type, I go ahead and do it. But, I include an asterisk stating that the system will only be RESTful pending standardization of the non-standard methods and/or mime types I've used. Mostly, though, I've found that I can achieve my purpose without violating the "self-descriptive messaging" constraint. So, consider using custom extensions to a well-known mime type like application/atom+xml, or perhaps a subset like 'application/something+xml'. Better to use RELAX NG + Schematron to define an ontology that's compatible with application/xml.
Mostly, though, I like good ol' XHTML, with @class and @id a la microformats, plus RDFa these days. Sometimes wrapped in Atom, defined using RELAX NG + Schematron (starting with James Clark's RELAX NG interpretation of XHTML modularization). (Instead of defining a custom table structure in XML with a cell of <widget_price>1.00</widget_price>, use XHTML tables and <td class='widget_price'>1.00</td>. Clients that don't know that a widget costs a dollar can still render a human-readable table, while others may execute an XML PI to transform the whole thing to SVG for display. Or an agent could use something like GRDDL or RDFa, to extract that knowledge (using rel='transformation' and XSLT for GRDDL).) My point is, so much can be done these days using standard media types, particularly HTML and XML types, that I fail to understand why so many REST systems insist on using non-standard media types to solve problems already addressed through the standardization process. +1 to Noah: > > On 03.12.2009, at 21:51, Noah Campbell wrote: > > > > > Why not reuse an existing media-type? > > > > I fail to see how that would help me, as I'm trying to invoke a > particular proprietary handler. > If you absolutely can't represent your resources using standard media types, then you're looking at implementing REST's optional Code-on-Demand constraint: > > On 03.12.2009, at 17:46, Mark Baker wrote: > > > Have you looked at JAF? > > > > http://java.sun.com/javase/technologies/desktop/javabeans/jaf/downloads/index.html > > > > I first used it when it came out in '98 IIRC, but haven't kept up > > with it. > > > > Thanks, but AFAICT, that would only work within a Java environment, > i.e. I can register types within my Java program to be invoked once a > particular media type shows up. > Exactly. Your use case requires "client functionality to be extended by downloading and executing code in the form of [an applet]" designed to handle your non-standard media type.
If you're using Java, then any java-enabled browser can GET a representation of the standard media type application/java, perhaps by following a link in an <object> tag in an HTML document. Now, your non-standard media type may be manipulated in a RESTful fashion, using CoD, at the cost of some visibility. -Eric
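Eric's <td class='widget_price'> idea can be demonstrated with a small extractor, a toy rather than a microformats library, that pulls the text content of elements carrying a given @class out of an XHTML table. The class name and helper names are just the ones from his example.

```python
from html.parser import HTMLParser

class ClassValueExtractor(HTMLParser):
    """Collect the text content of elements whose @class matches,
    microformat-style (toy illustration of the widget_price example)."""
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted = wanted_class
        self.depth = 0        # >0 while inside a matched element
        self.values = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1   # nested tag inside a matched element
        elif dict(attrs).get("class") == self.wanted:
            self.depth = 1
            self.values.append("")

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.values[-1] += data

def extract(markup, wanted_class):
    parser = ClassValueExtractor(wanted_class)
    parser.feed(markup)
    return [v.strip() for v in parser.values]
```

A browser renders the same table for humans; an agent that knows the @class vocabulary gets the data, which is the dual-use payoff Eric describes.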
Eric,
On Dec 15, 2009, at 8:30 AM, Eric J. Bowman wrote:
> My point is, so much can be done these days using standard media
> types,
> particularly HTML and XML types, that I fail to understand why so many
> REST systems insist on using non-standard media types to solve
> problems
> already addressed through the standardization process.
Representations serve two purposes:
1. Representations are used to express the available transitions from
the current application state of the user agent
2. Representations provide information to the client, such as the
title of an HTML document, the name of an Atom entry's author,
etc.
Case one can be addressed with a generic linking mechanism as found in
HTML or Atom (or with the forthcoming Link header) and the appropriate
link relations.
Addressing case two in any scenario that goes beyond the document
formats in use on the Web, there is no way around using more or less
domain-specific formats. For example, you cannot convey to the client
the number of a leasing contract without markup for contracts (or even
leasing contracts). The domain model has to propagate into the media
types - that is unavoidable.
Jan
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Ian Davis wrote: > > Our platform API for managing RDF storage is RESTful see > http://n2.talis.com/wiki/API_Site_Map for the docs > Well, it's a fine HTTP API, but I wouldn't go any further than that, sorry. I would suggest reading Roy's blog post, here: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven Particularly egregious are the Changeset Protocol and Store OAI Service. While it's good to see Content Negotiation in action, it is not good to see it made part of a query URL. Your supported mime types each have unique filename extensions, why not use those, plus Content-Location? Or the Alternates header? Or the OPTIONS method? Or, if not a filename extension, why not a URI parameter? Anything but using URI queries. In a hypertext-driven API, use <link rel='alternate'/>. I could go on. For hours, after spending 30 minutes reviewing your site. Please don't promote this as a good example of a REST implementation. -Eric
Hello Eric, Can you check if I understood correctly? By using well-known media-types (as xhtml and atom): - (positive) intermediate layers are able to understand its information and act accordingly, although it does not know the meaning of a, i.e., class="contract" within a div - (positive) classes can represent what in other custom formats would be an xml element - (positive) we can use schematron to validate it on the server and client side - are intermediate layers schematron-aware? (might be negative?) - (positive) no custom media types - intermediate layers are able to understand everything which passes by - (neutral) the "human readable" way is "humans using browsers" All these positive items are related to the gain of visibility, is there something else that we benefit from? Any negative points that you have seen so far by using xhtml/atom+xml/subset+xml? Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ On Tue, Dec 15, 2009 at 5:30 AM, Eric J. Bowman <eric@...> wrote: > > > > Stefan Tilkov wrote: > > > > Slightly OT, but what are people experiences in implementing custom > > client-side media type handlers? I mean registering a helper > > application to handle, say, content of type > > application/vnd.my-cool-stuff when a representation of this type is > > returned in response to a browser request. > > > > The same as all my other experience developing to REST's Uniform > Interface - I got used to not doing it that way. Instead of using > RPC-via-POST to implement my own application methods, I stick with > GET-HEAD-PUT-POST-PATCH-DELETE-OPTIONS as per: > > " > REST enables intermediate processing by constraining messages to be > self-descriptive: interaction is stateless between requests, standard > methods and media types are used to indicate semantics and exchange > information, and responses explicitly indicate cacheability. 
> " > > http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_1 > > The key concept being the use of _standard_ methods and media types. > In a nutshell, my advice on creating a mime type for your REST system > is, don't! Back up, and figure out how to do it with existing standard > methods and media types -- this is the requirement for serendipitous > re-use. > > When I find myself defining new media types or HTTP request methods to > solve a problem, I consider that I've gone off the rails. If I insist > to myself that I am right, this does call for a new method and/or mime > type, I go ahead and do it. But, I include an asterisk stating that > the system will only be RESTful pending standardization of the non- > standard methods and/or mime types I've used. Mostly, though, I've > found that I can achieve my purpose without violating the "self- > descriptive messaging" constraint. > > So, consider using custom extensions to a well-known mime type like > application/atom+xml, or perhaps a subset like 'application/something > +xml'. Better to use RELAX NG + Schematron to define an ontology that's > compatible with application/xml. Mostly, though, I like good ol' > XHTML, with @class and @id a la microformats, plus RDFa these days. > Sometimes wrapped in Atom, defined using RELAX NG + Schematron > (starting with James Clark's RELAX NG interpretation of XHTML > modularization). > > (Instead of defining a custom table structure in XML with a cell of > <widget_price>1.00</widget_price>, use XHTML tables and <td > class='widget_price'>1.00</td>. Clients that don't know that a widget > costs a dollar can still render a human-readable table, while others > may execute an XML PI to transform the whole thing to SVG for display. > Or an agent could use something like GRDDL or RDFa, to extract that > knowledge (using rel='transformation' and XSLT for GRDDL).) 
> > My point is, so much can be done these days using standard media types, > particularly HTML and XML types, that I fail to understand why so many > REST systems insist on using non-standard media types to solve problems > already addressed through the standardization process. +1 to Noah: > > > > > On 03.12.2009, at 21:51, Noah Campbell wrote: > > > > > > > > Why not reuse an existing media-type? > > > > > > > I fail to see how that would help me, as I'm trying to invoke a > > particular proprietary handler. > > > > If you absolutely can't represent your resources using standard media > types, then you're looking at implementing REST's optional Code-on- > Demand constraint: > > > > > On 03.12.2009, at 17:46, Mark Baker wrote: > > > > > Have you looked at JAF? > > > > > > http://java.sun.com/javase/technologies/desktop/javabeans/jaf/downloads/index.html > > > > > > I first used it when it came out in '98 IIRC, but haven't kept up > > > with it. > > > > > > > Thanks, but AFAICT, that would only work within a Java environment, > > i.e. I can register types within my Java program to be invoked once a > > particular media type shows up. > > > > Exactly. Your use case requires "client functionality to be extended > by downloading and executing code in the form of [an applet]" designed > to handle your non-standard media type. If you're using Java, then any > java-enabled browser can GET a representation of the standard media type > application/java, perhaps by following a link in an <object> tag in an > HTML document. Now, your non-standard media type may be manipulated in > a RESTful fashion, using CoD, at the cost of some visibility. > > -Eric >
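As a minimal sketch of the XHTML-table idea Eric describes (the markup and class names follow his widget_price example; the use of Ruby's stdlib REXML parser is an editorial assumption, not something from the thread):

```ruby
# Minimal sketch: the same XHTML table a browser renders for humans can
# be mined by a client that knows the class='widget_price' convention.
# REXML is Ruby's stdlib XML parser; the markup below is illustrative.
require 'rexml/document'

xhtml = <<HTML
<table>
  <tr>
    <td class="widget_name">Widget</td>
    <td class="widget_price">1.00</td>
  </tr>
</table>
HTML

doc = REXML::Document.new(xhtml)
prices = []
doc.elements.each("//td[@class='widget_price']") do |td|
  prices << td.text.to_f
end
puts prices.inspect  # => [1.0]
```

A client that does not know the convention loses nothing: it still gets a perfectly renderable, human-readable table.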
Eric,

While I agree with you about the "missing link" of the RDF "RESTful" API, your statement doesn't supply an example of a hypertext-constraint-compliant API, simply a link to a rant. There are degrees of HATEOAS compliance in various APIs, but nothing that strikes me as particularly fully featured. Some more examples that I've seen that approach HATEOASness are:

- Jim Webber's work with <atom:link> and rel values to express workflow. I think his work is a great start, but doesn't go far enough.
- Sun's JSON-based Kenai (Cloud Management API). Again it has elements of HATEOASness, but doesn't really have a "you only use in-band communication" feel, IMHO.

There are plenty of great success stories with non-HATEOAS "REST" APIs, but I still haven't seen anything that resembles Roy's REST in what we're calling REST APIs.

-Solomon

2009/12/15 Eric J. Bowman <eric@...> > > > Ian Davis wrote: > > > > Our platform API for managing RDF storage is RESTful see > > http://n2.talis.com/wiki/API_Site_Map for the docs > > > > Well, it's a fine HTTP API, but I wouldn't go any further than that, > sorry. I would suggest reading Roy's blog post, here: > > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > Particularly egregious are the Changeset Protocol and Store OAI > Service. While it's good to see Content Negotiation in action, it is > not good to see it made part of a query URL. Your supported mime types > each have unique filename extensions, why not use those, plus Content- > Location? Or the Alternates header? Or the OPTIONS method? Or, if > not a filename extension, why not a URI parameter? Anything but using > URI queries. In a hypertext-driven API, use <link rel='alternate'/>. > > I could go on. For hours, after spending 30 minutes reviewing your > site. Please don't promote this as a good example of a REST > implementation. > > -Eric > >
On Dec 15, 2009, at 3:22 PM, Solomon Duskis wrote: > [...] but I still haven't seen anything that resembles Roy's REST in > what we're calling REST APIs. AtomPub is a good example. OpenSearch as well. Jan > > -Solomon > > 2009/12/15 Eric J. Bowman <eric@...> > > > Ian Davis wrote: > > > > Our platform API for managing RDF storage is RESTful see > > http://n2.talis.com/wiki/API_Site_Map for the docs > > > > Well, it's a fine HTTP API, but I wouldn't go any further than that, > sorry. I would suggest reading Roy's blog post, here: > > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > Particularly egregious are the Changeset Protocol and Store OAI > Service. While it's good to see Content Negotiation in action, it is > not good to see it made part of a query URL. Your supported mime types > each have unique filename extensions, why not use those, plus Content- > Location? Or the Alternates header? Or the OPTIONS method? Or, if > not a filename extension, why not a URI parameter? Anything but using > URI queries. In a hypertext-driven API, use <link rel='alternate'/>. > > I could go on. For hours, after spending 30 minutes reviewing your > site. Please don't promote this as a good example of a REST > implementation. > > -Eric > > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
At Tue, 15 Dec 2009 01:57:01 -0700, Eric J. Bowman wrote: > > Ian Davis wrote: > > > > Our platform API for managing RDF storage is RESTful see > > http://n2.talis.com/wiki/API_Site_Map for the docs > > Well, it's a fine HTTP API, but I wouldn't go any further than that, > sorry. I would suggest reading Roy's blog post, here: > > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > Particularly egregious are the Changeset Protocol and Store OAI > Service. While it's good to see Content Negotiation in action, it is > not good to see it made part of a query URL. Your supported mime types > each have unique filename extensions, why not use those, plus Content- > Location? Or the Alternates header? Or the OPTIONS method? Or, if > not a filename extension, why not a URI parameter? Anything but using > URI queries. In a hypertext-driven API, use <link rel='alternate'/>. > > I could go on. For hours, after spending 30 minutes reviewing your > site. Please don't promote this as a good example of a REST > implementation.

If you are referring to the OAI-PMH service described at [1], it is dictated by an existing protocol, OAI-PMH, and any deficiencies in it cannot be blamed on Talis.

1. http://n2.talis.com/wiki/API_Site_Map

best, Erik Hetzner
IMHO, Roy's post below must be taken with a bit of reality mixed in. Like most things in software, it is not an absolute standard to measure the "goodness" of RESTful web services.

Most publicly visible web services are meant for mashing up data. Communicating URIs in representations is one thing, but using them to drive application flow is an entirely different beast. Most mashup scenarios require a fair bit of control over the flow. Take Flickr for example. Even if it is fixed to use HTTP correctly, making it hypermedia driven for application flow does not get Flickr very far. Of course, using hypermedia to drive application flow makes sense when the server can control the flow.

Subbu

On Dec 15, 2009, at 12:57 AM, Eric J. Bowman wrote: > Ian Davis wrote: >> >> Our platform API for managing RDF storage is RESTful see >> http://n2.talis.com/wiki/API_Site_Map for the docs >> > > Well, it's a fine HTTP API, but I wouldn't go any further than that, > sorry. I would suggest reading Roy's blog post, here: > > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > Particularly egregious are the Changeset Protocol and Store OAI > Service. While it's good to see Content Negotiation in action, it is > not good to see it made part of a query URL. Your supported mime types > each have unique filename extensions, why not use those, plus Content- > Location? Or the Alternates header? Or the OPTIONS method? Or, if > not a filename extension, why not a URI parameter? Anything but using > URI queries. In a hypertext-driven API, use <link rel='alternate'/>. > > I could go on. For hours, after spending 30 minutes reviewing your > site. Please don't promote this as a good example of a REST > implementation. > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > >
I can't help it: I see no possible way to implement a non-human-driven client for a service without (in one way or another) classifying the resources the service provides.

For example, consider a helpdesk ticket system: When writing a client that searches for tickets and then updates the foo:status of the individual tickets contained in the result set, I need to make the assumption that the result set contains tickets (and not just resources). In order to be able to make such assumptions, the classification information must be made available by the service. In addition, when client developers should be enabled to develop clients before the services exist, this information is needed as some form of service type description. The specification of application/atomsvc+xml is a good example of such a service type description.

But however this is approached, it essentially comes down to telling the client what kinds of resources (IOW: kinds of application states) to expect on the server. I just cannot code to update the resource foo:status when I have no clue that this user goal is applicable to the resource in the first place.

Does anyone have an idea how to align this (IMHO fact) with the constraint that no information about resource types must be made available to clients in RESTful systems?

Jan

P.S. In human-driven interactions the situation is different: We still have knowledge of the resource type in general (we know a trouble ticket when we see one) but we are not dependent on knowing that the result of some interaction will be a trouble ticket. We can always follow some human-targeted links and make a few hops to reach the trouble ticket resource we expect should be 'somewhere'. M2M clients do not have that luxury (unless we apply some form of AI, I guess).
Jan:
If I understand your description, you are talking about creating a
client that can search for helpdesk tickets (at some known URI, I
assume) and, if one or more tickets come back in the response
representation, is then able to perform some action on the tickets
(change status, etc).
I think this can be done by documenting a media-type constraint that
includes information to identify tickets.
<link href="...." rel="http://www.example.org/rels/ticket" />
Alternately, a similar approach could be used when the response
representation includes more than just links, but actual tickets.
<tickets>
<ticket>
<link href="..." rel="edit" />
...
</ticket>
</tickets>
In both cases, the client can be coded to search the representation
for the proper elements and act accordingly.
All this information can be documented in the media-type used with the
service, including any special element names, rel values, viable
actions on these links, etc.
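As a minimal sketch of such a client (the rel URI is the one from the example above; the collection markup and the use of Ruby's stdlib REXML parser are illustrative assumptions):

```ruby
# Sketch of a client coded against a documented rel value, as described
# above. The rel URI comes from the example; the markup is assumed.
require 'rexml/document'

TICKET_REL = 'http://www.example.org/rels/ticket'

# Return the href of every link whose rel marks it as a ticket.
def ticket_hrefs(xml)
  hrefs = []
  REXML::Document.new(xml).elements.each('//link') do |link|
    hrefs << link.attributes['href'] if link.attributes['rel'] == TICKET_REL
  end
  hrefs
end

xml = <<XML
<tickets>
  <link href="/tickets/1" rel="#{TICKET_REL}" />
  <link href="/tickets/2" rel="#{TICKET_REL}" />
</tickets>
XML

puts ticket_hrefs(xml).inspect  # => ["/tickets/1", "/tickets/2"]
```

Note the client never inspects URI structure; it only follows hrefs whose rel value it was documented to understand.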
<snip>
> Does anyone have an idea how to align this (IMHO fact) with the
> constraint that no information about resource types must be made
> available to clients in RESTful systems?
</snip>
Not sure I understand this last statement. Do you mean media-types?
mca
http://amundsen.com/blog/
On Wed, Dec 16, 2009 at 10:21, Jan Algermissen <algermissen1971@...> wrote:
> I can't help it: I see no possible way to implement a non-human-driven
> client for a service without (in one way or another) classifying the
> resources the service provides.
>
> For example, consider a helpdesk ticket system: When writing a client
> that searches for tickets and then updates the foo:status of the
> individual tickets contained in the result set, I need to make the
> assumption that the result set contains tickets (and not just
> resources). In order to be able to make such assumptions, the
> classification information must be made available by the service. In
> addition, when client developers should be enabled to develop clients
> before the services exist this information is needed as some form of
> service type description. The specification of application/atomsvc+xml
> is a good example of such a service type description.
>
> But however this is approached, it essentially comes down to telling
> the client what kinds of resources (IOW: kinds of application states)
> to expect on the server. I just cannot code to update the resource
> foo:status when I have no clue that this user goal is applicable to
> the resource in the first place.
>
> Does anyone have an idea how to align this (IMHO fact) with the
> constraint that no information about resource types must be made
> available to clients in RESTful systems?
>
> Jan
>
> P.S. In human driven interactions the situation is different: We still
> have knowledge of the resource type in general (we know a trouble
> ticket when we see one) but we are not dependent on knowing that the
> result of some interaction will be a trouble ticket. We can always
> follow some human-targeted links and make a few hops to reach the
> trouble ticket resource we expect should be 'somewhere'. M2M clients
> do not have that luxury (unless we apply some form of AI I guess).
>
>
Hi all, I've spent the last couple of months designing a system to support data-sharing for a malaria research community, and much of this has involved getting to grips with the atom publishing protocol as our standard of choice for metadata persistence services. While this has worked well for the development of initial prototypes, we're soon going to have to deal with a number of issues in moving to a production system, and we'd really appreciate some help in making the right decisions to move forward. I've written up some of these issues at: http://alimanfoo.wordpress.com/2009/12/15/rest-not-so-easy-data-sharing-networks-and-the-atom-publishing-protocol/ Ed Summers kindly pointed me at the rest-discuss list, and I see I have a lot of catching up to do! In the mean time, any help or suggestions of places to start would be extremely gratefully received. We need to move fairly quickly, so pointers to working solutions and existing open-source implementations would be especially useful. Many thanks, Alistair -- Alistair Miles Centre for Genomics and Global Health <http://cggh.org> The Wellcome Trust Centre for Human Genetics Roosevelt Drive Oxford OX3 7BN United Kingdom Web: http://purl.org/net/aliman Email: alimanfoo@... Tel: +44 (0)1865 287669
On Dec 16, 2009, at 4:58 PM, mike amundsen wrote: > Jan: > > If I understand your description, you are talking about creating a > client that can search for helpdesk tickets (at some known URI, I > assume) and, if one or more tickets come back in the response > representation, are then able to perform some action on the tickets > (change status, etc). Yes. But it was really only meant as an example. The point is that the client makes assumptions about what comes back. It assumes it is e.g. a ticket (or an entry as in the case of AtomPub, or an order etc.). IOW: the client does not simply assume the items are resources. > > I think this can be done by documenting a media-type constraint that > includes information to identify tickets. > <link href="...." rel="http://www.example.org/rels/ticket" /> Yes. But when coding the client, you need to make use of the assumption that you should look for that rel and this comes down to the assumption that the items in the collection are 'tickets' (and not just resources). You cannot code the client without knowing that you are going to 'interact with' tickets. > > Alternately, a similar approach could be used when the response > representation includes more than just links, but actual tickets. > <tickets> > <ticket> > <link href="..." rel="edit" /> > ... > </ticket> > </tickets> > > In both cases, the client can be coded to search the representation > for the proper elements and act accordingly. Yes. But (see above) you code based on the assumptions that such 'kinds' of resources exist on the server. When you have a human-facing client it is different, because then you just turn the links etc. found in the representations into buttons (e.g. [edit]) and let the human click on it. In these human-driven interactions, the human makes the same kinds of assumptions (e.g. when I interact with Amazon I assume I can select items and then order them) but the assumptions do not manifest themselves in code. 
For M2M clients they do, and my point is that coding based on such assumptions is inevitably based on the server describing what *kinds* of resource to expect. (The AtomPub spec has a section 'Resource Classification' that does exactly this.)

> > All this information can be documented the media-type used with the > service including any special element names, rel values, etc. viable > actions on these links, etc. > > <snip> >> Does anyone have an idea how to align this (IMHO fact) with the >> constraint that no information about resource types must be made >> available to clients in RESTful systems? > </snip> > > Not sure I understand this last statement. Do you mean media-types?

No, resource types (e.g. AtomPub's collection, member, media-entry...)

Another way to view this is to ask the question: Assuming we had a bunch of media types for online shopping, could you code a machine client for an online shop without knowing that (or assuming that) from the representation of an item there will be a transition that you can follow to order the item and that this will somehow result in an order you can then modify or cancel?

In pseudo code:

GET /item/3
orderURI = ...find order link or form...
POST orderURI

The key issue is the '...find order link or form...' because it manifests the assumption that such a thing MAY/SHOULD/MUST be findable and you cannot possibly base this assumption on knowing that /item/3 is a resource. You assume it is an orderable item. This assumption is equivalent to a 'resource type'.

Jan

> > mca > http://amundsen.com/blog/ > > > > > On Wed, Dec 16, 2009 at 10:21, Jan Algermissen <algermissen1971@... > > wrote: >> I can't help it: I see no possible way to implement a non-human- >> driven >> client for a service without (in one way or another) classifying the >> resources the service provides. 
>> >> For example, consider a helpdesk ticket system: When writing a client >> that searches for tickets and then updates the foo:status of the >> individual tickets contained in the result set, I need to make the >> assumption that the result set contains tickets (and not just >> resources). In order to being able to make such an assumptions, the >> classification information must be made available by the service. In >> addition, when client developers should be enabled to develop clients >> before the services exist this information is needed as some form of >> service type description. The specification of application/atomsrv >> +xml >> is a good example of such a service type description. >> >> But however this is approached, it essentially comes down to telling >> the client what kinds of resources (IOW: kinds of application states) >> to expect on the server. I just cannot code to update the resource >> foo:status when I have now clue that this user goal is applicable to >> the resource in the first place. >> >> Does anyone have an idea how to align this (IMHO fact) with the >> constraint that no information about resource types must be made >> available to clients in RESTful systems? >> >> Jan >> >> P.S. In human driven interactions the situation is different: We >> still >> have knowledge of the resource type iin general (we know a trouble >> ticket when we see one) but we are not dependent on knowing that the >> result of some interaction will be a trouble ticket. We can allways >> follow some human-targeted links and make a few hops to reach the >> trouble ticket resource we expect should be 'somwehere'. M2M clients >> do not have that luxury (unless we apply some form of AI I guess). >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
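Jan's pseudo code can be made concrete to show exactly where the assumption lives (a hypothetical sketch: the rel value 'order', the item markup, and the use of Ruby's stdlib REXML parser are all invented for illustration, and the actual POST is stubbed out):

```ruby
# The '...find order link or form...' step from the pseudo code above.
# The client is written against the assumption that an item
# representation carries a link with rel='order' -- exactly the
# 'resource type' knowledge under discussion. Markup and rel value
# are hypothetical.
require 'rexml/document'

def find_order_uri(item_xml)
  link = REXML::Document.new(item_xml).elements["//link[@rel='order']"]
  link && link.attributes['href']
end

item = <<XML
<item>
  <name>widget</name>
  <link rel="order" href="/orders?item=3" />
</item>
XML

order_uri = find_order_uri(item)
# A real client would now POST to order_uri (e.g. via Net::HTTP).
puts order_uri  # => /orders?item=3
```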
<snip> > Another way to view this is to ask the question: Assuming we had a bunch of > media types for online shopping, could you code a machine client for an > online shop without knowing that (or assuming that) from the representation > of an item there will be a transition that you can follow to order the item > and that this will somehow result in an order you can then modify or cancel? > > In pseudo code > > GET /item/3 > > orderURI = ...find order link or form... > > POST orderURI > > > The key issue is the '...find order link or form...' because it manifests > the assumption that such a thing MAY/SHOULD/MUST be findable and you cannot > possibly base this assumption on knowing that /item/3 is a resource. You > assume it is an orderable item. This assumption is equivalent to a 'resource > type' </snip>

Yep, if we want a machine client to seek a goal (shop online, etc.), we have to have enough out-of-band information ahead of time in order to program it accordingly.

However, I don't think it's the _resources_ that need to be documented. Instead, I think the key ingredient is a media-type that has sufficiently documented hypermedia constraints (link elements and rel values, along w/ important data elements) to communicate the semantics involved. In addition, I see no reason why the media-type needs to be scoped down to the resource. AFAICT, Atom has enough support for link-rels to make this work for a goal-seeking client. XHTML certainly contains enough of the parts as well.

Clients do not need to know all the possible transition states, only the semantic information in the representations returned. And that can be encapsulated in the hypermedia (e.g. link elements and rel values). IOW, as long as a service properly decorates and documents the links in its media-type representations out-of-band, it should be possible to build a state-engine client to do the work. 
If a group of similar service providers (online stores) can agree on the same out-of-band documentation, that state-engine client is now more valuable. mca http://amundsen.com/blog/ On Wed, Dec 16, 2009 at 11:36, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 16, 2009, at 4:58 PM, mike amundsen wrote: > >> Jan: >> >> If I understand your description, you are talking about creating a >> client that can search for helpdesk tickets (at some known URI, I >> assume) and, if one or more tickets come back in the response >> representation, are then able to perform some action on the tickets >> (change status, etc). > > Yes. But it was really only meant as an example. The point is that > the client makes assumptions about what comes back. It assumes it is > e.g. a ticket (or an entry as in the case of AtomPub, or an order etc.). > IOW: the client does not simply assume the items are resources. > >> >> I think this can be done by documenting a media-type constraint that >> includes information to identify tickets. >> <link href="...." rel="http://www.example.org/rels/ticket" /> > > Yes. But when coding the client, you need to make use of the assumption that > you should look for that rel and this comes down to the assumption that the > items in the collection are 'tickets' (and not just resources). You cannot > code the client without knowing that you are going to 'interact with' > tickets. > >> >> Alternately, a similar approach could be used when the response >> representation includes more than just links, but actual tickets. >> <tickets> >> <ticket> >> <link href="..." rel="edit" /> >> ... >> </ticket> >> </tickets> >> >> In both cases, the client can be coded to search the representation >> for the proper elements and act accordingly. > > Yes. But (see above) you code based on the assumptions that such 'kinds' of > resources exist on the server. When you have a human facing client it is > different, because then you just turn the links etc. 
found in the > representations into buttons (e.g. [edit]) and let the human click on it. In > these human driven interactions, the human makes the same kinds of > assumptions (e.g. when I interact with Amazon I assume I can select items > and then order them) but the assumptions do not manifets themselves in code. > For M2M clients they do and my point is that coding based on such > assumptions is inevitably based on the server describing what *kinds* of > resource to expect. (The AtomPub spec has a section 'Resource > Classification' that does axactly this). > > >> >> All this information can be documented the media-type used with the >> service including any special element names, rel values, etc. viable >> actions on these links, etc. >> >> <snip> >>> >>> Does anyone have an idea how to align this (IMHO fact) with the >>> constraint that no information about resource types must be made >>> available to clients in RESTful systems? >> >> </snip> >> >> Not sure I understand this last statement. Do you mean media-types? > > No, resource types (e.g. AtomPub's collection, member, media-entry...) > > > Another way to view this is to ask the question: Assuming we had a bunch of > media types for aonline shopping, could you code a machine client for an > online shop without knowing that (or assuming that) from the representation > of an item there will be a transition that you can follow to order the item > and that this will somehow result in an order you can then modify or cancel? > > In pseudo code > > GET /item/3 > > orderURI = ...find order link or form... > > POST orderURI > > > The key issue is the '...find order link or form...' because it manifests > the assumption that such a thing MAY/SHOULD/MUST be findable and you cannot > possibly base this assumption on knowing that /item/3 is a resource. You > assme it is an orderable item. 
This assumption is equivalent to a 'resource > type' > > Jan > > > > > > > > > >> >> mca >> http://amundsen.com/blog/ >> >> >> >> >> On Wed, Dec 16, 2009 at 10:21, Jan Algermissen <algermissen1971@...> >> wrote: >>> >>> I can't help it: I see no possible way to implement a non-human-driven >>> client for a service without (in one way or another) classifying the >>> resources the service provides. >>> >>> For example, consider a helpdesk ticket system: When writing a client >>> that searches for tickets and then updates the foo:status of the >>> individual tickets contained in the result set, I need to make the >>> assumption that the result set contains tickets (and not just >>> resources). In order to being able to make such an assumptions, the >>> classification information must be made available by the service. In >>> addition, when client developers should be enabled to develop clients >>> before the services exist this information is needed as some form of >>> service type description. The specification of application/atomsrv+xml >>> is a good example of such a service type description. >>> >>> But however this is approached, it essentially comes down to telling >>> the client what kinds of resources (IOW: kinds of application states) >>> to expect on the server. I just cannot code to update the resource >>> foo:status when I have now clue that this user goal is applicable to >>> the resource in the first place. >>> >>> Does anyone have an idea how to align this (IMHO fact) with the >>> constraint that no information about resource types must be made >>> available to clients in RESTful systems? >>> >>> Jan >>> >>> P.S. In human driven interactions the situation is different: We still >>> have knowledge of the resource type iin general (we know a trouble >>> ticket when we see one) but we are not dependent on knowing that the >>> result of some interaction will be a trouble ticket. 
We can allways >>> follow some human-targeted links and make a few hops to reach the >>> trouble ticket resource we expect should be 'somwehere'. M2M clients >>> do not have that luxury (unless we apply some form of AI I guess). >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >>> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > >
You can create a small API that is generic enough to allow job registration,
but your jobs are specific-resource aware:
class DangerSLAJob
  def self.can_handle(r)
    is_ticket(r) && r.status == 'waiting_response' && r.date < Time.now - 2.days
  end

  def self.is_ticket(r)
    r.kind_of? Ticket
  end

  def initialize(r)
    @resource = r
  end

  def execute
    send_mail_about @resource
  end
end

# engine code
jobs = Jobs.list
while true
  resources = Restfulie.at(uri).get
  resources.each do |resource|
    jobs.for(resource).execute
  end
end
So now it's just up to you to create your jobs and register them in your
queue.
Regards
Guilherme Silveira
Caelum | Ensino e Inovação
http://www.caelum.com.br/
2009/12/16 mike amundsen <mamund@...>
>
>
> <snip>
>
> > Another way to view this is to ask the question: Assuming we had a bunch
> of
> > media types for online shopping, could you code a machine client for an
> > online shop without knowing that (or assuming that) from the
> representation
> > of an item there will be a transition that you can follow to order the
> item
> > and that this will somehow result in an order you can then modify or
> cancel?
> >
> > In pseudo code
> >
> > GET /item/3
> >
> > orderURI = ...find order link or form...
> >
> > POST orderURI
> >
> >
> > The key issue is the '...find order link or form...' because it manifests
> > the assumption that such a thing MAY/SHOULD/MUST be findable and you
> cannot
> > possibly base this assumption on knowing that /item/3 is a resource. You
> > assume it is an orderable item. This assumption is equivalent to a
> 'resource
> > type'
> </snip>
>
> Yep, if we want a machine client to seek a goal (shop online, etc.),
> we have to have enough out-of-band information ahead of time in order
> to program it accordingly.
>
> However, I don't think it's the _resources_ that need to be
> documented. Instead, I think the key ingredient is a media-type that
> has sufficiently documented hypermedia constraints (link elements and
> rel values, along w/ important data elements) to communicate the
> semantics involved. In addition, I see no reason why the media-type
> needs to be scoped down to the resource. AFAICT, Atom has enough
> support for link-rels to make this work for a goal-seeking client.
> XHTML certainly contains enough of the parts as well.
>
> Clients do not need to know all the possible transition states, only
> the semantic information in the representations returned. And that can
> be encapsulated in the hypermedia (e.g. link elements and rel values).
> IOW, as long as a service properly decorates and documents the links
> in its media-type representations out-of-band, it should be possible
> to build a state-engine client to do the work. If a group of similar
> service providers (online stores) can agree on the same out-of-band
> documentation, that state-engine client is now more valuable.
>
>
> mca
> http://amundsen.com/blog/
>
> On Wed, Dec 16, 2009 at 11:36, Jan Algermissen <algermissen1971@...>
> wrote:
> >
> > On Dec 16, 2009, at 4:58 PM, mike amundsen wrote:
> >
> >> Jan:
> >>
> >> If I understand your description, you are talking about creating a
> >> client that can search for helpdesk tickets (at some known URI, I
> >> assume) and, if one or more tickets come back in the response
> >> representation, are then able to perform some action on the tickets
> >> (change status, etc).
> >
> > Yes. But it was really only meant as an example. The point is that
> > the client makes assumptions about what comes back. It assumes it is
> > e.g. a ticket (or an entry as in the case of AtomPub, or an order etc.).
> > IOW: the client does not simply assume the items are resources.
> >
> >>
> >> I think this can be done by documenting a media-type constraint that
> >> includes information to identify tickets.
> >> <link href="...." rel="http://www.example.org/rels/ticket" />
> >
> > Yes. But when coding the client, you need to make use of the assumption
> > that you should look for that rel, and this comes down to the assumption
> > that the items in the collection are 'tickets' (and not just resources).
> > You cannot code the client without knowing that you are going to
> > 'interact with' tickets.
> >
> >>
> >> Alternately, a similar approach could be used when the response
> >> representation includes more than just links, but actual tickets.
> >> <tickets>
> >> <ticket>
> >> <link href="..." rel="edit" />
> >> ...
> >> </ticket>
> >> </tickets>
> >>
> >> In both cases, the client can be coded to search the representation
> >> for the proper elements and act accordingly.
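Mike's "search the representation for the proper elements" approach can be sketched roughly as follows; the <tickets> document shape and the "edit" rel value are illustrative assumptions, not taken from any published media type:

```python
# Sketch: scan a representation for per-ticket "edit" links.
# ASSUMPTIONS: the <tickets>/<ticket> element names and the "edit"
# rel value are illustrative, not from any published media type.
import xml.etree.ElementTree as ET

doc = """
<tickets>
  <ticket>
    <status>open</status>
    <link href="http://example.org/tickets/1" rel="edit" />
  </ticket>
  <ticket>
    <status>closed</status>
    <link href="http://example.org/tickets/2" rel="edit" />
  </ticket>
</tickets>
"""

def edit_links(xml_text):
    """Return the href of every link the server marked rel='edit',
    without assuming anything about the URI structure itself."""
    root = ET.fromstring(xml_text)
    return [link.attrib["href"]
            for ticket in root.findall("ticket")
            for link in ticket.findall("link")
            if link.attrib.get("rel") == "edit"]

links = edit_links(doc)
```

The client is coded against element names and rel values documented with the media type, never against the URIs themselves.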
> >
> > Yes. But (see above) you code based on the assumption that such 'kinds' of
> > resources exist on the server. When you have a human facing client it is
> > different, because then you just turn the links etc. found in the
> > representations into buttons (e.g. [edit]) and let the human click on it.
> > In these human driven interactions, the human makes the same kinds of
> > assumptions (e.g. when I interact with Amazon I assume I can select items
> > and then order them) but the assumptions do not manifest themselves in
> > code. For M2M clients they do, and my point is that coding based on such
> > assumptions is inevitably based on the server describing what *kinds* of
> > resource to expect. (The AtomPub spec has a section 'Resource
> > Classification' that does exactly this).
> >
> >
> >>
> >> All this information can be documented the media-type used with the
> >> service including any special element names, rel values, etc. viable
> >> actions on these links, etc.
> >>
> >> <snip>
> >>>
> >>> Does anyone have an idea how to align this (IMHO fact) with the
> >>> constraint that no information about resource types must be made
> >>> available to clients in RESTful systems?
> >>
> >> </snip>
> >>
> >> Not sure I understand this last statement. Do you mean media-types?
> >
> > No, resource types (e.g. AtomPub's collection, member, media-entry...)
> >
> >
> > Another way to view this is to ask the question: Assuming we had a bunch
> > of media types for online shopping, could you code a machine client for an
> > online shop without knowing that (or assuming that) from the representation
> > of an item there will be a transition that you can follow to order the
> > item and that this will somehow result in an order you can then modify or
> > cancel?
> >
> > In pseudo code
> >
> > GET /item/3
> >
> > orderURI = ...find order link or form...
> >
> > POST orderURI
> >
> >
> > The key issue is the '...find order link or form...' because it manifests
> > the assumption that such a thing MAY/SHOULD/MUST be findable and you
> > cannot possibly base this assumption on knowing that /item/3 is a
> > resource. You assume it is an orderable item. This assumption is
> > equivalent to a 'resource type'.
> >
> > Jan
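Jan's pseudo-code, fleshed out as a rough Python sketch; the item representation, the rel URI, and the helper name are hypothetical, and only the link-finding step is shown:

```python
# Sketch of the pseudo-code above. ASSUMPTIONS: the <item> document
# shape, the rel URI, and the helper name are hypothetical.
import xml.etree.ElementTree as ET

ORDER_REL = "http://example.org/rels/order"

item_doc = """
<item>
  <name>Hiking boots</name>
  <link href="http://example.org/orders" rel="http://example.org/rels/order" />
</item>
"""

def find_order_uri(xml_text):
    """The '...find order link or form...' step: the client is coded
    against the rel value, never against the shape of the URI."""
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.attrib.get("rel") == ORDER_REL:
            return link.attrib["href"]
    return None  # the transition may be absent; the client must cope

order_uri = find_order_uri(item_doc)
# a real client would now POST to order_uri
```

The assumption Jan is pointing at lives in ORDER_REL: coding against that rel presumes /item/3 is an orderable item, not just a resource.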
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >>
> >> mca
> >> http://amundsen.com/blog/
> >>
> >>
> >>
> >>
> >> On Wed, Dec 16, 2009 at 10:21, Jan Algermissen <algermissen1971@...>
> >
> >> wrote:
> >>>
> >>> I can't help it: I see no possible way to implement a non-human-driven
> >>> client for a service without (in one way or another) classifying the
> >>> resources the service provides.
> >>>
> >>> For example, consider a helpdesk ticket system: When writing a client
> >>> that searches for tickets and then updates the foo:status of the
> >>> individual tickets contained in the result set, I need to make the
> >>> assumption that the result set contains tickets (and not just
> >>> resources). In order to be able to make such an assumption, the
> >>> classification information must be made available by the service. In
> >>> addition, when client developers should be enabled to develop clients
> >>> before the services exist, this information is needed as some form of
> >>> service type description. The specification of application/atomsvc+xml
> >>> is a good example of such a service type description.
> >>>
> >>> But however this is approached, it essentially comes down to telling
> >>> the client what kinds of resources (IOW: kinds of application states)
> >>> to expect on the server. I just cannot code to update the resource
> >>> foo:status when I have no clue that this user goal is applicable to
> >>> the resource in the first place.
> >>>
> >>> Does anyone have an idea how to align this (IMHO fact) with the
> >>> constraint that no information about resource types must be made
> >>> available to clients in RESTful systems?
> >>>
> >>> Jan
> >>>
> >>> P.S. In human driven interactions the situation is different: We still
> >>> have knowledge of the resource type in general (we know a trouble
> >>> ticket when we see one) but we are not dependent on knowing that the
> >>> result of some interaction will be a trouble ticket. We can always
> >>> follow some human-targeted links and make a few hops to reach the
> >>> trouble ticket resource we expect should be 'somewhere'. M2M clients
> >>> do not have that luxury (unless we apply some form of AI I guess).
> >>>
> >>>
> >>> ------------------------------------
> >>>
> >>> Yahoo! Groups Links
> >>>
> >>>
> >>>
> >>>
> >
> > --------------------------------------
> > Jan Algermissen
> >
> > Mail: algermissen@...
> > Blog: http://algermissen.blogspot.com/
> > Home: http://www.jalgermissen.com
> > --------------------------------------
> >
> >
> >
> >
>
>
>
On Wed, Dec 16, 2009 at 9:11 AM, mike amundsen <mamund@...> wrote: > > Another way to view this is to ask the question: Assuming we had a bunch of > > media types for aonline shopping, could you code a machine client for an > > online shop without knowing that (or assuming that) from the representation > > of an item there will be a transition that you can follow to order the item > > and that this will somehow result in an order you can then modify or cancel? > > > > In pseudo code > > > > GET /item/3 > > > > orderURI = ...find order link or form... > > > > POST orderURI > > > > > > The key issue is the '...find order link or form...' because it manifests > > the assumption that such a thing MAY/SHOULD/MUST be findable and you cannot > > possibly base this assumption on knowing that /item/3 is a resource. You > > assme it is an orderable item. This assumption is equivalent to a 'resource > > type' > </snip> > > Yep, if we want a machine client to seek a goal (shop online, etc.), > we have to have enough out-of-band information ahead of time in order > to program it accordingly. > > However, I don't think it's the _resources_ that need to be > documented. Instead, I think the key ingredient is a media-type that > has sufficiently documented hypermedia constraints (link elements and > rel values, along w/ important data elements) to communicate the > semantics involved. This. From earlier, Jan wrote: > Does anyone have an idea how to align this (IMHO fact) with the > constraint that no information about resource types must be made > available to clients in RESTful systems? Where does this expectation come from? That somehow you can go in to a system "blind" and be able to do anything whatsoever with it? The "out of band" information is about making assumptions about resources that the hypermedia doesn't specifically allow for. Basically, the resource should have links telling the client what the next steps are that can be taken. 
Just because you "know" about some URI sequence, if you didn't extract this from a representation, then you shouldn't be using it. The only URIs etc. that you should "know" are published, documented endpoints. Anything else is organic, and should only be used in the context presented with the resource. For example, I've been to sites with Search functions or whatever. When I see the results, they show, say, 10 items and offer "next" buttons. Being lazy, I see in the actual URL there's something like "pageSize=10". So, more than once I've simply replaced the 10 with, say, 50, and many times the site replies with a new page with 50 items per page instead of 10. My "hacking" of that URL is "out of band" information. The result did not have an actual link for "view 50 items per page". I dissected and hacked the URL, which should be treated as opaque. But the fact that there's a "rel='next'" link on that page -- how would I as a client "know" that means "go to the next page"? I wouldn't, I can't intuit that; it needs to be documented somewhere, and that awareness needs to be coded in to my client. The key, as Mike mentioned, is the media-types. Those are what need to be "understood" by the clients. And this understanding is out of band information. The media-type would have informed me what the "rel='next'" link means. How do you think you can interpret XML or HTML or JPEG or anything at all? Because you have code that understands these media types and what you can do with them. Now, if your media-type is application/vnd.order+xml, and you understand that media type, then your client "knows" how links are described, what "rel" tags to follow to do what, etc. This information will ALWAYS have to be conveyed "out of band", notably for MtoM transactions. Regards, Will Hartung (willh@...)
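Will's "follow the link, don't hack the URL" point can be sketched like this; the document shape, the canned pages standing in for HTTP responses, and the function names are all hypothetical:

```python
# Sketch: page through results by following rel="next" links the
# server provides, treating every URI as opaque. ASSUMPTIONS: the
# document shape and the canned PAGES dict (standing in for HTTP
# responses) are hypothetical.
import xml.etree.ElementTree as ET

PAGES = {
    "/search?q=boots":
        "<results><link rel='next' href='/search?q=boots&amp;page=2'/></results>",
    "/search?q=boots&page=2":
        "<results></results>",  # no rel='next': this is the last page
}

def next_uri(xml_text):
    """Return the href of the rel='next' link, or None if absent."""
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.attrib.get("rel") == "next":
            return link.attrib["href"]
    return None

def walk(start_uri):
    """Visit every page reachable via rel='next', never parsing or
    'hacking' the URIs (no pageSize tweaking)."""
    visited, uri = [], start_uri
    while uri is not None:
        visited.append(uri)
        uri = next_uri(PAGES[uri])
    return visited

pages_seen = walk("/search?q=boots")
```

What "next" means is exactly the out-of-band knowledge the media-type documentation has to supply; the URIs themselves stay opaque to the client.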
On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: > The media-type would have informed me what > the "rel='next'" link means. Sure. But how did you know that it makes sense to write code that looks for the rel="next" in the first place? Or: how did you know it makes sense to expect that the response would be available in application/atom+xml? As a human, you do the GET, see what is returned and if you understand it. Then, if you do, you know what you can do next. All fine. When coding a client, however, you need to know at design time that it makes sense to expect application/atom+xml to be returned. There might be other possible media types available, too, but you need some source of information that is the source for this expectation. AtomPub, for example, does state that collections are available as application/atom+xml feed documents. And it needs to do so because otherwise, building a client would not be possible. AtomPub cannot make that statement without using the term 'collection' and specifying how a client knows that a resource is a (sic!!) collection[1]. An AtomPub collection is a resource kind (or class or category or role type or type or whatever you name it). Jan [1] And it does so (more or less implicitly) by saying that whatever URIs you find in the href attributes of a service document's collection elements refer to collections. When I see <collection href="http://foo/orders"> I know a bunch of things about the resource identified by http://foo/orders. E.g. I know that a GET on it will return an Atom feed.
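Jan's footnote [1] can be illustrated in code: in an AtomPub (RFC 5023) service document, the hrefs of its collection elements identify collections, and a GET on one is expected to return an Atom feed. The example document below is minimal and made up, but the namespace is AtomPub's real one:

```python
# Jan's footnote in code: in an AtomPub (RFC 5023) service document,
# whatever URI appears in a collection element's href identifies a
# collection, and a GET on it is expected to return an Atom feed.
# The example document is minimal; the namespace is AtomPub's real one.
import xml.etree.ElementTree as ET

APP_NS = "{http://www.w3.org/2007/app}"

service_doc = """
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Main</atom:title>
    <collection href="http://foo/orders">
      <atom:title>Orders</atom:title>
    </collection>
  </workspace>
</service>
"""

def collection_uris(xml_text):
    """Everything listed here is, by the spec's classification,
    a collection -- the 'resource kind' Jan is pointing at."""
    root = ET.fromstring(xml_text)
    return [c.attrib["href"] for c in root.iter(APP_NS + "collection")]

uris = collection_uris(service_doc)
```

The classification happens entirely in the spec plus the service document; the client never needs to inspect the URI string itself.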
<snip> ...how did you know that it makes sense to write code that looks for the rel="next" in the first place? </snip> The same way developers building Web browser clients know to write code that looks for the rel="stylesheet" in the <link> element [1], [2], [3]. It sounds like this line of questioning is about how to go about properly documenting media type semantics in a way that is helpful at design time for those building clients. Subbu published an article one year ago today on this very subject [4]. While I differ slightly on the details, the major parts are there. Anyone else have a best practice on documenting media types and link rels? mca http://amundsen.com/blog/ [1] http://www.w3.org/TR/html4/struct/links.html#edef-LINK [2] http://www.w3.org/TR/html4/types.html#h-6.12 [3] http://www.w3.org/TR/html4/present/styles.html#style-external [4] http://www.infoq.com/articles/subbu-allamaraju-rest On Wed, Dec 16, 2009 at 15:27, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: > >> The media-type would have informed me what >> the "rel='next'" link means. > > Sure. But how did you know that it makes sense to write code that looks for > the rel="next" in the first place? Or: how did you know it makes sense to > expect that the response would be available in application/atom+xml? > > As a human, you do the GET, see what is returned and if you understand it. > Then, if you do, you know what you can do next. All fine. > > When coding a client, however, you need to know at design time that it makes > sense to expect application/atom+xml to be returned. There might be other > possible media types available, too, but you need some source of information > that is the source for this expectation. > > AtomPub, for example, does state that collections are available as > application/atom+xml feed documents. And it needs to do so because > otherwise, building a client would not be possible. 
> > AtomPub cannot make that statement without using the term 'collection' and > specifying how a client knows that a resource is a (sic!!) collection[1]. An > AtomPub collection is a resource kind (or class or category or role type or > type or whatever you name it). > > Jan > > > [1] And it does so (more or less implicitly) by saying that whatever URIs > you find in the href attributes of a service document's collection elements > refer to collections. When I see > > <collection href="http://foo/orders"> I know a bunch of things about the > resource identified by http://foo/orders. E.g. I know that a GET on it will > return an Atom feed. > >
But if the page is in Chinese, I wouldn't know what to do as I don't understand Chinese. I'd be guessing if I clicked on any links. I know what to do because I understand English (assuming the page is in English) and I expect a page in English to show up. So I have an information store of English words that I refer to when I see the page; this "aids" me in deciding the next state transition, i.e. the next thing to click on. Even in the human web, there is a "form" of coupling. Not so sure why it's any different anywhere else.
On Dec 16, 2009, at 9:45 PM, amaeze77 wrote: > But if the page is in Chinese, I wouldn't know what to do as I don't > understand Chinese. I'd be guessing if I clicked on any links. I > know what to do because I understand English (assuming the page is > in English) and I expect a page in English to show up. So I have an > information store of English words that I refer to when I see the > page; this "aids" me in deciding the next state transition, i.e. the > next thing to click on. > > Even in the human web, there is a "form" of coupling. Not so sure > why it's any different anywhere else. The problem is not different at all, but its effect is. While humans can react flexibly enough to preserve REST's 'promise' of independent evolvability, machine clients create a coupling that is IMHO very easy to overlook. Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 16, 2009, at 9:45 PM, mike amundsen wrote: > <snip> > ...how did you know that it makes sense to write code that looks for > the rel="next" in the first place? > </snip> > > The same way developers building Web browser clients know to write > code that looks for the rel="stylesheet" in the <link> element [1], > [2], [3]. > > It sounds like this line of questioning is about how to go about > properly documenting media type semantics in a way that is helpful at > design time for those building clients. Yep. And AtomPub does a good job and serves as a lucid example. My point is that such media type semantics involve classification of resources and that this is something that is not RESTful. While AtomPub is not very constraining because it emphasizes server flexibility, for other problem spaces there might be a need for relatively strict specifications. I suspect that the resource classifications used in such specifications will create a kind of coupling that brings us very close to non-uniform interface style coupling. Viewed from another angle: Someone who is in charge of evolving a service is free to change the service in any way, as long as it meets all constraints defined in the used hypermedia specs. No client will break, ever. I think that for M2M scenarios you either cannot build any clients at all or the hypermedia specs inevitably contain very strict rules. Jan > Subbu published an article one > year ago today on this very subject [4]. While I differ slightly on > the details, the major parts are there. > > Anyone else have a best practice on documenting media types and link > rels? > > mca > http://amundsen.com/blog/ > > [1] http://www.w3.org/TR/html4/struct/links.html#edef-LINK > [2] http://www.w3.org/TR/html4/types.html#h-6.12 > [3] http://www.w3.org/TR/html4/present/styles.html#style-external > [4] http://www.infoq.com/articles/subbu-allamaraju-rest > > > On Wed, Dec 16, 2009 at 15:27, Jan Algermissen <algermissen1971@... 
> > wrote: >> >> On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: >> >>> The media-type would have informed me what >>> the "rel='next'" link means. >> >> Sure. But how did you know that it makes sense to write code that >> looks for >> the rel="next" in the first place? Or: how did you know it makes >> sense to >> expect that the response would be available in application/atom+xml? >> >> As a human, you do the GET, see what is returned and if you >> understand it. >> Then, if you do, you know what you can do next. All fine. >> >> When coding a client, however, you need to know at design time that >> it makes >> sense to expect application/atom+xml to be returned. There might be >> other >> possible media types available, too, but you need some source of >> information >> that is the source for this expectation. >> >> AtomPub, for example, does state that collections are available as >> application/atom+xml feed documents. And it needs to do so because >> otherwise, building a client would not be possible. >> >> AtomPub cannot make that statement without using the term >> 'collection' and >> specifying how a client knows that a resource is a (sic!!) >> collection[1]. An >> AtomPub collection is a resource kind (or class or category or role >> type or >> type or whatever you name it). >> >> Jan >> >> >> [1] And it does so (more or less implicitly) by saying that >> whatever URIs >> you find in the href attributes of a service document's collection >> elements >> refers to collections. When I see >> >> <collection href="http://foo/orders"> I know a bunch of things >> about the >> resource identfied by http://foo/orders. E.g. I know that a GET on >> it will >> return an Atom feed. >> >> >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
I think we're mostly in agreement, but I'm still not getting this: <snip> My point is that such media type semantics involve classification of resources and that this is something that is not RESTful. </snip> No need to elaborate, I think it's me<g>. I'll think on this some more and may ask you about it again later. mca http://amundsen.com/blog/ On Wed, Dec 16, 2009 at 16:10, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 16, 2009, at 9:45 PM, mike amundsen wrote: > >> <snip> >> ...how did you know that it makes sense to write code that looks for >> the rel="next" in the first place? >> </snip> >> >> The same way developers building Web browser clients know to write >> code that looks for the rel="stylesheet" in the <link> element [1], >> [2], [3]. >> >> It sounds like this line of questioning is about how to go about >> properly documenting media type semantics in a way that is helpful at >> design time for those building clients. > > Yep. And AtomPub does a good job and serves as a lucid example. My point is > that such media type semantics involve classification of resources and that > this is something that is not RESTful. > > While AtomPub is not very constraining because it emphazises server > flexibility for other problem spaces there might be a need for relatively > strict specifications. I suspect that the resource classifications used in > such specifications will create a kind of coupling that brings us very close > to non-uniform interface style coupling. > > Viewed from another angle: > > Someone who is in charge of evolving a service is free to change the service > in any way, as long as it meets all constraints defined in the used > hypermedia specs. No client will break, ever. I think that for M2M scenarios > you either cannot build any clients at all or the hypermedia specs > inevitably contain very strict rules. > > Jan > > >> Subbu published an article one >> year ago today on this very subject [4]. 
While I differ slightly on >> the details, the major parts are there. >> >> Anyone else have a best practice on documenting media types and link rels? >> >> mca >> http://amundsen.com/blog/ >> >> [1] http://www.w3.org/TR/html4/struct/links.html#edef-LINK >> [2] http://www.w3.org/TR/html4/types.html#h-6.12 >> [3] http://www.w3.org/TR/html4/present/styles.html#style-external >> [4] http://www.infoq.com/articles/subbu-allamaraju-rest >> >> >> On Wed, Dec 16, 2009 at 15:27, Jan Algermissen <algermissen1971@...> >> wrote: >>> >>> On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: >>> >>>> The media-type would have informed me what >>>> the "rel='next'" link means. >>> >>> Sure. But how did you know that it makes sense to write code that looks >>> for >>> the rel="next" in the first place? Or: how did you know it makes sense to >>> expect that the response would be available in application/atom+xml? >>> >>> As a human, you do the GET, see what is returned and if you understand >>> it. >>> Then, if you do, you know what you can do next. All fine. >>> >>> When coding a client, however, you need to know at design time that it >>> makes >>> sense to expect application/atom+xml to be returned. There might be other >>> possible media types available, too, but you need some source of >>> information >>> that is the source for this expectation. >>> >>> AtomPub, for example, does state that collections are available as >>> application/atom+xml feed documents. And it needs to do so because >>> otherwise, building a client would not be possible. >>> >>> AtomPub cannot make that statement without using the term 'collection' >>> and >>> specifying how a client knows that a resource is a (sic!!) collection[1]. >>> An >>> AtomPub collection is a resource kind (or class or category or role type >>> or >>> type or whatever you name it). 
>>> >>> Jan >>> >>> >>> [1] And it does so (more or less implicitly) by saying that whatever URIs >>> you find in the href attributes of a service document's collection >>> elements >>> refers to collections. When I see >>> >>> <collection href="http://foo/orders"> I know a bunch of things about the >>> resource identfied by http://foo/orders. E.g. I know that a GET on it >>> will >>> return an Atom feed. >>> >>> >>> >>> >>> >>> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > >
"Or: how did you know it makes sense to expect that the response would be available in application/atom+xml?" When you made the request, the accept header would have specified which media-type it understands. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html. If you say */* you get anything back. Once you have the content from the server you do need to follow the strict definition. Consider media-types like jpeg or png. There isn't much wiggle room when interpreting these. I believe the same applies to xml, xhtml, atom+xml, atomsvc+xml, etc. as well. -Noah On Wed, Dec 16, 2009 at 12:27 PM, Jan Algermissen <algermissen1971@...>wrote: > > On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: > > > The media-type would have informed me what > > the "rel='next'" link means. > > Sure. But how did you know that it makes sense to write code that > looks for the rel="next" in the first place? Or: how did you know it > makes sense to expect that the response would be available in > application/atom+xml? > > As a human, you do the GET, see what is returned and if you understand > it. Then, if you do, you know what you can do next. All fine. > > When coding a client, however, you need to know at design time that it > makes sense to expect application/atom+xml to be returned. There might > be other possible media types available, too, but you need some source > of information that is the source for this expectation. > > AtomPub, for example, does state that collections are available as > application/atom+xml feed documents. And it needs to do so because > otherwise, building a client would not be possible. > > AtomPub cannot make that statement without using the term 'collection' > and specifying how a client knows that a resource is a (sic!!) > collection[1]. An AtomPub collection is a resource kind (or class or > category or role type or type or whatever you name it). 
> > Jan > > > [1] And it does so (more or less implicitly) by saying that whatever > URIs you find in the href attributes of a service document's > collection elements refer to collections. When I see > > <collection href="http://foo/orders"> I know a bunch of things about > the resource identified by http://foo/orders. E.g. I know that a GET on > it will return an Atom feed. > >
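Noah's content-negotiation point in miniature: the client declares up front, via the Accept header, which media types it can process. A sketch using Python's standard library; the URI is a placeholder and no request is actually sent:

```python
# Sketch of the Accept-header step: the client states which media
# types it can process before the exchange happens. The URI is a
# placeholder; the request is only built, not sent.
import urllib.request

req = urllib.request.Request(
    "http://example.org/orders",
    headers={"Accept": "application/atom+xml, application/xml;q=0.5"},
)
# A server that cannot satisfy any listed type may answer 406.
accept_header = req.get_header("Accept")
```

This only addresses *selecting* among media types; as Jan notes, it does not explain how the client came to understand any of them in the first place.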
On Wed, Dec 16, 2009 at 12:27 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: > >> The media-type would have informed me what >> the "rel='next'" link means. > > Sure. But how did you know that it makes sense to write code that looks for > the rel="next" in the first place? Or: how did you know it makes sense to > expect that the response would be available in application/atom+xml? Part of the problem here is simply atom+xml. The problem is that it's too generic, and describes mostly a payload rather than strict semantics. That's what you're bumping in to here. Link relationships in atom are just that, relationships. But, in theory, when the media type is documented, the semantics of the rels will be defined. Deciding to look for a particular link type on your machine varies. For example, if you have a media-type that has links for paging, then it's not untoward to have a client coded to check for links tagged with 'next' rels if it wants more of the resource being served. But you can see how this is an optional link -- there may well not be any more available. As for how I know to expect a response to be available in response to the link, the link can/should have a type associated with it telling me that's what to expect. > As a human, you do the GET, see what is returned and if you understand it. > Then, if you do, you know what you can do next. All fine. > > When coding a client, however, you need to know at design time that it makes > sense to expect application/atom+xml to be returned. There might be other > possible media types available, too, but you need some source of information > that is the source for this expectation. The entry points of the service need to be externally documented; part of that documentation is the media type being returned, or available. If you want alternate forms, then in theory you should be able to fall back on content negotiation. 
And if you end up at an impasse where the server supplies one type but the client only accepts another, then that's what you have -- impasse. The client gets to explode spectacularly and start paging operators, or whatever its failure mode is. In this sense, yea, clients are tightly coupled to their interpretations of the media types. If it's an extensible media type, then ideally you have an extensible client in the sense that the client won't lose functionality in the long term, but it won't necessarily be able to leverage any new capability as manifested by new entries in an updated media type. If the underlying media type changes dramatically, then the media type should change, and the client should end up at an impasse (what it can process vs what the server can provide are incompatible). You cannot eliminate this coupling. All you can do is make the coupling less painful by using extensible media types that accept change more readily and ensure backward compatibility, and by implementing friendly services that don't simply "go dark" without solid warning, announcements, etc. and themselves maintain some modicum of backward compatibility (i.e. still accepting the old protocols during some transition period). In the end, the server can send a 406 Not Acceptable with a nice description of "Yea, we changed this 2 years ago, here's a link to the new documentation". So, how does REST then differ from some other mechanism? Where's the promise? The promise comes from the fact that the media types are the details of the system, but those details are actually pretty high level. The hard details, the links, the host names, etc. those are gone. Those can change however and whenever you want, and compliant clients will continue to function. If you add a new media type to your system, then once the clients actually understand that media type, then it's back in the game. 
For example, you can have a new media type, application/yourapp+xml, and it changes to application/yournewapp+xml. If a client understands both of those, then as you roll out this new media type throughout your infrastructure, the clients will work with both. You could have a new payload link to a service that provides the old payload, and the client will say "okey dokey" because it understands them both. And when you finally upgrade that older service to the new type, the client doesn't change -- it already knows the new payload; all you did was change that part so that, instead of pointing to an older server, it points to a modern service. The client doesn't know anything about what host or app or whatever is being called, it knows opaque URIs and media types, so this contract holds. Combining uniform interface with "well known" media types lets the underlying infrastructure remain nimble. But it also highlights how you should go about selecting and designing the media types for your application. Regards, Will Hartung (willh@...)
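Will's versioning example can be sketched as a dispatch table keyed by media type; the two media type names are the ones from his example, and the handler functions are hypothetical stand-ins:

```python
# Sketch: a client that understands both the old and the new media
# type keeps working while the server migrates. The media type names
# are from Will's example; the handlers are hypothetical stand-ins.
def handle_old(body):
    return ("old-format", body)

def handle_new(body):
    return ("new-format", body)

HANDLERS = {
    "application/yourapp+xml": handle_old,
    "application/yournewapp+xml": handle_new,
}

def dispatch(content_type, body):
    """Pick a handler by Content-Type; an unknown type is the
    'impasse' case -- roughly what a 406 signals on the server side."""
    handler = HANDLERS.get(content_type)
    if handler is None:
        raise ValueError("unsupported media type: " + content_type)
    return handler(body)

result = dispatch("application/yournewapp+xml", "<order/>")
```

Rolling out the new type then means adding one entry to the table; hosts and URI structure never enter into it.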
On Dec 16, 2009, at 11:48 PM, Noah Campbell wrote: > "Or: how did you know it makes sense to expect that the response > would be available in application/atom+xml?" > > When you made the request, the accept header would have specified > which media-type it understands. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html > . If you say */* you get anything back. > > Once you have the content from the server you do need to follow the > strict definition. Consider media-types like jpeg or png. There > isn't much wiggle room when interpreting these. I believe the same > applies to xml, xhtml, atom+xml, atomsvc+xml, etc. as well. > Umm...but that means I can only implement a client for the service when the service already exists. This would make it impossible to discover services at runtime or to replace existing services. In both cases, the client could only be implemented by inspecting the service. Might be sufficient on the Web, yes. But not in an enterprise context. Jan > -Noah > > On Wed, Dec 16, 2009 at 12:27 PM, Jan Algermissen <algermissen1971@... > > wrote: > > On Dec 16, 2009, at 7:39 PM, Will Hartung wrote: > > > The media-type would have informed me what > > the "rel='next'" link means. > > Sure. But how did you know that it makes sense to write code that > looks for the rel="next" in the first place? Or: how did you know it > makes sense to expect that the response would be available in > application/atom+xml? > > As a human, you do the GET, see what is returned and if you understand > it. Then, if you do, you know what you can do next. All fine. > > When coding a client, however, you need to know at design time that it > makes sense to expect application/atom+xml to be returned. There might > be other possible media types available, too, but you need some source > of information that is the source for this expectation. > > AtomPub, for example, does state that collections are available as > application/atom+xml feed documents. 
And it needs to do so because > otherwise, building a client would not be possible. > > AtomPub cannot make that statement without using the term 'collection' > and specifying how a client knows that a resource is a (sic!!) > collection[1]. An AtomPub collection is a resource kind (or class or > category or role type or type or whatever you name it). > > Jan > > > [1] And it does so (more or less implicitly) by saying that whatever > URIs you find in the href attributes of a service document's > collection elements refer to collections. When I see > > <collection href="http://foo/orders"> I know a bunch of things about > the resource identified by http://foo/orders. E.g. I know that a GET on > it will return an Atom feed. > > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
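Jan's point [1] can be sketched in a few lines of Python: a client only needs the RFC 5023 rule "every href on an app:collection element identifies a collection" to discover where it may expect Atom feeds. The service document below reuses his http://foo/orders example; the function name is mine:

```python
import xml.etree.ElementTree as ET

# The AtomPub (RFC 5023) namespace for service documents.
APP = "{http://www.w3.org/2007/app}"

SERVICE_DOC = """<service xmlns="http://www.w3.org/2007/app">
  <workspace>
    <collection href="http://foo/orders"/>
  </workspace>
</service>"""

def collection_hrefs(service_doc):
    # Every href found on an app:collection element identifies a
    # resource the client may treat as an AtomPub collection, i.e. a
    # GET on it is expected to yield an application/atom+xml feed.
    root = ET.fromstring(service_doc)
    return [c.attrib["href"] for c in root.iter(APP + "collection")]
```

This is exactly the resource classification Jan is talking about: the client is coded against the *kind* "collection", not against any particular server's URIs.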
> The problem is not different at all, but its effect is. While humans > can react flexibly enough to preserve REST's 'promise' of independent > evolvability, machine clients create a coupling that is IMHO very > easy > to overlook. Please expound on this "react flexibly" concept? What does that mean?
> Umm...but that means I can only implement a client for the service > when the service already exists. This would make it impossible to > discover services at runtime or to replace existing services. In both > cases, the client could only be implemented by inspecting the service. > > Might be sufficient on the Web, yes. But not in an enterprise context. Does a browser discover services at runtime? I think it does. It has no preconceived notion of any services per se. It just understands how to handle a class of media types, and as long as a discovered service delivers a media type it can understand, it works. Or is this different?
On Wed, Dec 16, 2009 at 4:13 PM, amaeze77 <amaeze@...> wrote: > Does a browser discover services at runtime? I think it does. It has no preconceived > notion of any services per se. It just understands how to handle a class of media types > and as long as a service that is discovered delivers a media type it can understand, > it works. Or is this different? No, a browser doesn't do anything. All a browser does is render content. A human being discovers services by typing "buy hiking boots" into the Google Search Service (which has a known endpoint), and then following links. Browsers do nothing with services save rendering or downloading content. Humans can use the browser to consume services though. There's this fantasy that a REST system offers the same discoverability and adaptability of the human Web to machine clients performing MtoM (machine to machine) transactions. That's the conflict Jan is dealing with. Regards, Will Hartung (willh@...)
On Wed, Dec 16, 2009 at 3:32 PM, Jan Algermissen <algermissen1971@...> wrote: > Umm...but that means I can only implement a client for the service when the > service already exists. This would make it impossible to discover services > at runtime or to replace existing services. In both cases, the client could > only be implemented by inspecting the service. If you understand the media type and its semantics, and have an endpoint, then you should have enough to leverage the service. A media type will convey semantics, unless you're using something generic. For example, you can get an Atom Feed document from a service that doesn't support AtomPub. But if you get an AtomPub Service document, then anything that document leads you to can be expected to understand the semantics of the AtomPub protocol. How do you know you can get an AtomPub Service document from a URL? Someone gave you the endpoint, or you went through some other discovery service that publishes and aggregates such things (using their own media types and semantics). Not every media type participates in a single protocol, so you can't necessarily assume anything just because you found an endpoint, and that endpoint responds with a certain payload. As for not being able to implement a client without a service, that's not necessarily true. You can implement a client using the specification. I can theoretically go out and write an AtomPub client right now using the RFC and AP spec. Can I TEST it? No. But, I can't test the server without some kind of client either. Once you have a spec-compliant AtomPub client, it should ideally be compatible with any AtomPub endpoint. And AtomPub and Atom are both extensible protocols, in that if any new elements are added, they are, by specification, to be skipped over, but retained by older clients. That allows the protocol to advance along a certain axis, yet remain compliant, to some extent, with older clients. Regards, Will Hartung (willh@...)
On Dec 17, 2009, at 1:24 AM, Will Hartung wrote: > There's this fantasy that a REST system offers the same > discoverability and adaptability of the Human web to machine clients > performing MtoM (machine to machine) transactions. That's the conflict > Jan is dealing with. Yep, exactly. Though I would not call it a fantasy :-) I just think that it is necessary to be more precise about the kind and amount of coupling that is present in an M2M RESTful system (mostly due to the fact that the client side code is a state machine of its own that needs to mimic the service's state machine in some way). REST still offers a heap of advantages over other styles in any case (e.g. simplicity, visibility, availability of a free, high quality software stack without vendor lock-in) but I am interested in the question of exactly how much of the property of independent evolvability applies in M2M interactions that are not as 'server friendly' as AtomPub. And besides - "follow your nose" is great but doesn't exactly sell to CxOs that well :-) Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 17, 2009, at 1:28 AM, Will Hartung wrote: > As for not being able to implement a client without a service, that's > not necessarily true. You can implement a client using the > specification. I can theoretically go out and write an AtomPub client > right now using the RFC and AP spec. Yes, right. I am trying to argue (maybe not that well, sorry) that the reason you can do this based on RFC 5023 is that the RFC includes a resource classification (collection, member, entry, media entry, ...) and also constrains the server regarding the media types it must use in certain cases. When I do a GET on a collection (a resource that *is a*! collection) I expect an Atom feed back (there might be other representations available, but I'll at least be able to get an Atom feed). This expectation enables me to code the client, and by coding that into the client, the server is coupled in a way that REST actually is aiming to avoid. In that sense it is a 'fantasy' that the server is free to evolve independently and it is also a 'fantasy' that the client does not differentiate between kinds of resources. In the human Web the same problem exists but the capabilities of the human brain to react to change (and follow previously unexpected links) do put the server in the position to evolve much more independently. I am sure that Amazon could mess around with the whole shop and the way ordering works and the user would still be able to buy a book. This is the huge benefit of REST - it just does not apply that easily to the M2M case. And I think this needs to be said clearly and honestly and it needs to be theoretically captured. Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Jan Algermissen wrote: > > Addressing case two in any scenario that goes beyond the document > formats > in use on the Web, there is no way around using more or less domain > specific formats. For example, you cannot convey to the client the number > of a leasing contract without markup for contracts (or even leasing > contracts). The domain model has to propagate into the media types - > that > is unavoidable. > I disagree. If I were designing a REST system to deal with leasing contracts, the first thing I would do is research whether or not there exists a standard for representing leasing contracts. Upon finding none, my options as a REST developer are to re-use an existing media type by extending it to handle the specifics of a lease-contract representation, or create a new media type. If I do create a new media type, I wouldn't call the result RESTful unless and until that media type is standardized. I would simply use XHTML as the media type. If the client number is important information, then I would identify it as such using RDFa or a microformat approach, i.e. assign @id='client_number' thereby giving clients the ability to glean a client number from any XHTML representation of a lease contract. Clients that aren't interested in the client number will gracefully degrade, displaying a human-readable document. Whereas, if I create a new media type, I'd be reinventing a whole bunch of different wheels -- headings, paragraphs, boldface, italics, links, link relations, HTTP methods and the whole #!. Clients that don't understand this new markup language will ignore the markup they don't recognize and display the representation as a big puddle of text. My rule of thumb remains: Don't create a new media type, when the technology exists to extend any number of existing media types to solve the problem. Creating media types is hard, hard work -- if done properly. 
Unfortunately, I mostly see new media types being created for the sole purpose of serving as a "token" to guide server behavior (mostly by redefining method semantics). Such media types aren't really media types at all -- they're another way to send instructions to a server, instead of following REST by sending a representation of an application state to the server. -Eric
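Eric's @id='client_number' idea is easy to demonstrate. Here's a small sketch, assuming a made-up XHTML lease document (the markup and the value "101" are invented for illustration):

```python
import xml.etree.ElementTree as ET

# An invented XHTML representation of a lease contract; the only
# agreed-upon hook between client and server is the @id value.
LEASE = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p>Lease contract for the white van, 4 weeks.</p>
    <p>Client: <span id="client_number">101</span></p>
  </body>
</html>"""

def find_by_id(doc, wanted_id):
    # A client interested only in the client number scans for the
    # agreed @id; everything else in the document is ignored, while a
    # browser still renders the whole page for humans.
    root = ET.fromstring(doc)
    for el in root.iter():
        if el.attrib.get("id") == wanted_id:
            return el.text
    return None
```

Note that the surrounding prose, headings, and styling can evolve freely; only the @id contract has to stay stable for the machine client to keep working.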
> No, a browser doesn't do anything. All a browser does is render > content. > > A human being discovers services by typing "buy hiking boots" into > the Google Search Service (which has a known endpoint), and then > following links. > > Browsers do nothing with services save rendering or downloading content. > Humans can use the browser to consume services though. > > There's this fantasy that a REST system offers the same > discoverability and adaptability of the Human web to machine clients > performing MtoM (machine to machine) transactions. That's the conflict > Jan is dealing with. Well, a browser is told what to go look up; the discoverability I was looking at is that it really knows nothing about what it's been asked to look up but can render/download as long as it's an acceptable media type. Anyway, I get what you are saying.
> In the human Web the same problem exists but the capabilities of > the > human brain to react to change (and follow previously unexpected > links) do put the server in the position to evolve much more > independently. I am sure that Amazon could mess around with the > whole > shop and the way ordering works and the user would still be able to > buy a book. This is the huge benefit of REST - it just does not > apply > that easily to the M2M case. > > And I think this needs to be said clearly and honestly and it needs > to > be theoretically captured. Yeah, as I was typing a previous response, it occurred to me that what you were getting at was the power of the human brain/mind. From that perspective, it's not even a fair comparison. :) However, in both cases there is a "contract"; in the human web case, it's (possibly significantly) looser. Would it be fair to say that certain kinds of server evolution cannot be handled seamlessly in M2M scenarios? But there are others that can be tolerated? Eb
Guilherme Silveira wrote: > > Hello Eric, > > Can you check if I understood correctly? By using well-known > media-types (as xhtml and atom): > > - (positive) intermediate layers are able to understand its > information and act accordingly, although it does not know the meaning > of a, i.e., class="contract" within a div > Correct, self-descriptive messaging refers to the HTTP headers involved, including a well-known method and media type. The entity itself is ignored. The media type is irrelevant to intermediaries, unless of course content negotiation is involved. The reason for using well-known media types is for visibility (I can't tell by looking what 'application/vnd.*' means, like I can with, say, 'text/html') and serendipitous re-use. If, to use your service, I have to code a client to some completely new media type versus just coding a client to handle extensions to a well-known media type, I'm much less likely to bother with your service. > > - (positive) classes can represent what in other custom formats would > be an xml element > Or attribute. Don't forget that the success of the Web is predicated upon displaying all sorts of documents using the limited set of elements and attributes provided by HTML. Instead of creating new elements, figure out which existing elements can serve the purpose, and decorate them with @id, @class, @role etc. to provide specific semantics not offered by the host language (HTML, SVG, Atom etc.). > > - (positive) we can use schematron to validate it on the server and > client side > You can create your own media type that's just as easily validated by RELAX NG + Schematron, or XSD. If you're extending XHTML, then first, your document should validate normally. Then, instead of creating an entirely new schema for an entirely new markup language, you just need to flesh out an existing schema to account for extension attributes and their allowable values. Much easier for others to serendipitously re-use. 
> > - are intermediate layers schematron-aware? (might be negative?) > The only validating intermediaries I know of are specific to wireless mobile networks. If your representations aren't well-formed XML, they'll be run through Tidy (or somesuch) to make the content well-formed before being sent on to XML-based clients. Mostly, though, intermediaries couldn't care less about payload content / media type. > > - (positive) no custom media types - intermediate layers are able to > understand everything which passes by > Not so much for intermediaries, but for existing clients and developers. > > Any negative points that you have seen so far by using > xhtml/atom+xml/subset+xml? > I can't even imagine any. The only time I've seen a negative impact result from choice of media type, is when using media types that aren't well-known. -Eric
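Eric's "decorate existing elements with @class" suggestion has one subtlety worth sketching: @class is a space-separated token list, so a client must match tokens, not the raw attribute string. A small illustration (the document and class names are invented):

```python
import xml.etree.ElementTree as ET

# Invented XHTML using @class tokens to carry domain semantics, per
# the microformat-style approach discussed in this thread.
DOC = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <div class="contract leasing">
      <span class="client-number">101</span>
    </div>
  </body>
</html>"""

def find_by_class(doc, token):
    # Match individual class tokens: class="contract leasing" must
    # match "contract", but "contract" must not match "contracts".
    root = ET.fromstring(doc)
    return [el for el in root.iter()
            if token in el.attrib.get("class", "").split()]
```

A browser ignores the unknown class names and just renders the document, while a domain-aware client keys off them, which is the graceful degradation Eric describes.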
On Wed, Dec 16, 2009 at 4:54 PM, Jan Algermissen <algermissen1971@...> wrote: > When I do a GET on a collection (a resource that *is a*! collection) I > expect an Atom feed back (there might be other representations available, > but I'll at least be able to get an Atom feed). This expectation enables me > to code the client and by coding that into the client, the server is coupled > in a way that REST actually is aiming to avoid. > > In that sense it is a 'fantasy' that the server is free to evolve > independently and it is also a 'fantasy' that the client does not > differentiate between kind of resources. No, the server CAN evolve. It can say "atom feeds are the suck, don't use them, now we have Neutrino feeds!". If it's a kind server it will retain Atom compatibility if the client requests it. If you have atom+xml in your Accept, you get Atom. If you have */* or something else, you might get neutrino+xml instead. If it's a mean, nasty server, then it shuts you out cold with a 406 and a list of "support these or there's the door" media types. Part of the process is empowering things like con neg so that the servers and clients can agree on content. Yes, the client cannot "evolve" to support the new content until it's been coded. But that doesn't mean that servers cannot be good citizens and be backward compatible, even if deprecated. It puts a burden on server developers, but that's just the truth of it. At least con neg is an OPTION that CAN be supported. And why can't the client discern resources? If the client sees the Atom feed, it goes one way. If it sees the Neutrino feed, another way. Properly developed, the client can jump back and forth across both types. Heck, say you had load balanced servers, and one supported atom and the other neutrino -- you hadn't updated the second one yet. 
The client can transparently jump back and forth between the formats as it bounces across the servers, because the client IS leveraging the media types, and because the server is providing the links to move forward, rather than the client trying to shove Atom links down the throat of a Neutrino server. So, in that sense, I think evolution can be handled pretty elegantly. > In the human Web the same problem exists but the capabilities of the human > brain to react to change (and follow previously unexpected links) does put > the server in the position to evolve much more independently. I am sure that > Amazon could mess around with the whole shop and the way ordering works and > the user would still be able to buy a book. This is the huge benefit of REST > - it just does not apply that easily to the M2M case. But think about that. That "mess around" would, from the user's pov, be cosmetic. They rearranged the screen, the "add to cart" button is on the left now, and the "checkout" button below it, or whatever. The links those buttons go to are immaterial. Nobody cares. Now the content sent to those links, those matter. If Amazon renamed "itemNo" to "productUUID" then, you know, shame on them. Your M2M client is toast. But the semantics conveyed by the link rels "add-to-cart", "checkout", those haven't changed (unless they renamed those as well -- more silliness). They could add "add-to-wish-list", and your client may not know what that is, but it probably doesn't care either. > And I think this needs to be said clearly and honestly and it needs to be > theoretically captured. In an M2M scenario, ALL APIs are "tightly coupled". That's just the fact of it. APIs are contracts. Change the contract, bad things happen. Design APIs with growth and flexibility in mind, and you can have a more forgiving client/server experience. By using media types and HATEOAS, the clients retain a bit of discoverability. It's not so much discoverability, as it is state awareness. 
It can "know" where it is at any point of the process, and it "knows" where to go from there. If it follows the links given with the types specified, the client will be told where to go next. This is key. The client isn't "waiting to do the next thing". It's not got a "list of things to do", and going through them one by one. Rather it has a list of guideposts that it's told to follow, and the actual PATH it takes isn't known to the client until it reaches its goal. Now you could code all of that into the client, so it "knows" where to go and builds URLs, and when things change, the client breaks. Because the client is a stupid client and while it functioned, it did it all the wrong way. So, that's, to me, where some of the robustness of the whole thing comes from, even in a M2M world. Regards, Will Hartung (willh@...)
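Will's "guideposts" idea reduces to a small dispatch loop: the client declares which rels it understands and follows whatever the server hands back. A sketch under the assumptions of his Amazon example (the rel names "add-to-cart", "checkout", and "add-to-wish-list" come from the post; the data shapes are invented):

```python
# Hypothetical link-driven client step: pick the first link whose rel
# the client understands, ignoring unknown rels such as a newly added
# "add-to-wish-list".

def next_action(links, handlers):
    # 'links' is a list of (rel, href) pairs extracted from the
    # current representation; 'handlers' maps known rels to actions.
    for rel, href in links:
        if rel in handlers:
            return handlers[rel], href
    return None, None

handlers = {
    "add-to-cart": "POST the selected item",
    "checkout": "POST the order",
}

# The server controls the links, so it can reorder pages, add rels,
# or change hrefs without breaking this client.
links = [("add-to-wish-list", "/wish"), ("add-to-cart", "/cart")]
action, href = next_action(links, handlers)
```

Because the client never constructs URLs itself, the server stays free to move things around; only renaming a rel (the contract) breaks it, exactly as Will says.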
On Wed, Dec 16, 2009 at 4:55 PM, Eric J. Bowman <eric@...> wrote: > I disagree. If I were designing a REST system to deal with leasing > contracts, the first thing I would do is research whether or not there > exists a standard for representing leasing contracts. Upon finding > none, my options as a REST developer are to re-use an existing media > type by extending it to handle the specifics of a lease-contract > representation, or create a new media type. > > If I do create a new media type, I wouldn't call the result RESTful > unless and until that media type is standardized. I would simply use > XHTML as the media type. If the client number is important > information, then I would identify it as such using RDFa or a > microformat approach, i.e. assign @id='client_number' thereby giving > clients the ability to glean a client number from any XHTML > representation of a lease contract. If everything is XHTML, how do you know what kind of representation the service wants? Sure, XHTML. So "<html><body>This is my Lease for the White Van. It's 4 weeks.</body></html>", is that a valid Lease for your system? > Clients that aren't interested in > the client number will gracefully degrade, displaying a human-readable > document. curl or Firefox may degrade gracefully; that python script someone wrote will go "Uh, what's this" and likely abort. No degradation there at all. > Whereas, if I create a new media type, I'd be reinventing a whole bunch > of different wheels -- headings, paragraphs, boldface, italics, links, > link relations, HTTP methods and the whole #!. Clients that don't > understand this new markup language will ignore the markup they don't > recognize and display the representation as a big puddle of text. My point is that XHTML simply isn't specific enough and doesn't offer enough clarity to a client as to what it is seeing and what to expect. You could make a "smart" client that goes crawling through payload looking for its markers and microformats. 
That's fine, but when you say your client accepts "XHTML", it's a bit misleading, because it wants XHTML that's properly formatted, with properly embedded extension vocabularies or microformats. Otherwise, it's just gibberish to the client. You're still defining your own formats, payloads, and semantics, but now it just has a catch-all media type of XHTML, oh, and it renders in a browser. Just because it's in XHTML doesn't make it any more interoperable. I can't write a shopping client that "just works" with Amazon and Best Buy, and they both use HTML. > My rule of thumb remains: Don't create a new media type, when the > technology exists to extend any number of existing media types to solve > the problem. Creating media types is hard, hard work -- if done > properly. If you're extending a data type and not changing the semantics, then that's a fine idea. If you are changing the semantics, then telling folks it's application/xyz+xml when that's a half truth, or perhaps even wrong (depending on the kind of extension), doesn't really help anyone, does it? Publishing atom feeds where the bulk of your information is in your own namespace, is that really helpful? Is that really using "atom" then? Maybe if you're leveraging some other atom tool suite to publish the atom and your extensions, then ok. But then you're using atom as a wrapper to the real meat, which is your actual data -- which isn't atom at all. Regards, Will Hartung (willh@...)
<snip> > If you're extending a data type and not changing the semantics, then > that's a fine idea. If you are changing the semantics, then telling > folks it's a application/xyz+xml when that's half truth, or perhaps > even wrong (depending on the kind extension) doesn't really help > anyone, does it? </snip> There are a few efforts to improve the data semantics for common types (RDFa comes to mind). I think it's time to do some serious work to improve the operational semantics, too. I think a lot can be done to add semantic value to existing media types just by adding LINKS w/ rel values. No matter the media-type (XHTML, XML, Atom, etc.) a custom client that has access to clearly documented LINK+rel values can understand and process quite a bit. Taking an approach that uses the XHTML LINK element + rel values may be a viable way to increase the semantic value of these common types w/o destroying the original meaning and value for Web browsers and other existing clients. mca http://amundsen.com/blog/ On Wed, Dec 16, 2009 at 21:52, Will Hartung <willh@...> wrote: > On Wed, Dec 16, 2009 at 4:55 PM, Eric J. Bowman <eric@...> wrote: >> I disagree. If I were designing a REST system to deal with leasing >> contracts, the first thing I would do is research whether or not there >> exists a standard for representing leasing contracts. Upon finding >> none, my options as a REST developer are to re-use an existing media >> type by extending it to handle the specifics of a lease-contract >> representation, or create a new media type. >> >> If I do create a new media type, I wouldn't call the result RESTful >> unless and until that media type is standardized. I would simply use >> XHTML as the media type. If the client number is important >> information, then I would identify it as such using RDFa or a >> microformat approach, i.e. assign @id='client_number' thereby giving >> clients the ability to glean a client number from any XHTML >> representation of a lease contract. 
> > If everything is XHTML, how do you know what kind of representation > that the service wants? Sure, XHTML. > > So "<html><body>This is my Lease for the White Van. It's 4 > weeks.</body></html>", is that a valid Lease for your system? > >> Clients that aren't interested in >> the client number will gracefully degrade, displaying a human-readable >> document. > > curl or Firefox may degrade gracefully, that python script someone > wrote will go "Uh, what's this" and likely abort. No degradation there > at all. > >> Whereas, if I create a new media type, I'd be reinventing a whole bunch >> of different wheels -- headings, paragraphs, boldface, italics, links, >> link relations, HTTP methods and the whole #!. Clients that don't >> understand this new markup language will ignore the markup they don't >> recognize and display the representation as a big puddle of text. > > My point is that XHTML simply isn't specific enough and doesn't offer > enough clarity to a client as to what it is seeing and what to expect. > You could make a "smart" client that goes crawling through payload > looking for its markers and microformats. That's fine, but when you > say your client accepts "XHTML", it's a bit misleading, because it > wants XHTML that's properly formatted, with proper embed extension > vocabularies or microformats. Otherwise, it's just gibberish to the > client. > > You're still defining your own formats, payloads, and semantics, but > now it just has a catch all media type of XHTML, oh, and it renders in > a browser. Just because it's in XHTML doesn't make it any more > interoperable. I can't write a shopping client that "just works" with > Amazon and Best Buy, and they both use HTML. > >> My rule of thumb remains: Don't create a new media type, when the >> technology exists to extend any number of existing media types to solve >> the problem. Creating media types is hard, hard work -- if done >> properly. 
> > If you're extending a data type and not changing the semantics, then > that's a fine idea. If you are changing the semantics, then telling > folks it's a application/xyz+xml when that's half truth, or perhaps > even wrong (depending on the kind extension) doesn't really help > anyone, does it? Publishing atom feeds where the bulk of your > information is in your own namespace, is that really helpful? Is that > really using "atom" then? Maybe if you're leveraging some other atom > tool suite to publish the atom and your extensions, then ok. But then > you're using atom as a wrapper to the real meat, which is your actual > data -- which isn't atom at all. > > Regards, > > Will Hartung > (willh@...)
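Mike's XHTML LINK + rel suggestion is also easy to sketch: the rel values travel in standard <link> elements, so browsers and existing clients keep working while a custom client keys off the documented rels. The document and the "payment" rel below are invented for illustration:

```python
import xml.etree.ElementTree as ET

XHTML = "{http://www.w3.org/1999/xhtml}"

# Invented XHTML page carrying an application-specific rel ("payment")
# alongside an ordinary one ("stylesheet"); both ride in standard
# <link> elements, per the approach discussed in this thread.
PAGE = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <link rel="payment" href="http://example.org/pay/42"/>
    <link rel="stylesheet" href="/site.css"/>
  </head>
  <body/>
</html>"""

def links_by_rel(doc, rel):
    # @rel is a space-separated token list, so match tokens rather
    # than comparing the raw attribute value.
    root = ET.fromstring(doc)
    return [l.attrib["href"] for l in root.iter(XHTML + "link")
            if rel in l.attrib.get("rel", "").split()]
```

A browser simply ignores the unknown "payment" rel, which is exactly the "without destroying the original meaning and value" property Mike is after.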
On Wed, Dec 16, 2009 at 7:17 PM, mike amundsen <mamund@...> wrote: > There are a few efforts to improve the data semantics for common types > (RDFa comes to mind). I think it's time to do some serious work to > improve the operational semantics, too. > > I think a lot can be done to add semantic value to existing media > types just by adding LINKS w/ rel values. No matter the media-type > (XHTML, XML, Atom, etc.) a custom client that has access to clearly > documented LINK+rel values can understand and process quite a bit. > > Taking an approach that uses the XHTML LINK element + rel values may > be a viable way to increase the semantic value of these common types > w/o destroying the original meaning and value for Web browsers and > other existing clients. But then aren't we going backwards here? First, let me be clear, my primary interest and slant on this is in the MtoM space of code talking to code, and the only time anyone reads this stuff is when they're debugging it. But basically, now the media type doesn't mean anything at all. Embedded RDF, HTML Link Rel tags, whatever. I can't tell anything whatsoever from the media type now. Media type may as well be application/octet-stream. Because in order to get any value out of it, I now need to introspect the payloads to see what's what, and whether it's useful to me at all. I can try con neg, but it may well simply give three different kinds of XHTML. Not helpful at all. If we're going to get rid of the mime type, then I'm sure there's all sorts of wonderful things that can be crammed in the payload that affect semantics and behavior, but I think that's been done already. Now, as part of, perhaps, the Semantic Web, which is effectively trying to automate and make discoverable the human-consumed Web. Fine. There you're pretty much stuck with piggy-backing on top of HTML to get anything with any traction (since sending XSLT links with XML probably won't really take off...). Regards, Will Hartung (willh@...)
<snip> I can't tell anything whatsoever from the media type now. </snip> I hear ya. My point in this line of reasoning is not to downplay the importance of the registered media-type name but, rather, to highlight the notion that operational semantics can be embedded in a response in several ways. I agree that there is a point where the media-type string can no longer be helpful in predicting an application's understanding of the associated message body. I deal with this disconnect every day in my own programming against any number of existing web services "APIs." At the same time, I have seen enough examples of custom LINK information in existing media-types (XHTML and Atom have this support in their design) to think there are additional opportunities to improve the semantic value of existing media-types without significantly degrading the value of the media-type control data. mca http://amundsen.com/blog/ On Wed, Dec 16, 2009 at 23:51, Will Hartung <willh@...> wrote: > On Wed, Dec 16, 2009 at 7:17 PM, mike amundsen <mamund@...> wrote: >> There are a few efforts to improve the data semantics for common types >> (RDFa comes to mind). I think it's time to do some serious work to >> improve the operational semantics, too. >> >> I think a lot can be done to add semantic value to existing media >> types just by adding LINKS w/ rel values. No matter the media-type >> (XHTML, XML, Atom, etc.) a custom client that has access to clearly >> documented LINK+rel values can understand and process quite a bit. >> >> Taking an approach that uses the XHTML LINK element + rel values may >> be a viable way to increase the semantic value of these common types >> w/o destroying the original meaning and value for Web browsers and >> other existing clients. > > But then aren't we going backwards here? > > First, let me be clear, my primary interest and slant on this in the > MtoM space of code talking to code, and the only time any reads this > stuff is if they're debugging it. 
> > But basically, now the media type doesn't mean anything at all. > Embedded RDF, HTML Link Rel tags, whatever. I can't tell anything > whatsoever from the media type now. Media type may as well be > application/octet-stream. Because in order to get any value out of it, > I now need to introspect the payloads to see what's what, and whether > it's useful to me at all. > > I can try con neg, but it may well simply give three different kinds > of XHTML. Not helpful at all. > > If we're going to get rid of the mime type, then I'm sure there's all > sorts of wonderful things that can be crammed in the payload that > affect semantics and behavior, but I think that's been done already. > > Now, as part of, perhaps, the Semantic Web, which is effectively > trying to automate and make discoverable the Human consumed web. Fine. > There you're pretty much stuck with piggy backing on top of HTML to > get any thing with any traction (since sending XSLT links with XML > probably won't really take off...). > > Regards, > > Will Hartung > (willh@...) >
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > I can't help it: I see no possible way to implement a non-human-driven > client for a service without (in one way or another) classifying the > resources the service provides. > > For example, consider a helpdesk ticket system: When writing a client > that searches for tickets and then updates the foo:status of the > individual tickets contained in the result set, I need to make the > assumption that the result set contains tickets (and not just > resources). In order to be able to make such assumptions, the > classification information must be made available by the service. In > addition, when client developers should be enabled to develop clients > before the services exist, this information is needed as some form of > service type description. The specification of application/atomsvc+xml > is a good example of such a service type description. > > But however this is approached, it essentially comes down to telling > the client what kinds of resources (IOW: kinds of application states) > to expect on the server. I just cannot code to update the resource > foo:status when I have no clue that this user goal is applicable to > the resource in the first place. > > Does anyone have an idea how to align this (IMHO fact) with the > constraint that no information about resource types must be made > available to clients in RESTful systems? > You are making the mistake of starting with the service. You need to start with the client... tell me more about this client. What event causes a search for tickets to occur? Where does the data that goes into the search parameters come from? Where does the new value for the status come from? What happens after the status is updated? The hypermedia format drives the client. How can you define your hypermedia format without first understanding and defining your client? Regards, Andrew
Will Hartung wrote: > > On Wed, Dec 16, 2009 at 4:55 PM, Eric J. Bowman wrote: > > I disagree. If I were designing a REST system to deal with leasing > > contracts, the first thing I would do is research whether or not > > there exists a standard for representing leasing contracts. Upon > > finding none, my options as a REST developer are to re-use an > > existing media type by extending it to handle the specifics of a > > lease-contract representation, or create a new media type. > > > > If I do create a new media type, I wouldn't call the result RESTful > > unless and until that media type is standardized. I would simply use > > XHTML as the media type. If the client number is important > > information, then I would identify it as such using RDFa or a > > microformat approach, i.e. assign @id='client_number' thereby giving > > clients the ability to glean a client number from any XHTML > > representation of a lease contract. > > If everything is XHTML, how do you know what kind of representation > that the service wants? Sure, XHTML. > That's an oversimplification of my argument. There exist plenty of standard media types which aren't XHTML, like SVG. The requirement is for hypertext of a well-known media type. I also allow for the possibility of new media types becoming well-known as they become standardized. No well-known media type exists for a lease contract, so I choose the well-known media type best suited to the task -- in this case, that isn't SVG or anything else. If my intention is to display a hypertext document on a client of any sort, then I'll stick to the standards and libraries defined within application/xhtml+xml (even if I ultimately serve XHTML 1.0 as text/html). A lease contract is intended to be a human-readable document, which may be browsed online or printed. At the same time, it is expected to be machine-readable to reliably extract certain information, regardless of how the document itself evolves over time. 
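Gleaning a client number from any XHTML representation of a lease contract via a stable @id, as suggested above, might look like the following. The id value 'client_number' and the sample markup are illustrative assumptions, not a published format.

```python
# Sketch of "assign @id='client_number' so clients can glean a client
# number from any XHTML representation of a lease contract". The markup
# can evolve freely as long as the @id convention holds.
from xml.etree import ElementTree as ET

def field_by_id(xhtml, wanted_id):
    """Return the text of the first element carrying the given @id, or None."""
    root = ET.fromstring(xhtml)
    for el in root.iter():
        if el.get("id") == wanted_id:
            return (el.text or "").strip()
    return None

lease = """<html xmlns="http://www.w3.org/1999/xhtml"><body>
  <h1>Lease Contract</h1>
  <p>Prepared for client <span id="client_number">C-1027</span>.</p>
</body></html>"""

print(field_by_id(lease, "client_number"))  # C-1027
```

A browser ignores the @id and simply renders the document; a machine client extracts the field. That is the graceful degradation being argued for.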
> > So "<html><body>This is my Lease for the White Van. It's 4 > weeks.</body></html>", is that a valid Lease for your system? > Of course not, as that is not valid XHTML. ;-) The proper question is for you: What is it about a lease contract that can't be displayed in a browser using XHTML + CSS + JS, or printed using an alternate stylesheet? In simpler terms, why _not_ use XHTML as a starting point? A lease contract will have various levels of headings followed by paragraphs. There may be tables displaying various rates. Why is it better to re-invent the wheel of marking up a table that can be understood as such by a machine that groks either text/html or application/xhtml+xml? I always maintain that it doesn't matter whether the REST application is driven by a human or a program, the media type for most tasks is XHTML, optionally wrapped in Atom. Which is why I like RDFa as opposed to dealing with straight RDF, as an extension to other host languages (not just XHTML). > > > Clients that aren't interested in > > the client number will gracefully degrade, displaying a > > human-readable document. > > curl or Firefox may degrade gracefully, that python script someone > wrote will go "Uh, what's this" and likely abort. No degradation there > at all. > Then the system wasn't designed properly, i.e. by applying REST constraints. My design goal is a self-documenting API (the hypertext constraint) utilizing a uniform interface (the self-descriptive messaging constraint), for either a human or a machine to interact with a lease contract. Any client coded to the API I describe below is immune from the documents' evolution over time. > > You're still defining your own formats, payloads, and semantics, but > now it just has a catch all media type of XHTML, oh, and it renders in > a browser. Just because it's in XHTML doesn't make it any more > interoperable. 
I can't write a shopping client that "just works" with > Amazon and Best Buy, and they both use HTML. > It renders in a browser because it's a lease contract we want humans to read. The reason documents render in browsers is because, mostly, they can be expressed using the semantics of XHTML as a widely-known base. An inline table is an inline table, which can be written accessibly, such that the document works not only for the sighted who can read a browser display or a printed page, but others as well (because accessibility in HTML equates to machine readability in a very standardized fashion). There's no standard for shopping-cart implementations. Don't blame that on me, please... However, it is a simple matter to define a self- documenting API for the hypothetical lease contract... let's not change horses mid-stream... > > > Whereas, if I create a new media type, I'd be reinventing a whole > > bunch of different wheels -- headings, paragraphs, boldface, > > italics, links, link relations, HTTP methods and the whole #!. > > Clients that don't understand this new markup language will ignore > > the markup they don't recognize and display the representation as a > > big puddle of text. > > My point is that XHTML simply isn't specific enough and doesn't offer > enough clarity to a client as to what it is seeing and what to expect. > That's why it's extensible. But that is of no concern at the protocol level. The protocol level is where the uniform interface resides, the requirement (constraint) there is self-descriptive messaging. If I'm using application/xhtml+xml, then any client (or intermediary that cares) knows that a <table> is a table and can parse it as such. If I'm using text/html, then any client (or intermediary that cares) knows that a <table> is a table. Clients developed to my self-documenting API merely extend or implement known libraries. 
If I want machine-readable tabular data, the state-of-the-art there stands as HTML 4.01, which also happens to be the state-of-the-art markup language for human-readable tabular data. Why create a new media type if it in any way needs to incorporate tabular data? Same with lists. By constraining a <dl> to have only one <dd> per <dt> using a schema, you have the semantics of a list defining a series of name-value pairs. Or I suppose I could use JSON for name-value pairs. If I need a machine-readable chart instead of a table, then SVG. > > You could make a "smart" client that goes crawling through payload > looking for its markers and microformats. That's fine, but when you > say your client accepts "XHTML", it's a bit misleading, because it > wants XHTML that's properly formatted, with proper embedded extension > vocabularies or microformats. Otherwise, it's just gibberish to the > client. > Actually, parsing microformats is incredibly difficult, as a specific parser needs to be written for each microformat a client supports. RDFa solves this problem very cleanly. When I curl my lease contract, I see a Content-Type of application/xhtml+xml, which tells me the XML toolchain is in play. In the document <head>, there is a <link rel='transformation'/> pointing to an application/xslt+xml resource. That's as far as I care to go into the markup. I now curl the .xsl file and wash the last representation through it using an XSLT 2.1 transformer, XSLT 2 being defined by the media type, and 2.1 being introspected from the .xsl hypertext representation of a GRDDL transformation, which extracts the client number (and other data) from the lease contract's RDFa markup, specifying XHTML or JSON or SVG output as required. Documentation-wise, I describe my API not in terms of URIs, but in terms of media types and link relations. 
"The lease contract's XHTML includes a link with the relation of 'transformation' to the XSLT hypertext you can use to generate a JSON list of name-value pairs exposing the RDFa metadata of the calling document." Nothing different from what I've inferred via curl + elbow-grease, just formalized. The location (URL) of the lease contract may be changed without breaking the API. Contracts for an interface are written using the same self-documenting hypertext that drives the application -- media types themselves are _not_ contracts. URIs are opaque -- they don't need to be specified as anything other than "whatever an implementation of this API says they are". This is all I need to program a custom client using standard libraries that is capable of sorting a collection of links to lease contracts based on attributes of those lease contracts, such as age, or time-to-expire, in ascending or descending order, using name-value pairs provided in the JSON or XHTML output of a GRDDL transformation. This client I've coded can evolve independently of the server. The server can evolve at any time to include new metadata, or upgrade to HTML 5, or use WAI-ARIA to add a digital-signature-capture form to the lease-contract document, as other clients are also evolving independently. If I don't update my client, there's no reason why the API described needs to be changed -- there's always a path to follow from the document to a metadata view consisting of a <dl> with one <dt>client_number</dt> with a <dd> value that matches a pattern and type described in a schema (an argument against using JSON for a list of name-value pairs as GRDDL output, in favor of XHTML). > > > My rule of thumb remains: Don't create a new media type, when the > > technology exists to extend any number of existing media types to > > solve the problem. Creating media types is hard, hard work -- if > > done properly. 
> There's no step along the way in this lease-contract scenario where the standard media types used aren't fine-grained enough to specify to a client just exactly what to expect. The nature of extensible media types is that they represent the opposite of a contract -- a client can't tell from the media type exactly which capabilities are needed, because any representation may also be a container for other resources, like images in a document, or charts in SVG. Nor can intermediaries grasp the full nature of a representation by its media type, no matter how fine-grained and application-specific that media type may be. > > My point is that XHTML simply isn't specific enough and doesn't offer > enough clarity to a client as to what it is seeing and what to expect. > A client that doesn't understand RDFa, hasn't been coded to introspect for GRDDL transformations, isn't compatible with the forms markup used and only understands GET, can still render and style the document cleanly, while ignoring the attributes and elements it doesn't understand. The generality provided by the underlying document semantics adhering to XHTML allows a variety of levels of client understanding. A client that knows RDFa doesn't really need to run a GRDDL transformation. None of this is of interest to intermediaries, simply the fact that this is a defined subset of XML known as XHTML will do. The API is self-documented within the realm of known media types, with clarity and specificity, while allowing graceful degradation, without sacrificing human readability over the Web. > > If you're extending a data type and not changing the semantics, then > that's a fine idea. If you are changing the semantics, then telling > folks it's an application/xyz+xml when that's half truth, or perhaps > even wrong (depending on the kind of extension) doesn't really help > anyone, does it? Publishing atom feeds where the bulk of your > information is in your own namespace, is that really helpful? 
Is that > really using "atom" then? Maybe if you're leveraging some other atom > tool suite to publish the atom and your extensions, then ok. But then > you're using atom as a wrapper to the real meat, which is your actual > data -- which isn't atom at all. > Your data doesn't have to be Atom at all, or XHTML or HTML, for Atom to do exactly what it is supposed to do, which is provide a fine wrapper for publishing any data online, thanks to its extensibility. Same with XHTML. There exist many standardized means to extend XHTML, none of which require a media type other than the extensible application/xhtml+xml. When I see that, I know that whatever else I don't understand, I know that a <table> is a table and I understand the semantics of a <dl>. I'm not changing any semantics, I'm extending them. Sometimes, as in the case of my GRDDL output above, the raw semantics of the media type will do nicely with the application of a schema or two. Bear in mind, this is in the same spirit as not defining a new HTTP method for each of your underlying system's methods. Apply a uniform interface by adhering to the well-defined semantics of known methods. Use well-known media types that allow for extension, with or without sub-typing, preferably without. That's what makes it easy to decipher an API using nothing but curl, plus standard toolchains, over the wire. The specifics of your application belong in the hypertext of a known type, not your brand-new media type. -Eric
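The constrained <dl> described earlier -- one <dd> per <dt>, giving the semantics of a name-value list -- can be read mechanically like this. The sample metadata document is an illustrative assumption.

```python
# Reading a <dl> constrained to one <dd> per <dt> as machine-readable
# name-value pairs, per the schema convention described in the thread.
from xml.etree import ElementTree as ET

NS = "{http://www.w3.org/1999/xhtml}"

def dl_to_pairs(xhtml):
    """Return an ordered list of (dt, dd) pairs from the first <dl>."""
    root = ET.fromstring(xhtml)
    dl = next(root.iter(NS + "dl"))
    pairs, name = [], None
    for child in dl:
        if child.tag == NS + "dt":
            name = (child.text or "").strip()
        elif child.tag == NS + "dd" and name is not None:
            pairs.append((name, (child.text or "").strip()))
            name = None  # enforce the one-<dd>-per-<dt> reading
    return pairs

doc = """<html xmlns="http://www.w3.org/1999/xhtml"><body><dl>
  <dt>client_number</dt><dd>C-1027</dd>
  <dt>term_weeks</dt><dd>4</dd>
</dl></body></html>"""

print(dict(dl_to_pairs(doc)))  # {'client_number': 'C-1027', 'term_weeks': '4'}
```

No new media type is needed: any client that groks XHTML already knows what a <dl> is, and the schema constraint is what turns it into a reliable name-value list.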
Jan Algermissen wrote: > On Dec 16, 2009, at 9:45 PM, mike amundsen wrote: > > >> <snip> >> ...how did you know that it makes sense to write code that looks for >> the rel="next" in the first place? >> </snip> >> >> The same way developers building Web browser clients know to write >> code that looks for the rel="stylesheet" in the <link> element [1], >> [2], [3]. >> >> It sounds like this line of questioning is about how to go about >> properly documenting media type semantics in a way that is helpful at >> design time for those building clients. >> > > Yep. And AtomPub does a good job and serves as a lucid example. My > point is that such media type semantics involve classification of > resources > No, those semantics are classifying link relations, not resources - which is completely different because the resource's significance, within an application, is derived from the context in which its state was retrieved/transferred i.e. the 'application flow' leading up to it. - Mike
Craig McClanahan wrote: > > Many RESTafarians frown at doing "partial updates" (i.e. only update > the fields that are actually included in the request body) with a PUT > -- I tend towards the pragmatic view and used this in several APIs -- > but when you're doing a POST I don't see a reason why it should not > make sense. Letting the client change whatever combination of fields > they need to in *one* request (and therefore probably a single > database transaction) would seem reasonable to me. > Ack! Failing to make your messages self-descriptive isn't pragmatic. If I have a distributed hypermedia system, and I want it to gain the benefits of REST, then falling short of REST in the implementation is anti-pragmatic because what I'm left with is some other architectural style that isn't guaranteed to exhibit the desirable properties I was after when I chose REST to meet them. Consensus on this list for years has been that PUT is not used for partial updates. While the server isn't required to honor everything in a PUT, for example a server might not change an atom:id even if it's updated in a PUT, this is not some loophole that allows PUT to be used for partial updates. In a REST system, the only thing that matters regarding methods is that they are used according to their definitions. PUT has update-by-replacement (or creation) semantics, PATCH has partial-update semantics and has been in HTTP 1.1 from the beginning (look at the obsolete RFCs, then the comments by RFC 2616's authors about how the lack of inclusion of PATCH was due to time constraints and lack of implementation, but was not meant to suggest that PATCH had been removed from HTTP), and is now reinforced by its own RFC. So, to overlook the method with the required semantics of partial-update (PATCH) and assign those semantics to PUT, which has different semantics entirely, is to use PUT other than what it was intended for. 
This means that out-of-band information is driving your PUT transaction, and the semantics of the interaction are not visible because the messaging is not self-descriptive. Since the semantics of POST are generic, assigning it to cover for PATCH is acceptable. But not PUT -- doing that is failing to apply the uniform interface constraint. -Eric
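The distinction being argued -- PUT replaces the whole state, PATCH applies a partial update -- can be sketched in a few lines. Resource state here is a plain dict; this illustrates the semantics only, not a real HTTP server.

```python
# PUT vs PATCH semantics, per the argument above.

def apply_put(resource, body):
    """PUT: update-by-replacement -- the request body *is* the new state."""
    return dict(body)

def apply_patch(resource, body):
    """PATCH: partial update -- merge the changed fields into existing state."""
    updated = dict(resource)
    updated.update(body)
    return updated

state = {"status": "open", "owner": "alice", "priority": 2}

print(apply_patch(state, {"status": "closed"}))
# {'status': 'closed', 'owner': 'alice', 'priority': 2}
print(apply_put(state, {"status": "closed"}))
# {'status': 'closed'}  -- owner and priority are gone, by definition of PUT
```

A client sending `{"status": "closed"}` with PUT and expecting the other fields to survive is relying on out-of-band agreement, which is exactly the self-descriptiveness failure described above.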
On Dec 17, 2009, at 11:41 AM, Mike Kelly wrote: > Jan Algermissen wrote: >> On Dec 16, 2009, at 9:45 PM, mike amundsen wrote: >> >> >>> <snip> >>> ...how did you know that it makes sense to write code that looks for >>> the rel="next" in the first place? >>> </snip> >>> >>> The same way developers building Web browser clients know to write >>> code that looks for the rel="stylesheet" in the <link> element [1], >>> [2], [3]. >>> >>> It sounds like this line of questioning is about how to go about >>> properly documenting media type semantics in a way that is helpful >>> at >>> design time for those building clients. >>> >> >> Yep. And AtomPub does a good job and serves as a lucid example. My >> point is that such media type semantics involve classification of >> resources >> > > No, those semantics are classifying link relations not resources - > which > is completely different No, not really. Saying that any resource that is the target of a 'foo' link has certain properties is essentially expressing a type. For example, AtomPub specifies that resources that are listed in a collection are member resources. Being listed in the feed document is the hypermedia semantic and 'member' is the type. AtomPub then defines a number of things that clients can expect to do with a member (essentially AtomPub defines the state transitions that are available after GETing a member resource). If AtomPub did not establish the member type, it could not describe the expectations the client can make. (And coding the client would be impossible). > because the resource's significance, within an > application, is derived from the context in which its state was > retrieved/transferred i.e. the 'application flow' leading up to it. > Yes, but that context is essentially a type. See http://algermissen.blogspot.com/2009/09/hypermedia-context.html Jan > - Mike > > > ------------------------------------ > > Yahoo! 
Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Will, On Dec 17, 2009, at 3:12 AM, Will Hartung wrote: >> > > No, the server CAN evolve. It can say "atom feeds are the suck, don't > use them, now we have Neutrino feeds!". > > If it's a kind server it will retain Atom compatibility if the client > requests it. If you have atom+xml in your Accept, you get Atom. If you > have */* or something else, you might get neutrino+xml instead. > > If it's a mean, nasty server, then it shuts you out cold with a 406 > and a list of "support these or there's the door" media types. > But what enables you to say the former is kind and the latter is mean? You say this based on AtomPub saying that collections are represented as Atom feed documents. Otherwise, there would simply be no expectation of receiving Atom feeds as a response to a GET to the collection. How could you express that expectation in a spec without first saying that there are resources that are collections? The spec itself needs a resource classification to build upon. So, AtomPub says that some resources are collections, how a client determines what resources are collections and that clients can retrieve collections as Atom feeds. Machine clients use this classification information to code something like this: - retrieve service doc - pick a collection (e.g. based on category information) - GET collection and expect at least application/atom+xml feed doc (this line of code manifests the client assumption described above) With human driven clients the situation is different: you write the user agent to process the service doc and display the list of collections. Then, when the user clicks on one the user agent does a GET and the returned representation is dispatched to the media type handler available for *whatever* is returned. There need not be any assumption about the response being a feed because the user agent simply hands it to the next level which is the human user, deciding only then the next action to take. 
M2M clients need to decide what action to take at implementation time. Even if there are a number of expectations and the matching one is picked at run time, you still need to make the decision which expectations to support at implementation time. Jan > Part of the process is empowering things like Con neg so that the > servers and clients can agree on content. Yea, the client can not > "evolve" to supporting the new content until it's been coded. But that > doesn't mean that servers can not be good citizens and be backward, > even if deprecated, compatible. > > It puts a burden on server developers, but that's just the truth of > it. At least con neg is an OPTION that CAN be supported. > > And why can't the client discern resources? If the client sees the > Atom feed, it goes one way. If it sees the Neutrino feed, another way. > Properly developed, the client can jump back and forth across both > types. Heck, say you had load balanced servers, and one supported atom > and the other neutrino -- you hadn't updated the second one yet. The > client can transparently jump back and forth between the formats as it > bounces across the servers, because the client IS leveraging the media > types, and because the server is providing the links to move forward, > rather than the client trying to shove Atom links down the throat of a > Neutrino server. > > So, in that sense, I think evolution can be handled pretty elegantly. > >> In the human Web the same problem exists but the capabilities of >> the human >> brain to react to change (and follow previously unexpected links) >> does put >> the server in the position to evolve much more independently. I am >> sure that >> Amazon could mess around with the whole shop and the way ordering >> works and >> the user would still be able to buy a book. This is the huge >> benefit of REST >> - it just does not apply that easily to the M2M case. > > But think about that. > > That "mess around", from the user's POV, would be cosmetic. 
They rearranged > the screen, the "add to cart" button is on the left now, and "checkout > button" below it, or whatever. > > The links those buttons go to are immaterial. Nobody cares. > > Now the content sent to those links, those matter. If amazon renamed > "itemNo" to "productUUID" then, you know, shame on them. Your M2M > client is toast. But the semantics conveyed by the link rels > "add-to-cart", "checkout", those haven't changed (unless they renamed > those as well -- more silliness). They could add "add-to-wish-list", > and your client may not know what that is, but it probably doesn't care > either. > >> And I think this needs to be said clearly and honestly and it needs >> to be >> theoretically captured. > > In an M2M scenario, ALL APIs are "tightly coupled". That's just the > fact of it. APIs are contracts. Change the contract, bad things happen. > Design APIs with growth and flexibility in mind, and you can have a > more forgiving client/server experience. > > By using media types and HATEOAS, the clients retain a bit of > discoverability. It's not so much discoverability, as it is state > awareness. It can "know" where it is at any point of the process, and > it "knows" where to go from there. If it follows the links given with > the types specified, the client will be told where to go next. > > This is key. The client isn't "waiting to do the next thing". It's not > got a "list of things to do", and going through them one by one. > Rather it has a list of guideposts that it's told to follow, and the > actual PATH it takes isn't known to the client until it reaches a goal > post. > > Now you can code all of that in to the client, it "knows" where to go, > it build URLs, and when things change, the client breaks. Because the > client is a stupid client and while it functioned, it did it all the > wrong way. > > So, that's, to me, where some of the robustness of the whole thing > comes from, even in a M2M world. 
> > Regards, > > Will Hartung > (willh@...) > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
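The "kind server" content-negotiation scenario in this exchange can be sketched as follows. "application/neutrino+xml" is the thread's made-up successor format; the preference order and parsing here are simplified assumptions (no q-values).

```python
# Sketch of the conneg scenario: a kind server keeps serving Atom when the
# client asks for it, offers the newer format to clients accepting */*,
# and falls back to a 406 only when nothing matches.

SUPPORTED = ["application/neutrino+xml", "application/atom+xml"]  # server preference order

def negotiate(accept_header):
    """Return the media type to serve, or None (caller responds 406
    with the list of supported types)."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "*/*" in accepted:
        return SUPPORTED[0]  # client takes anything: serve the preferred format
    for media_type in SUPPORTED:
        if media_type in accepted:
            return media_type
    return None

print(negotiate("application/atom+xml"))  # application/atom+xml
print(negotiate("*/*"))                   # application/neutrino+xml
print(negotiate("text/html"))             # None -> 406
```

The "mean" server in the scenario is simply one whose SUPPORTED list no longer contains application/atom+xml at all.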
On Dec 17, 2009, at 3:12 AM, Will Hartung wrote: > In an M2M scenario, ALL APIs are "tightly coupled". That's just the > fact of it. APIs are contracts. Change the contract, bad things happen. > Design APIs with growth and flexibility in mind, and you can have a > more forgiving client/server experience. Agreed. I just think that we are making a mistake when we claim that REST magically makes M2M interaction have the same amount of loose coupling as human-to-machine interactions. Much of the reluctance against REST in an enterprise context IMO results from the actually existing contract in M2M scenarios notoriously being talked away. If you tell Joe developer to evolve that service, you better be able to tell him what exactly he can do and what not. He should not have to call the client owners because not having to bring the client and server owners together when evolving is one of *the* top advantages of REST. > > By using media types and HATEOAS, the clients retain a bit of > discoverability. It's not so much discoverability, as it is state > awareness. It can "know" where it is at any point of the process, and > it "knows" where to go from there. If it follows the links given with > the types specified, the client will be told where to go next. Yes, that is true. But it also conflicts with the state machine that the client itself has. It is not entirely driven by the service (as the human user is). It at least makes use of a set of partially ordered goals (e.g. you must order before you cancel an order, you must order before you pay, etc.). This set of partially ordered goals is in a way exactly what e.g. AtomPub establishes. The goal order is specified by saying what outgoing transitions (== next available goals) to expect after completing a certain goal. > > This is key. The client isn't "waiting to do the next thing". It's not > got a "list of things to do", and going through them one by one. 
> Rather it has a list of guideposts that it's told to follow, and the > actual PATH it takes isn't known to the client until it reaches a goal > post. Yeah - good line of thought. OTOH, I have not managed to code a client that does not eventually have its own state machine that inevitably drives the client's program flow. No matter how much you make the client be driven by the server. > > Now you can code all of that in to the client, it "knows" where to go, > it build URLs, and when things change, the client breaks. Because the > client is a stupid client and while it functioned, it did it all the > wrong way. Suppose you code a client to an AtomPub server that has a collection of orders and you want the client to calculate the average order amount. Can you show me how you do that without expecting the GET on the order collection to return an Atom feed (or any other *previously* known media type)? GET /service-doc ... pick order collection based on category ... GET /orders And now - how do you code from here without relying on the fact that AtomPub tells you that collections come as Atom feeds? Jan > > So, that's, to me, where some of the robustness of the whole thing > comes from, even in a M2M world. -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
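Jan's average-order-amount example can be sketched as below. It deliberately relies on AtomPub's promise that the collection comes back as an Atom feed; the "amount" extension element and its namespace are assumptions for illustration, since Atom itself carries no order data.

```python
# Sketch of Jan's example: a client GETs an order collection, relies on
# AtomPub's guarantee that collections are Atom feeds, and averages an
# amount carried in a (hypothetical) extension element per entry.
from xml.etree import ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
ORDERS = "{http://example.org/orders}"  # hypothetical extension namespace

def average_order_amount(feed_xml):
    """Average the o:amount extension values across all Atom entries."""
    amounts = [float(entry.find(ORDERS + "amount").text)
               for entry in ET.fromstring(feed_xml).iter(ATOM + "entry")]
    return sum(amounts) / len(amounts)

feed = """<feed xmlns="http://www.w3.org/2005/Atom"
               xmlns:o="http://example.org/orders">
  <title>Orders</title>
  <entry><title>Order 1</title><o:amount>10.00</o:amount></entry>
  <entry><title>Order 2</title><o:amount>30.00</o:amount></entry>
</feed>"""

print(average_order_amount(feed))  # 20.0
```

The point of the sketch: both the feed structure (from AtomPub) and the amount element (from some out-of-band agreement) are expectations fixed at implementation time, which is exactly Jan's argument.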
Solomon Duskis wrote: > > While I agree with you about the "missing link" of the RDF "RESTful" > API, your statement doesn't supply an example of a hypertext-constraint- > compliant API, simply a link to a rant. There are degrees of HATEOAS > compliance in various APIs, but nothing that strikes me as > particularly fully featured. > Yes, but Roy's rant is, IMHO, the most helpful thing I've read -- it really made all the pieces fall together for me. My approach, instead of snarkily linking to well-developed GET/POST-only HTML 4.01 websites that get all the fundamentals correct, has been to develop what I call the "REST Discipline" (thread eventually, including nifty single-image chart), which is a method of getting any REST project started off on the right foot by emphasizing the identification of resources, and providing an iterative process to discover what those resources are in the design phase -- which is used as the key production guideline going forward. My thinking on this evolves from two foundational posts I've made, meant to be taken as a pair, now: http://tech.groups.yahoo.com/group/rest-discuss/message/13322 http://tech.groups.yahoo.com/group/rest-discuss/message/13543 When I look at a new REST claimant, the fundamental problems I see can mostly be explained by having started off with URI design, and as a consequence, have no real notion of what their resources actually _are_. The nifty chart for the REST Discipline is meant as a guide in the identification, but not naming, of resources. Only by getting a good idea of link relations, resource "types", media types and methods to be used, is it possible to discover what the actual resources of a system are. Only then can work progress to the URI allocation scheme, i.e. the _naming_ of the identified resources. > > Some more examples that I've seen that approach HATEOASness are: > > - Jim Webber's work with <atom:link> and rel values to express > workflow. I think his work is a great start, but doesn't go far enough. 
> - Sun's JSon based Kenai (Cloud Management API). Again it has > elements of HATEOASness, but doesn't really have a "you only use > in-band communication" feel > Actually, although it's been a few months since I looked at it, Kenai was my inspiration for working out how I go about developing a REST system, because it's so completely far away from what I come up with working on a hypothetical cloud API that I don't know where to start offering any help. Other than to say, try it again using the REST Discipline approach, and see if we don't wind up pretty close to one another, working independently. See if that result doesn't just scale better, make more intuitive sense, and quickly get adopted by multiple vendors plus a swarm of open-source projects. This evolved into using a hypothetical cloud API as the example for the REST Discipline. Following my method resulted in the discovery that the application transcends clouds to include all types of web hosting plans, be they cloud, VPS, dedicated or collocated server. Shouldn't I be able to reboot my collocated server using the same API I use to reboot my VPS as I use to reboot my cloud instance as I use to reboot a zone inside my cloud instance? So you can't call the central resource type a "box" or a "server" or even a "virtual machine"... This example illustrates the reasons for, and a method of, creating a new XML subtype. Which I define by co-opting various XHTML modules for paragraphs, lists, links, xforms, tables and such, while adding several block-level elements and adapting WAI-ARIA attributes (like role) into the mix to support accessibility. Also, a dual-root-element a la Atom whereby the central resource type may stand alone, or be listed as members of a collection. The new block-level elements are mostly taken from existing vendor-specific media types, a couple from VMware, etc.. 
The application/xhtml+xml media type is out, because of the non-XHTML elements introduced which are specific to a general webhosting API. So a new media type is proposed, application/webhost+xml. Besides describing a new type of document, the media type introduces a new HTTP method: RESET. Initially, this was REBOOT, an operation that just plain doesn't model well with any existing HTTP method. But, REBOOT is too application-specific, whereas RESET, like PATCH, stands alone and provides a useful new generic-interface semantic that cleanly encompasses a variety of existing or upcoming needs like remote-power-cycling vs. remote-resetting of a webhost of some sort. An upcoming need may be to RESET a representation itself... HTML 5 adds some interesting new features to the client side. As opposed to reloading a page, the user intent may be to clear the application cache (in which case the RESET method is targeted at the client's cache connector itself, rather than the server) and re-start all scripts without checking for fresh content. Who knows? My point is, RESET isn't limited to use in a media type specific to the webhosting problem area. So, my iterative REST Discipline approach (which I'll eventually start a forum about, and link to it here) is all about going through an entire process of using standard methods and media types. But, it uses as an example a system which exposes itself (through my process) as one which requires both a new standard method, and a new standard media type. The examples illustrate the derivation of the new method and media type through implementation of a simulator for a mythical webhosting operation offering VPS, dedicated, collocated and cloud hosting accounts, which can be manipulated by administrators or customers in various ways according to privilege using HTTP-Digest authentication. But, the method which leads to the derivation of a new media type also treats that path as a last-ditch approach. 
Bear in mind, the new media type is only one of many media types used in the resulting API. I'm from the old school: I have always had a server in my office, and attached to it has always been a spiral-bound notebook where I've inked in everything I've ever done to it. So, a media type for virtual hosts should also include an administrator's personal log. This task is delegated to Atom and Atom Protocol in the API via the appropriate <link> elements embedded in the new media type. A collection of all the different webhosts and their IP addresses contained on a physical server would be nice: it could show status and allow individual or bulk shutdown, or Allow: RESET on the collection to power-cycle the entire physical server, and/or Allow: RESET on a specific IP, with the entity body determining reboot vs. poweroff-wait-poweron. A collection could be a mashup across different providers, even. But, I ramble. > > IMHO, there are plenty of great success stories with non-HATEOAS > "REST" APIs, but I still haven't seen anything that resembles Roy's > REST in what we're calling REST APIs. > Don't anybody take this the wrong way, it's just a lighthearted attempt at humor regarding non-hypertext-driven APIs: http://www.youtube.com/watch?v=C7OJvv4LG9M In the end, the Wright Brothers get it Right, Brother! By discovering the fundamental architectural constraints that define airplanes to this day. If you're missing something crucial like an elevator or a rudder, it might take off, but it just won't fly. Exactly why I've undertaken the task not just of creating my API, but documenting the exact thoughts I have and process I go through as I create it. My thoughts and processes don't change from project to project, only the results do, as a function of being applied to different problem areas. 
So if I sit down to write a Cloud API one day, then drop it for a year and start over without looking at my previous work, the result will be the same because I have a disciplined approach to REST development which I consistently apply. I hope that, when it's done, the quality of REST APIs will progress in the proper direction as a result of following a method (the REST Discipline) that's strongly grounded in the fundamentals, and gets these APIs started off on the right foot by fanatically avoiding any discussion of URI allocation scheme until the project is off the drawing board and into the prototype. -Eric
Jan Algermissen wrote: > On Dec 17, 2009, at 11:41 AM, Mike Kelly wrote: > > >> Jan Algermissen wrote: >> >>> On Dec 16, 2009, at 9:45 PM, mike amundsen wrote: >>> >>> >>> >>>> <snip> >>>> ...how did you know that it makes sense to write code that looks for >>>> the rel="next" in the first place? >>>> </snip> >>>> >>>> The same way developers building Web browser clients know to write >>>> code that looks for the rel="stylesheet" in the <link> element [1], >>>> [2], [3]. >>>> >>>> It sounds like this line of questioning is about how to go about >>>> properly documenting media type semantics in a way that is helpful >>>> at >>>> design time for those building clients. >>>> >>>> >>> Yep. And AtomPub does a good job and serves as a lucid example. My >>> point is that such media type semantics involve classification of >>> resources >>> >>> >> No, those semantics are classifying link relations not resources - >> which >> is completely different >> > > No, not really. Saying that any resource that is the target of a 'foo' > link has certain properties is essentially expressing a type. Yes - a type of link relation, which is not the same thing as a typed target resource. > (essentially AtomPub defines the state transitions that are available > after GETing a member resource.) > A resource should only be considered a 'member' if it has been linked as such within the context of a given application flow, not because it has an intrinsic type. > If AtomPub did not establish the member type, it could not describe > the expectations the client can make. (And coding the client would be > impossible). > An application protocol can describe application flow via link relations, so there is no requirement for typed resources. Isn't this essential to the hypertext constraint? > >> because the resource's significance, within an >> application, is derived from the context in which its state was >> retrieved/transfered i.e. the 'application flow' leading up to it. 
>> >> > > Yes, but that context is essentially a type. > The context is provided by your client's application state, not a typed resource. - Mike
On Dec 17, 2009, at 3:14 PM, Mike Kelly wrote: > >> >>> because the resource's significance, within an >>> application, is derived from the context in which its state was >>> retrieved/transferred i.e. the 'application flow' leading up to it. >>> >>> >> >> Yes, but that context is essentially a type. >> > > The context is provided by your client's application state, not a > typed resource. > Well... but the server sent the representation in the first place, so it is effectively telling me, for example: "/customers/776 is a member resource". The server tells me the type (or kind or class or category or rdf:type or whatever you name it). And M2M client code relies on such classification information when coding for the next available transitions (goals). Jan > - Mike -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Thu, Dec 17, 2009 at 5:03 AM, Jan Algermissen <algermissen1971@...> wrote: > But what enables you to say the former is kind and the latter is mean? You > say this based on AtomPub saying that collections are represented as Atom > feed documents. Otherwise, there would simply be no expectation of receiving > Atom feeds as a response to a GET to the collection. No, I was addressing the evolutionary capability of the service. The former service is "kind" because it honors its existing contracts with older clients that aren't up to speed with the service changeover from atom+xml to neutrino+xml. The premise being that the actual service being performed and made available with neutrino+xml is similar enough to atom+xml that the atom+xml version was still worth supporting. Supporting both lets the service evolve towards new functionality while leaving a working network system behind by supporting the now old, deprecated types. A nice feature is that the client can tell the service what it is sending (via Content-Type), while at the same time telling the server what it can expect back (via Accept). This conneg ability lets you build both robust clients and servers. While both are implementing strict interpretations of the content types, the ability to support multiple types for similar domains, notably evolving domains, along with being able to send along where you stand in the evolution of the service (i.e. are you running atom or neutrino) gives, I think, an overall more robust system, especially when communicating with multiple peers which are at different levels of implementation. If the server, in this case, simply shuts down the atom support in favor of neutrino support, then it has just unceremoniously cut out a lot of existing clients. Obviously any external communication regarding the service change isn't part of this discussion. All sorts of valid reasons for a hard cut off. 
But, IMHO, combining the negotiation aspect with "upwardly" (at least at a meta level) compatible formats adds robustness to the system, and makes changes less disruptive. Regards, Will Hartung (willh@...)
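Will's "kind server" scenario can be sketched in code. This is my own illustration, not anything from the thread: a server-side negotiator that keeps honoring the older application/atom+xml contract while preferring the newer (hypothetical) application/neutrino+xml whenever the client's Accept header allows it.

```python
# Sketch: server-side selection between an old and a new media type.
# Both "application/neutrino+xml" and the dispatch logic are hypothetical.

SUPPORTED = ["application/neutrino+xml", "application/atom+xml"]  # newest first

def pick_representation(accept_header):
    """Return the best supported media type for a raw Accept header,
    or None if nothing matches (the caller would then answer 406)."""
    accepted = {}
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # per HTTP, a media range without a q parameter defaults to 1
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        accepted[mtype] = q
    best, best_q = None, 0.0
    for mtype in SUPPORTED:  # SUPPORTED order breaks q-value ties
        q = accepted.get(mtype, accepted.get("*/*", 0.0))
        if q > best_q:
            best, best_q = mtype, q
    return best
```

An old client sending only `Accept: application/atom+xml` keeps working; a client that prefers neutrino gets the new type; nothing is unceremoniously cut off until the server owner deliberately drops the old entry from SUPPORTED.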
"And now - how do you code from here without relying on the fact that AtomPub tells you that collections come as Atom feeds?" When interrogating the service document, check out the accept element in the collection: http://bitworking.org/projects/atom/rfc5023.html#rfc.section.8.3.4. You mentioned categories, which is another means, but it may be a red herring in the M2M example. Also, when you say GET /orders, you're implying an accept header of */*. You need the equivalent of a human brain to process whatever is returned. Instead, do the following: GET /orders Accept: application/order+xml; q=0.8, image/png Without those constraints on the request, I could understand the line of questioning about how a service can get a document it knows how to process. -Noah On Thu, Dec 17, 2009 at 5:21 AM, Jan Algermissen <algermissen1971@...>wrote: > > On Dec 17, 2009, at 3:12 AM, Will Hartung wrote: > > In an M2M scenario, ALL APIs are "tightly coupled". That's just the >> fact of it. APIs are contracts. Change the contract, bad things happen. >> Design APIs with growth and flexibility in mind, and you can have a >> more forgiving client/server experience. >> > > Agreed. I just think that we are making a mistake when we claim that REST > magically makes M2M interaction have the same amount of loose coupling as > human to machine interactions. Much of the reluctance against REST in an > enterprise context IMO results from the actually existing contract in M2M > scenarios notoriously being talked away. > > If you tell Joe developer to evolve that service, you better be able to > tell him what exactly he can do and what not. He should not have to call the > client owners because not having to bring the client and server owners > together when evolving is one of *the* top advantages of REST. > > > >> By using media types and HATEOAS, the clients retain a bit of >> discoverability. It's not so much discoverability, as it is state >> awareness. 
It can "know" where it is at any point of the process, and >> it "knows" where to go from there. If it follows the links given with >> the types specified, the client will be told where to go next. >> > > Yes, that is true. But it also conflicts with the state machine that the > client itself has. It is not entirely driven by the service (as the human > user is). It at least makes use of a set of partially ordered goals (e.g. > you must order before you cancel an order, you must order before you pay, > etc.). > > This set of partially ordered goals is in a way exactly what e.g. AtomPub > establishes. The goal order is specified by saying what outgoing transitions > (== next available goals) to expect after completing a certain goal. > > > >> This is key. The client isn't "waiting to do the next thing". It's not >> got a "list of things to do", and going through them one by one. >> Rather it has a list of guideposts that it's told to follow, and the >> actual PATH it takes isn't known to the client until it reaches a goal >> post. >> > > Yeah - good line of thought. OTOH, I have not managed to code a client that > does not eventually have its own state machine that inevitably drives the > client's program flow. No matter how much you make the client be driven by > the server. > > > >> Now you can code all of that into the client, it "knows" where to go, >> it builds URLs, and when things change, the client breaks. Because the >> client is a stupid client and while it functioned, it did it all the >> wrong way. >> > > Suppose you code a client to an AtomPub server that has a collection of > orders and you want the client to calculate the average order amount. Can > you show me how you do that without expecting the GET on the order > collection to return an Atom feed (or any other *previously* known media > type)? > > > GET /service-doc > > ... pick order collection based on category ... 
> > GET /orders > > And now - how do you code from here without relying on the fact that > AtomPub tells you that collections come as Atom feeds? > > > > > Jan > > > > > > > >> So, that's, to me, where some of the robustness of the whole thing >> comes from, even in a M2M world. >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > >
On Dec 18, 2009, at 8:16 AM, Noah Campbell wrote: > "And now - how do you code from here without relying on the fact > that AtomPub tells you that collections come as Atom feeds?" > > When interrogating the service document, check out the accept > element in the collection: http://bitworking.org/projects/atom/rfc5023.html#rfc.section.8.3.4 > . You mentioned categories, which is another means, but it may be a > red herring in the M2M example. > > Also, when you say GET /orders, you're implying an accept header of > */*. You need the equivalent of a human brain to process whatever is > returned. Instead, do the following: > > GET /orders > Accept: application/order+xml; q=0.8, image/png Hmm - but how do I know that it makes sense to ask for application/order+xml??? And likewise: how do I know that it makes sense to ask for application/atom+xml? Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
If a user goes to a "personal information" web page with a whole bunch of forms: 1) update name (textbox + button) 2) update email (textbox + button) 3) update address (textboxes + button) and a user updates only one of those forms (text + button click)... That would still be a legitimate RESTful interaction I don't get the gist of your argument. -Solomon On Thu, Dec 17, 2009 at 5:42 AM, Eric J. Bowman <eric@...>wrote: > > > Craig McClanahan wrote: > > > > Many RESTafarians frown at doing "partial updates" (i.e. only update > > the fields that are actually included in the request body) with a PUT > > -- I tend towards the pragmatic view and used this in several APIs -- > > but when you're doing a POST I don't see a reason why it should not > > make sense. Letting the client change whatever combination of fields > > they need to in *one* request (and therefore probably a single > > database transaction) would seem reasonable to me. > > > > Ack! Failing to make your messages self-descriptive isn't pragmatic. > If I have a distributed hypermedia system, and I want it to gain the > benefits of REST, then falling short of REST in the implementation is > anti-pragmatic because what I'm left with is some other architectural > style that isn't guaranteed to exhibit the desirable properties I was > after when I chose REST to meet them. > > Consensus on this list for years, has been that PUT is not used for > partial updates. While the server isn't required to honor everything > in a PUT, for example a server might not change an atom:id even if it's > updated in a PUT, this is not some loophole that allows PUT to be used > for partial updates. > > In a REST system, the only thing that matters regarding methods is that > they are used according to their definitions. 
PUT has update-by-replacement > (or creation) semantics, PATCH has partial-update semantics > and has been in HTTP 1.1 from the beginning (look at the obsolete RFCs, > then the comments by RFC 2616's authors about how the lack of inclusion > of PATCH was due to time constraints and lack of implementation, but > was not meant to suggest that PATCH had been removed from HTTP), and is > now reinforced by its own RFC. > > So, to overlook the method with the required semantics of partial-update > (PATCH) and assign those semantics to PUT which has different > semantics entirely, is to use PUT other than what it was intended for. > This means that out-of-band information is driving your PUT > transaction, and the semantics of the interaction are not visible > because the messaging is not self-descriptive. Since the semantics of > POST are generic, assigning it to cover for PATCH is acceptable. But > not PUT -- doing that is failing to apply the uniform interface > constraint. > > -Eric > > >
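A sketch of the contrast Eric describes (the URI and the vendor media types below are hypothetical, not from any spec): a PUT carries the complete replacement representation, while a PATCH carries only a description of the change.

```http
# Full replacement: the entity body IS the new state of the resource.
PUT /people/101 HTTP/1.1
Content-Type: application/vnd.example.person+xml

<person>
  <firstName>TONINHO</firstName>
  <lastName>METRALHA</lastName>
  <email>toninho@example.org</email>
</person>

# Partial update: the entity body describes a change to apply.
PATCH /people/101 HTTP/1.1
Content-Type: application/vnd.example.person-patch+xml

<patch>
  <email>toninho@example.org</email>
</patch>
```

Both messages are self-descriptive: an intermediary seeing the PUT knows the whole representation was replaced, and one seeing the PATCH knows only part of it changed.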
Solomon Duskis wrote: > > > If a user goes to a "personal information" web page with a whole bunch > of forms: > > 1) update name (textbox + button) > 2) update email (textbox + button) > 3) update address (textboxes + button) > > and a user updates only one of those forms (text + button click)... That > would still be a legitimate RESTful interaction Yes, but if this were done with PUT it would be done by each of them updating a single resource. Doing so may also update part of another resource—there may be all manner of interesting relationships between resources—but each would still be a full update of a URI-identified resource.
Hello guys, The question is somehow related so I added it to the same thread: If I have a resource (all clients) which is a set of my clients, its hypermedia representation either contains only a set of links, a set of links with extra client information, or all information with no hypermedia. The third option does not make sense, I can see that. What about the other two? The first one is a huge set of metadata and the second one is a huge set of metadata with extra client information: updates to each client resource will affect this resource - a loss of visibility? If, in the second option, every client has its own rel="self" link, then there is no such loss? Regards Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/ On Fri, Dec 18, 2009 at 9:45 AM, Jon Hanna <jon@...> wrote: > > > Solomon Duskis wrote: > > > > > > If a user goes to a "personal information" web page with a whole bunch > > of forms: > > > > 1) update name (textbox + button) > > 2) update email (textbox + button) > > 3) update address (textboxes + button) > > > > and a user updates only one of those forms (text + button click)... That > > would still be a legitimate RESTful interaction > > Yes, but if this were done with PUT it would be done by each of them > updating a single resource. Doing so may also update part of another > resource—there may be all manner of interesting relationships between > resources—but each would still be a full update of a URI-identified > resource. > > >
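Guilherme's second option might look something like this (element names, URIs and the summary field are purely illustrative): each member carries its own rel="self" link alongside a summary of its state, so a client can always reach the authoritative per-client resource when the summary goes stale.

```xml
<clients>
  <client>
    <link rel="self" href="http://example.org/clients/101"/>
    <name>Maria</name> <!-- summary only; GET the self link for full, current state -->
  </client>
  <client>
    <link rel="self" href="http://example.org/clients/102"/>
    <name>Joao</name>
  </client>
</clients>
```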
Jan Algermissen wrote: > > On Dec 17, 2009, at 3:14 PM, Mike Kelly wrote: > >> >>> >>>> because the resource's significance, within an >>>> application, is derived from the context in which its state was >>>> retrieved/transferred i.e. the 'application flow' leading up to it. >>>> >>>> >>> >>> Yes, but that context is essentially a type. >>> >> >> The context is provided by your client's application state, not a >> typed resource. >> > > Well... but the server sent the representation in the first place, so > it is effectively telling me, for example: "/customers/776 is a member > resource". The server tells me the type (or kind or class or category > or rdf:type or whatever you name it). > > And M2M client code relies on such classification information when > coding for the next available transitions (goals). > > Jan The transitions are represented as link relations, and are classified by the hypermedia that forms your application. That is not the same thing as classifying resources. Drawing resources into an application flow will imply that the resource has a certain set of characteristics when approached from a particular context, and that is all that is necessary to treat it as such. The resource doesn't need an inherent/intrinsic type because the link relation leading to its state transfer provided everything necessary to 'classify' it. A client should start from an entry point and advance its state by following links. If your client wishes to persist a reference to a particular resource or application state, it should do so by storing the URI against the 'classification' derived from *its own application state* with respect to the protocol in question, and not derived from a type identified by the resource itself. 
E.g: State 1: GET /entry-point <link rel="blog" href="/123asdf" /> State 2: GET /123asdf <link rel="post" href="/4560456456uiop" /> State 3: GET /4560456456uiop <title>Hello world</title> <content>Foo Bar</content> There is no inherent type of resource /4560456456uiop, and yet a machine client with understanding of this simple blog protocol will know exactly what 'classification' it is - from the context implied by the flow in my blog application; (entry-point) -> blog -> post. - Mike
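Mike's three-state flow can be sketched as a tiny client that knows only the entry-point URI and the link relations of the (hypothetical) blog protocol; every other URI is discovered in-band. The in-memory PAGES dict and the regex-based link extraction below are my own stand-ins for HTTP GETs and real markup parsing.

```python
# Sketch: a client driven by link relations rather than URI structure or
# resource "types". PAGES simulates the three GET responses from the thread.
import re

PAGES = {
    "/entry-point": '<link rel="blog" href="/123asdf" />',
    "/123asdf": '<link rel="post" href="/4560456456uiop" />',
    "/4560456456uiop": "<title>Hello world</title><content>Foo Bar</content>",
}

def get(uri):
    """Stand-in for an HTTP GET returning a representation."""
    return PAGES[uri]

def follow(doc, rel):
    """Return the href of the first link with the given relation, or None."""
    m = re.search(r'<link rel="%s" href="([^"]+)"' % re.escape(rel), doc)
    return m.group(1) if m else None

# Application flow: (entry-point) -> blog -> post. The client never
# constructs or inspects a URI; it only follows advertised relations.
doc = get("/entry-point")
doc = get(follow(doc, "blog"))
doc = get(follow(doc, "post"))
title = re.search(r"<title>([^<]+)</title>", doc).group(1)
```

The server can rename /4560456456uiop at will; as long as the "blog" and "post" relations keep appearing, this client keeps working.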
[By now: sorry to keep hammering on this - I am not just trying to be difficult] On Dec 18, 2009, at 1:34 PM, Mike Kelly wrote: > > The transitions are represented as link relations, and are > classified by the hypermedia that forms your application. That is > not the same thing as classifying resources. > > Drawing resources into an application flow will imply that the > resource has a certain set of characteristics when approached from a > particular context, and that is all that is necessary to treat it as > such. The resource doesn't need an inherent/intrinsic type because > the link relation leading to its state transfer provided everything > necessary to 'classify' it. Yep, sure. I never suggested anything like inherent/intrinsic (I'd use 'intentional') typing. I was talking (or at least trying to) about classification by context. (Along the lines of how AtomPub defines member resources to be those resources whose identifier is listed in a collection). So we seem to be in agreement about that. My point is that REST forbids clients to make any assumptions based on such 'classification'. So, a client should not rely on an assumption that AtomPub collections respond to GET with (at least) application/atom+xml. But if there is no such assumption - you cannot code the GET and subsequent operations on the response body. You just could not code - GET /entries - iterate over entries and do this and that because no prior assumption about the collection is allowed. Could be that the collection always returns image/jpeg with a picture of the collection. Yes, AtomPub does say that a feed is returned - but that violates REST. jan > > A client should start from an entry point and advance its state by > following links. 
> - Mike -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Hello guys, Safari sends the following accept header, due to WebKit's code [1]: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 I could not find in section 12 or in the Accept header definition in HTTP 1.1 what to do if there is more than one content-type with the same q-value. In the above example, it seems like the server is free to decide whether to send application/xml, application/xhtml+xml or text/html. Any opinions on that? Should it be followed left to right (application/xml first)? Should the server decide? Regards [1] http://www.newmediacampaigns.com/page/webkit-team-admits-accept-header-error Guilherme Silveira Caelum | Ensino e Inovação http://www.caelum.com.br/
On Fri, Dec 18, 2009 at 8:10 AM, Guilherme Silveira <guilherme.silveira@...> wrote: > Hello guys, > > Safari sends the following accept header, due to webkit's code [1]: > application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 > > I could not find in section 12 or the Accept header definition in http > 1.1 what to do if there is more than one content-type with the same > q-value. In the above example, it seems like the server is free to > decide whether to send application/xml, application/xhtml+xml or > text/html. > > Any opinions on that? Should it be followed left to right > (application/xml first)? Should the server decide? Does this, from section 14, not apply here? "If more than one media range applies to a given type, the most specific reference has precedence. "[1] In which case, your precedence would be: 1) application/xhtml+xml 2) application/xml 3) text/html 4) text/plain 5) image/png 6) */* --tim [1] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1
On Dec 18, 2009, at 2:27 PM, Tim Williams wrote: > On Fri, Dec 18, 2009 at 8:10 AM, Guilherme Silveira > <guilherme.silveira@...> wrote: >> Hello guys, >> >> Safari sends the following accept header, due to webkit's code [1]: >> application/xml,application/xhtml+xml,text/html;q=0.9,text/ >> plain;q=0.8,image/png,*/*;q=0.5 >> >> I could not find in section 12 or the Accept header definition in >> http >> 1.1 what to do if there is more than one content-type with the same >> q-value. In the above example, it seems like the server is free to >> decide whether to send application/xml, application/xhtml+xml or >> text/html. >> >> Any opinions on that? Should it be followed left to right >> (application/xml first)? Should the server decide? > > Does this, from section 14, not apply here? No, that refers to text/html;level=1 having precedence over text/html because the former is more specific than the latter. Jan > > "If more than one media range applies to a given type, the most > specific reference has precedence. "[1] > > In which case, your precedence would be: > 1) application/xhtml+xml > 2) application/xml > 3) text/html > 4) text/plain > 5) image/png > 6) */* > > > --tim > > [1] - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1 -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Check out the mimeparse project [1]. The last thread there talks about just this case, handling clients where more than one media-type has the same q-value [2]. The most recent build orders the acceptable items in q-value, alpha order and takes the first. FWIW, I have a mod that also lets the server decide based on a resource preference (the default for the resource, if one is given). I also have a mod (in a mess, right now) that handles some "broken" agents. For example, MS-Excel sends an accept header that prefers HTML over CSV and I usually ignore that and send CSV anyway. mca http://amundsen.com/blog/ [1] http://code.google.com/p/mimeparse/ [2] http://groups.google.com/group/mimeparse-dev/browse_thread/thread/2ec7e38517fad9be On Fri, Dec 18, 2009 at 08:10, Guilherme Silveira <guilherme.silveira@...> wrote: > Hello guys, > > Safari sends the following accept header, due to webkit's code [1]: > application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 > > I could not find in section 12 or the Accept header definition in http > 1.1 what to do if there is more than one content-type with the same > q-value. In the above example, it seems like the server is free to > decide whether to send application/xml, application/xhtml+xml or > text/html. > > Any opinions on that? Should it be followed left to right > (application/xml first)? Should the server decide? > > Regards > > [1] http://www.newmediacampaigns.com/page/webkit-team-admits-accept-header-error > > Guilherme Silveira > Caelum | Ensino e Inovação > http://www.caelum.com.br/
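A rough sketch of the tie-breaking behavior described above (highest q-value first, alphabetical order among ties, take the first); the Accept parsing here is deliberately naive and not a full RFC 2616 parser:

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs.
    Missing q defaults to 1.0 per RFC 2616."""
    ranges = []
    for part in header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        mtype, q = pieces[0], 1.0
        for param in pieces[1:]:
            if param.startswith("q="):
                q = float(param[2:])
        ranges.append((mtype, q))
    return ranges

def pick(header):
    """Break q-value ties alphabetically, as the mimeparse build does."""
    ranges = parse_accept(header)
    return sorted(ranges, key=lambda r: (-r[1], r[0]))[0][0]

safari = ("application/xml,application/xhtml+xml,text/html;q=0.9,"
          "text/plain;q=0.8,image/png,*/*;q=0.5")
print(pick(safari))  # application/xhtml+xml
```

Here application/xml, application/xhtml+xml and image/png all tie at q=1.0, and the alphabetical rule picks application/xhtml+xml.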
Folks - There seem to be two schools of thought (I think) emerging with regard to the use of media types. One school of thought suggests coarse-grained media types, meaning using application/xml, text/html etc. The other promotes and shows examples of refined media types that reflect the domain in which they are being used, e.g. application/vnd.order+xml. I see possible pros/cons to both approaches but I'm not sure I've seen an actual discussion targeted at this matter (other than certain threads on other topics delving into the issue). Anyone want to take the first shot at backing a certain approach publicly? Thanks. Eb
On Fri, Dec 18, 2009 at 8:31 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 18, 2009, at 2:27 PM, Tim Williams wrote: > >> On Fri, Dec 18, 2009 at 8:10 AM, Guilherme Silveira >> <guilherme.silveira@...> wrote: >>> >>> Hello guys, >>> >>> Safari sends the following accept header, due to webkit's code [1]: >>> >>> application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 >>> >>> I could not find in section 12 or the Accept header definition in http >>> 1.1 what to do if there is more than one content-type with the same >>> q-value. In the above example, it seems like the server is free to >>> decide whether to send application/xml, application/xhtml+xml or >>> text/html. >>> >>> Any opinions on that? Should it be followed left to right >>> (application/xml first)? Should the server decide? >> >> Does this, from section 14, not apply here? > > > No, that refers to text/html;level=1 having precedence over text/html > because the former is more specific than the latter. I thought that was just an example. I read it to be "the most specific reference has precedence" - is "most specific" not defined by the media type itself here? For example, I read rfc3023 to mean that a type with a +xml should be considered 'more specific' than the generic xml. At least, it indicates that in section 7, but further confusing me it says in the appendix that they should be considered opaque and independent. If you have pointers to something that explains this better, I'd appreciate it... Thanks, --tim
--- In rest-discuss@yahoogroups.com, "amaeze77" <amaeze@...> wrote: > > Folks - > > There seem to be two schools of thought (I think) emerging with regards to the use of media types. One school of thought suggests coarse grained media types meaning using application/xml, text/html etc etc. While the other promotes and shows examples of refined media types that reflect the domain in which they are being used in e.g. application/vnd.order+xml. > > I see possible pros/cons to both approaches but I'm not sure I've seen an actual discussion targeted at discussing this matter (other than certain threads on other topics delving into the issue). > > Anyone want to take the first shot at backing a certain approach publicly? > > Thanks. > > Eb > http://tech.groups.yahoo.com/group/rest-discuss/message/6596 ;-) A key problem with using application/xml is that if everyone did that then how could you do any content negotiation between two distinct but XML-based formats? But I don't think you need to be overly specific with the media types either, i.e. use application/vnd.store+xml rather than application/vnd.customer+xml, application/vnd.order+xml, application/vnd.product+xml and so on; the latter is too fine-grained IMO. It becomes quite onerous to extend your application at that level of media type granularity. Where to draw the line? If it is important to make the type distinction in an intermediary or a connector, or in content negotiation, then differentiate in the headers (use two distinct media types). If the distinction is only important when you are processing the document anyway (e.g. in the client, server or an intermediary that does deep content processing such as format translation) then you don't need to make the distinction in your headers. That's my take anyways. I'm curious to hear what others think. Regards, Andrew
wahbedahbe wrote: > --- In rest-discuss@yahoogroups.com, "amaeze77" <amaeze@...> wrote: > >> Folks - >> >> There seem to be two schools of thought (I think) emerging with regards to the use of media types. One school of thought suggests coarse grained media types meaning using application/xml, text/html etc etc. While the other promotes and shows examples of refined media types that reflect the domain in which they are being used in e.g. application/vnd.order+xml. >> >> I see possible pros/cons to both approaches but I'm not sure I've seen an actual discussion targeted at discussing this matter (other than certain threads on other topics delving into the issue). >> >> Anyone want to take the first shot at backing a certain approach publicly? >> >> Thanks. >> >> Eb >> >> > > http://tech.groups.yahoo.com/group/rest-discuss/message/6596 ;-) > > A key problem with using application/xml is that if everyone did that then how could you do any content negotiation between two distinct but XML-based formats? > Perform conneg server side using other request headers as well as Accept, and use the Vary mechanism in the response to describe what's going on. - Mike
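Mike's suggestion (negotiate server-side on request headers beyond Accept, and advertise them via Vary so caches key on the right headers) might be sketched as follows; the X-Client-Profile header and the format choices are hypothetical, not something Mike names:

```python
def respond(request_headers):
    """Pick a representation from Accept plus a second, hypothetical
    request header, and list both in Vary to describe what happened."""
    accept = request_headers.get("Accept", "*/*")
    profile = request_headers.get("X-Client-Profile", "generic")
    if "application/xml" in accept and profile == "ordering":
        ctype, body = "application/xml", "<order/>"  # order-shaped XML
    else:
        ctype, body = "text/html", "<html><body>order</body></html>"
    # Vary tells caches the response depends on BOTH request headers.
    return {"Content-Type": ctype, "Vary": "Accept, X-Client-Profile"}, body

headers, body = respond({"Accept": "application/xml",
                         "X-Client-Profile": "ordering"})
print(headers["Vary"])  # Accept, X-Client-Profile
```

Two clients sending the same Accept but different profile headers get different representations, and an intermediary cache can still behave correctly because Vary names both inputs.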
On Fri, Dec 18, 2009 at 4:58 AM, Jan Algermissen <algermissen1971@...> wrote: > But if there is not such an assumption - you cannot code the GET and > subsequent operations on the response body. You just could not code > > - GET /entries > - iterate over entries and do this and that > > because no prior assumption about the collection is allowed. Could be that > the collection always returns image/jpeg with a picture of the collection. > > > Yes, AtomPub does say that a feed is returned - but that violates REST. Can you reference this violation? Just curious where this is coming from. Regards, Will Hartung (willh@...)
On Fri, Dec 18, 2009 at 6:58 AM, Jan Algermissen <algermissen1971@...> wrote: > My point is that REST forbids that clients make any assumptions based > on such 'classification'. Not trying to argue, just to understand: Where/how does REST forbid such assumptions? (I just re-scanned Roy's dissertation, and might have missed something, but did not see anything quite that hard and fast.) And what is the functional difference between prior assumptions and reacting to or requesting media type?
In my experience there are two things going on when dealing with media-types:
- determining which data format to use (XML, JSON, JPEG, ZIP, etc.)
- determining the semantics of the media-type in hand
I think these two things get mistakenly conflated in some discussions. Atom has clear semantics and agents can make solid predictions about whether they can support these semantics w/o looking inside the body. It is possible that additional semantic meaning will appear in an Atom body via rel values on LINK elements, but agents cannot guarantee they will understand these added semantics w/o looking inside the body. (X)HTML has clear semantics that are more general than Atom and for it, too, it is possible to add additional semantic information via rel values. There is no way for agents to determine whether they understand these rel value semantics without looking inside the body. Since XML and JSON have zero semantic value, agents can only determine whether they can safely handle the format - no prediction about the semantics can be made w/o looking inside the body. Both XML and JSON can support the rel value model for adding semantics. What does application/vnd.customer+xml or application/vnd.my-application+xml offer? The ability for agents to better predict whether the _semantic_ values of the body are understood w/o looking inside. What do you lose when adopting these media-types? Agents that do not have fore-knowledge of application/vnd.my-application+xml will simply reject the media-type and stop playing. FWIW, XML is "custom media-type friendly" as the specs adopted a "+xml" style for new media types. JSON has resisted such a style up to this point. I think this makes it more difficult to author new media-types with added semantic value when using the JSON data format.
mca http://amundsen.com/blog/ On Fri, Dec 18, 2009 at 09:56, wahbedahbe <andrew.wahbe@...> wrote: > --- In rest-discuss@yahoogroups.com, "amaeze77" <amaeze@...> wrote: >> >> Folks - >> >> There seem to be two schools of thought (I think) emerging with regards to the use of media types. One school of thought suggests coarse grained media types meaning using application/xml, text/html etc etc. While the other promotes and shows examples of refined media types that reflect the domain in which they are being used in e.g. application/vnd.order+xml. >> >> I see possible pros/cons to both approaches but I'm not sure I've seen an actual discussion targeted at discussing this matter (other than certain threads on other topics delving into the issue). >> >> Anyone want to take the first shot at backing a certain approach publicly? >> >> Thanks. >> >> Eb >> > > http://tech.groups.yahoo.com/group/rest-discuss/message/6596 ;-) > > A key problem with using application/xml is that if everyone did that then how could you do any content negotiation between two distinct but XML-based formats? > > But I don't think you need to be overly specific with the media types either. i.e. application/vnd.store+xml rather than application/vnd.customer+xml, application/vnd.order+xml and application/vnd.product+xml and so on is too fine grained IMO. It becomes quite onerous to extend your application at that level of media type granularity. > > Where to draw the line? If it is important to make the type distinction in an intermediary or a connector, or in content negotiation then differentiate in the headers (use two distinct media types). If the distinction is only important when you are processing the document anyways (e.g. in the client, server or an intermediary that does deep content processing such as format translation) then you don't need to make the distinction in your headers. > > That's my take anyways. I'm curious to hear what others think. 
> Regards, > > Andrew
On Dec 18, 2009, at 4:07 PM, Will Hartung wrote: > On Fri, Dec 18, 2009 at 4:58 AM, Jan Algermissen > <algermissen1971@...> wrote: >> But if there is not such an assumption - you cannot code the GET and >> subsequent operations on the response body. You just could not code >> >> - GET /entries >> - iterate over entries and do this and that >> >> because no prior assumption about the collection is allowed. Could >> be that >> the collection always returns image/jpeg with a picture of the >> collection. >> >> >> Yes, AtomPub does say that a feed is returned - but that violates >> REST. > > Can you reference this violation? Just curious where this is coming > from. IMHO, the hypermedia constraint forbids it. When there is a contract established between client and server that some resources will respond to GET with certain media types (e.g. AtomPub collections with application/atom+xml) then the client knows at design time which transitions will be available after the GET. IOW, it knows the state machine at design time. (Since it knows the kind of representation it will receive and this representation provides the next transitions). This contradicts state machine discovery at runtime and couples the server implementation in a way that REST actually aims to avoid. I also read it from Roy's blog entry: "A REST API should never have “typed” resources that are significant to the client. Specification authors may use resource types for describing server implementation behind the interface, but those types must be irrelevant and invisible to the client. The only types that are significant to a client are the current representation’s media type and standardized relation names." [1] Note that I am not saying this kind of coupling can be avoided in M2M interactions - I am just saying that it should be spoken about (instead of "hand-waved away") and that it should be properly understood.
The issue touches at least two questions: - How can I develop a client without an existing service? I should be able to do that because it would be insane to require client development to wait until service development is done. - How much freedom does someone in charge of evolving a service *really* have without breaking clients for sure. Jan [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > > Regards, > > Will Hartung > (willh@...) > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 18, 2009, at 4:10 PM, Bob Haugen wrote: > On Fri, Dec 18, 2009 at 6:58 AM, Jan Algermissen > <algermissen1971@...> wrote: >> My point is that REST forbids that clients make any assumptions based >> on such 'classification'. > > Not trying to argue, just to understand: > > Where/how does REST forbid such assumptions? (I just re-scanned Roy's > dissertation, and might have missed something, but did not see > anything quite that hard and fast.) And what is the functional > difference between prior assumptions and reacting to or requesting > media type? I think the last post is also answering this. Yes? It is in some ways as simple as this: An AtomPub service is required to serve application/atom+xml for collections. Otherwise clients would break, because the AtomPub spec tells them that they can expect that media type. Now, AtomPub is very unconstraining on the server and this might hide the issue of the coupling that happens. If you design for a problem space that involves more specific hypermedia semantics you end up with a coupling that is surprisingly similar to non uniform interfaces, because so many things are being said about resources at design time. You end up asking yourself: "Damn, what exactly is it that I actually *can* change about a service implementation without messing up the clients?" Sure, I can add a new extension here and add a new supported media type there - but significantly changing the state machine, for example? Not sure. Jan > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Fri, Dec 18, 2009 at 10:03 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 18, 2009, at 4:10 PM, Bob Haugen wrote: > >> On Fri, Dec 18, 2009 at 6:58 AM, Jan Algermissen >> <algermissen1971@...> wrote: >>> >>> My point is that REST forbids that clients make any assumptions based >>> on such 'classification'. >> >> Not trying to argue, just to understand: >> >> Where/how does REST forbid such assumptions? (I just re-scanned Roy's >> dissertation, and might have missed something, but did not see >> anything quite that hard and fast.) And what is the functional >> difference between prior assumptions and reacting to or requesting >> media type? > > I think the last post is also answering this. Yes? Not quite. From Roy's blog entry that you quoted: "The only types that are significant to a client are the current representation’s media type and standardized relation names." Standardized relation names (as I think you have argued in this discussion, and I agree) are roughly equivalent to types in roughly the sense of typed function parameters. (Handwave handwave handwave magic happens here...) > If you design for a problem space that > involves more specific hypermedia semantics you end up with a coupling that > is surprisingly similar to non uniform interfaces, because so many things > are being said about resources at design time. You end up asking yourself: > "Damn, what exactly is it that I actually *can* change about a service > implementation without messing up the clients?" Sure, I can add a new > extension here and add a new supported media type there - but significantly > changing the state machine, for example? Not sure. Can you make that all clear with media types and relation names?
On Dec 18, 2009, at 6:33 PM, Bob Haugen wrote:
> On Fri, Dec 18, 2009 at 10:03 AM, Jan Algermissen
> <algermissen1971@...> wrote:
>>
>> On Dec 18, 2009, at 4:10 PM, Bob Haugen wrote:
>>
>>> On Fri, Dec 18, 2009 at 6:58 AM, Jan Algermissen
>>> <algermissen1971@...> wrote:
>>>>
>>>> My point is that REST forbids that clients make any assumptions
>>>> based
>>>> on such 'classification'.
>>>
>>> Not trying to argue, just to understand:
>>>
>>> Where/how does REST forbid such assumptions? (I just re-scanned
>>> Roy's
>>> dissertation, and might have missed something, but did not see
>>> anything quite that hard and fast.) And what is the functional
>>> difference between prior assumptions and reacting to or requesting
>>> media type?
>>
>> I think the last post is also answering this. Yes?
>
> Not quite.
>
> From Roy's blog entry that you quoted:
> "The only types that are significant to a client are the current
> representation’s media type and standardized relation names."
>
> Standardized relation names (as I think you have argued in this
> discussion, and I agree) are roughly equivalent to types in roughly
> the sense of typed function parameters. (Handwave handwave handwave
> magic happens here...)
>
>> If you design for a problem space that
>> involves more specific hypermedia semantics you end up with a
>> coupling that
>> is surprisingly similar to non uniform interfaces, because so many
>> things
>> are being said about resources at design time. You end up asking
>> yourself:
>> "Damn, what exactly is it that I actually *can* change about a
>> service
>> implementation without messing up the clients?" Sure, I can add a new
>> extension here and add a new supported media type there - but
>> significantly
>> changing the state machine, for example? Not sure.
>
> Can you make that all clear with media types and relation names?
An ordering example:
Suppose you are to design an ordering service. You might do the
following:
(A rather silly approach, but suitable for this example)
Define a service document media type application/ordering-srv+xml that
includes an <order-processor href=""/> element to tell the client where
the resource is that accepts orders. Example:
<service>
<order-processor href="/service/1234"/>
</service>
Next, specify that clients place orders by POSTing to the
order-processor resource and that the response will be 201 with a
Location pointing to the new resource that represents the order. (This
specifies the client goal of place-order; AtomPub calls the client
goals 'protocol operations', BTW.)
Specify some application/order+xml for representing orders and include
an element <lineItems href=""> to hold the line items of the order.
An order would look like this:
<order>
<buyer>...</buyer>
<lineItems href="/orders/6/lineItems">
<item>Green Doll</item>
</lineItems>
</order>
Specify another hypermedia semantic: the list of lineItems of the
order is identified by the href of the lineItems element.
Specify another goal 'add-lineitem-to-order' as: POST to the lineItems
resource (s.a.) of the order to add a line item. (The service should
respond with a 303 See Other and the order URI to indicate successful
update of the order.)
(Gee - not brilliant but I hope you get the point :-)
Now, you can write a client that places an order and adds a line item
and consists of the following pseudo code:
- bootstrap with a GET to the published service URI, receiving the
service document.
- client now knows URI of order processor
- client POSTs order to order processor
- client keeps as orderUri the Location URI of the 201 response
- client does a GET on orderUri
- client uses response to find lineItems resource and POSTs new line
item.
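The pseudo code above might be sketched in Python like this, with a table of canned responses standing in for the hypothetical service (all URIs and media types are the ones from the example):

```python
import xml.etree.ElementTree as ET

# Canned responses standing in for the hypothetical ordering service:
# (method, uri) -> (status, content-type, body-or-Location).
RESPONSES = {
    ("GET", "/service"): (200, "application/ordering-srv+xml",
        '<service><order-processor href="/service/1234"/></service>'),
    ("POST", "/service/1234"): (201, None, "/orders/6"),
    ("GET", "/orders/6"): (200, "application/order+xml",
        '<order><buyer>...</buyer>'
        '<lineItems href="/orders/6/lineItems"/></order>'),
}

def request(method, uri):
    return RESPONSES[(method, uri)]

# Bootstrap with the one published URI; follow links from there.
status, ctype, body = request("GET", "/service")
processor = ET.fromstring(body).find("order-processor").get("href")

# Place the order; keep the Location of the 201 response.
status, _, order_uri = request("POST", processor)
assert status == 201

# The problematic last step: the client has to *assume* the order
# comes back as application/order+xml to know a lineItems link exists.
status, ctype, body = request("GET", order_uri)
if ctype == "application/order+xml":
    line_items_uri = ET.fromstring(body).find("lineItems").get("href")
    print(line_items_uri)  # /orders/6/lineItems
```

The if-branch at the end is exactly the design-time assumption at issue: the code for the 'add line item' goal only exists because the developer baked in the expectation of application/order+xml.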
The issue is this: How do you code the client's last line without
being sure about the media type being returned? There is nothing that
tells you that the order will be given to the client as
application/order+xml. A human-driven client would not care but just
process the response and show whatever state transitions are available
to the human user (one of them possibly being 'post here to add line
item'). If the response was not application/order+xml the 'add line
item' goal would not be shown.
The machine client OTOH has the hard-coded goal of really adding the
line item, and to code that you must know that there is a reason to
expect the GET on the order to return application/order+xml. The only
way to know that is by baking it into the service specification:
resources that are orders (oops, a 'type'!) are represented as
application/order+xml (maybe others too, but that one at least).
The effect of this is that the client developer knows at design time
that from the application state 'an order X' there will be a
transition 'add line item to X' available. This is contrary to the
idea of the client *discovering* the transition at run time.
Sure, one could code "if have add-lineItem transition then add line
item else do nothing", but this just turns a failing client assumption
into the execution of an else-branch. The issue does not go away: if
you want to code a client that orders and then adds the line item, you
rely on the assumption that after placing an order there will be the
transition to add the line item. This couples the server to the
client quite heavily.
Phew - sorry for the mess, I hope you get the point.
Jan
--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
See in line. On Thu, Dec 17, 2009 at 11:29 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 18, 2009, at 8:16 AM, Noah Campbell wrote: > > "And now - how do you code from here without relying on the fact that >> AtomPub tells you that collections come as Atom feeds?" >> >> When interrogating the service document, check out the accept element in >> the collection: >> http://bitworking.org/projects/atom/rfc5023.html#rfc.section.8.3.4. You >> mentioned categories which is another means, but it may be a red herring in >> the M2M example. >> >> Categories can be typed so you're coupling to the types. This can be pretty exact and you can black box the semantics in how you determine the content for elements in this category. > Also, when you say GET /orders, you're implying an Accept header of */*. >> You need the equivalent of a human brain to process whatever is returned. >> Instead, do the following: >> >> GET /orders >> Accept: application/order+xml; q=0.8, image/png >> > > Hmm - but how do I know that it makes sense to ask for > application/order+xml??? > AtomPub collections may be typed and categorized. The link I sent above discusses what goes into the "typing" after you do your initial GET. Without that typing, you're basically left to */*, which means you make an assumption and hope for the best. Or, you can put what you can accept and hope your service can handle it gracefully. > And likewise: how do I know that it makes sense to ask for > application/atom+xml? > > In this case you have a resource so you need to ask a question to see if you'll get the appropriate response. This is the content negotiation. GET /resource Accept: application/atomsvc+xml And hope you don't get an HTTP/1.1 406. -Noah
On Fri, Dec 18, 2009 at 10:10 AM, Jan Algermissen <algermissen1971@...> wrote: > <lineItems href="/orders/6/lineItems"> Why doesn't: <lineItems href="/orders/6/lineItems" type="application/order+xml"> fix this? Why can't this be specified and honored? Regards, Will Hartung (willh@...)
<snip> How do you code the client's last line without being sure about the media type being returned? </snip> This is not the question you should be asking. Instead, you should ask: 1 - what tells the client how to complete the XXX state transition 2 - what informs the client that the XXX state transition exists in any given response In both cases, the answer is in out-of-band documentation. It may be true that the out-of-band documentation, along with explaining the two items above _also_ defines a string to include in the accept header that we call a "custom media-type," but there is no requirement for that. I've built a few goal-seeking clients that look for rel attributes on document elements in order to complete their work. bots do this all the time w/o making any requirements on the media-type returned. mca http://amundsen.com/blog/ On Fri, Dec 18, 2009 at 13:10, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 18, 2009, at 6:33 PM, Bob Haugen wrote: > >> On Fri, Dec 18, 2009 at 10:03 AM, Jan Algermissen >> <algermissen1971@...> wrote: >>> >>> On Dec 18, 2009, at 4:10 PM, Bob Haugen wrote: >>> >>>> On Fri, Dec 18, 2009 at 6:58 AM, Jan Algermissen >>>> <algermissen1971@...> wrote: >>>>> >>>>> My point is that REST forbids that clients make any assumptions >>>>> based >>>>> on such 'classification'. >>>> >>>> Not trying to argue, just to understand: >>>> >>>> Where/how does REST forbid such assumptions? (I just re-scanned >>>> Roy's >>>> dissertation, and might have missed something, but did not see >>>> anything quite that hard and fast.) And what is the functional >>>> difference between prior assumptions and reacting to or requesting >>>> media type? >>> >>> I think the last post is also answering this. Yes? >> >> Not quite. >> >> From Roy's blog entry that you quoted: >> "The only types that are significant to a client are the current >> representation’s media type and standardized relation names." 
>> >> Standardized relation names (as I think you have argued in this >> discussion, and I agree) are roughly equivalent to types in roughly >> the sense of typed function parameters. (Handwave handwave handwave >> magic happens here...) >> >>> If you design for a problem space that >>> involves more specific hypermedia semantics you end up with a >>> coupling that >>> is surprisingly similar to non uniform interfaces, because so many >>> things >>> are being said about resources at design time. You end up asking >>> yourself: >>> "Damn, what exactly is it that I actually *can* change about a >>> service >>> implementation without messing up the clients?" Sure, I can add a new >>> extension here and add a new supported media type there - but >>> significantly >>> changing the state machine, for example? Not sure. >> >> Can you make that all clear with media types and relation names? > > An ordering example: > > Suppose you are to design an ordering service. You might do the > following: > (A rather silly approach, but suitable for this example) > > Define a service document media type application/ordering-srv+xml that > includes a <order-processor href=""/> element to tell the client where > the resource is that accepts orders. Example: > > <service> > <order-processor href="/service/1234"/> > </service> > > Next, specify that clients place orders by POSTing to the order- > processor resource and that the response will be 201 with location to > new resource that represents the order. (This specifies the client > goal of place-order (AtomPub calls the client goals 'protocol > operations', BTW). > > Specify some application/order+xml for representing orders and include > an element <lineItems href=""> to holde the line items of the order. 
> An order would look like this: > > <order> > <buyuer>...</buyer> > <lineItems href="/orders/6/lineItems"> > <items>Green Doll</item> > </lineItems> > </order> > > Specify another hypermedia semantic: The list of lineItems of the > order is identified by the href of the lineItem element. > > Specify another goal 'add-lineitem-to-order' as: POST to the lineItems > resource (s.a.) of the order to add a line item. (Service should > respond with a 303 See Other and the order URI to indicate successful > update of the order. > > (Gee - not brilliant but I hope you get the point :-) > > Now, you can write a client that places an order and adds a line item > and consists of the following pseudo code: > > - bootstrap with a GET to the published service URI, receiving the > service document. > - client now knows URI of order processor > - client POSTs order to order processor > - client keeps as orderUri the Location URI of the 201 response > - client does a GET on orderUri > - client uses response to find lineItems resource and POSTs new line > item. > > The issue is this: How do you code the client's last line without > being sure about the media type being returned? There is nothing that > tells you that the order will be given to the client as application/ > order+xml. A human driven client would not care but just process the > response and show whetever state transitions are available to the > human user (one of them possibly being 'post here to add line item'). > If the response was not application/order+xml the 'add line item' goal > would not be shown. > > The machine client OTH has the hard coded goal of really adding the > line item and to code that you must know that there is a reason to > expect the GET on the order to return application/order+xml. The only > way to know that is by baking it into the service specification: > Resources that are orders (oops, a 'type'!) 
are represented as > application/order+xml (maybe others too, but that one at least) > > The effect of this is that the client developer knows at design time > that from the application state 'an order X' there will be a > transition 'add line item to X' available. This is contrary to the > idea of the client *discovering* the transition at run time. > > Sure, one could code "if have add-lineItem transition then add line > item else do nothing", but this just turns a failing client assumption > into the execution of an else-branch. The issue does not go away: if > you want to code a client that orders and then adds the line item you > rely on the assumption that after placing an order there will be the > transition to add the line item. This couples the server to the > client quite heavily. > > Phew - sorry for the mess, I hope you get the point. > > Jan
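The goal-seeking, rel-scanning approach mca describes (look for rel attributes in whatever comes back, ignoring the media type) might look like this; the rel value 'add-lineitem' and the document shape are hypothetical:

```python
import xml.etree.ElementTree as ET

# A hypothetical response body; the client never inspects Content-Type.
DOC = """<order>
  <link rel="self" href="/orders/6"/>
  <link rel="add-lineitem" href="/orders/6/lineItems"/>
</order>"""

def find_transition(body, rel):
    """Scan any XML body for a link carrying the desired rel value;
    return its href, or None if the transition is not offered."""
    for link in ET.fromstring(body).iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

print(find_transition(DOC, "add-lineitem"))  # /orders/6/lineItems
```

If the server stops offering the transition, find_transition returns None and the client simply does not attempt it; the coupling moves from the media type to the (out-of-band documented) rel vocabulary.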
I don't think it can't. For example, Atompub uses the accept element. I
think the argument is that this is an advisory value, because the
resource may evolve independently of the client (well, this should be
expected) and a client should be able to try the link and see what
happens.

Perhaps they could honor the value (with a fallback):

GET /resource
Accept: application/order+xml; q=0.9, application/order-with-shipping+xml

or they could use it as a last resort, with a hail mary at the end:

GET /resource
Accept: application/order-with-shipping+xml; q=0.5, application/order+xml; q=0.3, */*

See http://www.w3.org/TR/html401/struct/links.html#adef-type-A for
discussion about using the type attribute in a link element.

-Noah

On Fri, Dec 18, 2009 at 10:18 AM, Will Hartung <willh@...> wrote:
> On Fri, Dec 18, 2009 at 10:10 AM, Jan Algermissen
> <algermissen1971@...> wrote:
> > <lineItems href="/orders/6/lineItems">
>
> Why doesn't:
>
> <lineItems href="/orders/6/lineItems" type="application/order+xml">
>
> fix this?
>
> Why can't this be specified and honored?
>
> Regards,
>
> Will Hartung
> (willh@...)
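The "advisory value with a fallback" idea turns on how the client assembles and orders its Accept header. Here is a small sketch of parsing such a header into (media type, q) pairs, assuming only the simple syntax shown in Noah's examples (no `level` or other parameters):

```python
def parse_accept(header):
    """Split an Accept header into (media_type, q) pairs; q defaults to 1.0."""
    prefs = []
    for part in header.split(","):
        fields = part.split(";")
        media_type = fields[0].strip()
        q = 1.0
        for field in fields[1:]:
            name, _, value = field.partition("=")
            if name.strip() == "q":
                q = float(value)
        prefs.append((media_type, q))
    # Highest q first. Note a bare */* defaults to q=1.0 and would outrank
    # everything, so a "hail mary" entry should carry an explicit low q.
    return sorted(prefs, key=lambda p: p[1], reverse=True)
```

The sort makes the fallback semantics explicit: the client states its real preference with high q-values and lets `*/*` at a low q catch whatever the server actually has.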
On Fri, Dec 18, 2009 at 1:10 PM, Jan Algermissen <algermissen1971@...> wrote: > The effect of this is that the client developer knows at design time > that from the application state 'an order X' there will be a > transition 'add line item to X' available. This is contrary to the > idea of the client *discovering* the transition at run time. It doesn't *know* this, it's just that if it isn't, then it can't add line items. There's nothing wrong with that. Mark.
On Dec 18, 2009, at 7:51 PM, Mark Baker wrote: > On Fri, Dec 18, 2009 at 1:10 PM, Jan Algermissen > <algermissen1971@...> wrote: >> The effect of this is that the client developer knows at design time >> that from the application state 'an order X' there will be a >> transition 'add line item to X' available. This is contrary to the >> idea of the client *discovering* the transition at run time. > > It doesn't *know* this, it's just that if it isn't, then it can't add > line items. There's nothing wrong with that. But if the developer does not know it - why does she code it in the first place? Or, IOW: how do I know which possibilities to code for? Jan > > Mark. > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>
> An ordering example:
>
> Suppose you are to design an ordering service. You might do the
> following:
> (A rather silly approach, but suitable for this example)
>
> Define a service document media type application/ordering-srv+xml that
> includes a <order-processor href=""/> element to tell the client where
> the resource is that accepts orders. Example:
>
> <service>
>   <order-processor href="/service/1234"/>
> </service>
>

You made your mistake in step 1. You don't define a media type for a
service, you define it for a type of client. If you define a media
type for a service, i.e. so that it can only describe the data and
control options offered by that one service, then a client that
consumes that media type is obviously coupled to the service.

Your client can only do what is allowed by the media type(s) it knows
how to process. If you want to build a system where the client is not
bound to a specific service then you must define a (set of) media
type(s) that is able to define a space of services. The space is
constrained on the fly by the hypermedia processed by the client to
the specific service being executed.

There's no magic here. A web browser can process any service that can
be defined by HTML. If you start returning another media type then the
browser is stuck.

And as I've said before, take the human out of the equation. The
browser is rendering the content, turning it into messages to the
windowing system, and responding to messages (e.g. clicks) from the
windowing system. When you are defining a media type for a _client_
(rather than a service) you have to think about the client-side system
and how to drive it.

So for your ordering client (that can interact with a wide range of
ordering services): what events cause something to be bought? What
information needs to be communicated to the underlying system to
facilitate an ordering decision?
What information accompanies the ordering event from the underlying
system? Answer these questions and use them to inform your media type
design.

The hypermedia document declaratively describes to the client how to
interact with the underlying system while in an application state. It
also tells the client how to translate client events into HTTP
requests for new resource representations and/or to modify resources.

Content negotiation tells the server what kind of client it is dealing
with, where "kind" is expressed as a (set of) media type(s). This lets
it represent the resource in a form that can drive the requesting
client.

The key place where HATEOAS and client-server decoupling fall apart in
practice is when media types are defined for a service rather than for
a type of client. When a media type is an expression of a specific
service then HATEOAS isn't possible, because the media type is not
designed to express the variability between services in a way that is
meaningful to the client.

Instead people express the options in a manner that is only meaningful
to the service and scratch their heads trying to figure out how a
client is supposed to make the decision without some kind of "human
intelligence" interpreting the choices. When you instead design the
media type around a type of client then you don't have these problems.

Regards,

Andrew
On Fri, Dec 18, 2009 at 2:15 PM, Jan Algermissen <algermissen1971@...>wrote: > > On Dec 18, 2009, at 7:51 PM, Mark Baker wrote: > > > On Fri, Dec 18, 2009 at 1:10 PM, Jan Algermissen > > <algermissen1971@...> wrote: > >> The effect of this is that the client developer knows at design time > >> that from the application state 'an order X' there will be a > >> transition 'add line item to X' available. This is contrary to the > >> idea of the client *discovering* the transition at run time. > > > > It doesn't *know* this, it's just that if it isn't, then it can't add > > line items. There's nothing wrong with that. > > But if the developer does not know it - why does she code it in the > first place? > > Or, IOW: how do I know which possibilities to code for? > The client developer codes for the behaviors he wants the client to be able to perform. This was the point Andrew tried to make earlier in the thread: On Thu, Dec 17, 2009 at 1:10 AM, wahbedahbe <andrew.wahbe@...> wrote: > You are making the mistake of starting with the service. You need to start > with the client... tell me more about this client. What event causes a > search for tickets to occur? Where does the data that goes into the search > parameters come from? Where does the new value for the status come from? > What happens after the status is updated? > > The hypermedia format drives the client. How can you define your hypermedia > format without first understanding and defining your client? > If the client's function is to manipulate line items in a purchase order then the developer needs to code it to do so. If a client's function is only to check the shipping address on a purchase order, then it doesn't need to know about "transitions" for manipulating line items and the developer doesn't need to code such knowledge into it. 
I get the feeling that you think REST was supposed to enable a
universal learning client: from one URL the client can learn all
possible behaviors from the representations it receives. No machine
can yet do that. Spiders don't even come close, even though they are
the most universal machine client agents. The only client that is such
a universal learner is a human being sitting in front of a browser.

I'm not sure where you got your impression of how loose REST was
supposed to be. Roy did say that clients can depend on (be coded for)
media types and standard relations. A client must know these in
advance to interact with a resource.

-- Nick
On Fri, Dec 18, 2009 at 3:55 PM, Nick Gall <nick.gall@...> wrote: > I'm not sure where you got your impression of how loose REST was supposed to be. Don't know who I am agreeing or disagreeing with, but I do expect predictable patterns for the whole M2M order-to-cash cycle to emerge. That was the idea behind old-school EDI, but EDI required some months of negotiation before any 2 newly-met agents could do business. REST should be able to do better, but the standards will need to be worked out. So are standardized media types and relations sufficient?
Bob: If anyone was ever crazy enough to task me w/ implementing an EDI-like, cross-vendor M2M solution, I'd focus exclusively on getting consensus on a new registered semantically-rich media-type. In the meantime, I'd work w/ groups to leverage their existing implementations to parse out the useful LINK rel values and data-element details and implement XHTML ad-hoc representations. IOW, if there's a good chance of a wide-adoption, go for registering a new media-type. Unless and until, leverage existing semantic types that contain added semantic value. My work life has kept me in the "unless and until" category so far<g>. mca http://amundsen.com/blog/ On Fri, Dec 18, 2009 at 17:18, Bob Haugen <bob.haugen@...> wrote: > On Fri, Dec 18, 2009 at 3:55 PM, Nick Gall <nick.gall@...> wrote: >> I'm not sure where you got your impression of how loose REST was supposed to be. > > Don't know who I am agreeing or disagreeing with, but I do expect > predictable patterns for the whole M2M order-to-cash cycle to emerge. > That was the idea behind old-school EDI, but EDI required some months > of negotiation before any 2 newly-met agents could do business. REST > should be able to do better, but the standards will need to be worked > out. > > So are standardized media types and relations sufficient? > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Fri, Dec 18, 2009 at 5:18 PM, mike amundsen <mamund@...> wrote: > If anyone was ever crazy enough to task me w/ implementing an > EDI-like, cross-vendor M2M solution, I'd focus exclusively on getting > consensus on a new registered semantically-rich media-type. http://tools.ietf.org/html/rfc1767 from: http://en.wikipedia.org/wiki/Internet_media_type I thought UBL would have proposed a media type too, but couldn't find any references.
<snip> http://tools.ietf.org/html/rfc1767 </snip> yeah, i've seen that. so far, no one has been crazy enough.... mca http://amundsen.com/blog/ On Fri, Dec 18, 2009 at 21:06, Bob Haugen <bob.haugen@...> wrote: > On Fri, Dec 18, 2009 at 5:18 PM, mike amundsen <mamund@...> wrote: >> If anyone was ever crazy enough to task me w/ implementing an >> EDI-like, cross-vendor M2M solution, I'd focus exclusively on getting >> consensus on a new registered semantically-rich media-type. > > > from: > http://en.wikipedia.org/wiki/Internet_media_type > > I thought UBL would have proposed a media type too, but couldn't find > any references. > > > ------------------------------------ > > Yahoo! Groups Links > > > >
> You made your mistake in step 1. > You don't define a media type for a service, you define it for a >type of client. If you define a media type for a service, i.e. so >that it can only describe the data and control options offered by >that one service, then a client that consumes that media type is >obviously coupled to the service. Nick - What does "type of client" really mean? How are you/we distinguishing between "clients"? I'd like to know. Thanks. Eb
Guilherme Silveira wrote:
>
> Hello guys,
>
> Safari sends the following accept header, due to webkit's code [1]:
> application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
>
> I could not find in section 12 or the Accept header definition in http
> 1.1 what to do if there is more than one content-type with the same
> q-value. In the above example, it seems like the server is free to
> decide whether to send application/xml, application/xhtml+xml or
> text/html.
>
> Any opinions on that? Should it be followed left to right
> (application/xml first)? Should the server decide?
>
> Regards
>
> [1]
> http://www.newmediacampaigns.com/page/webkit-team-admits-accept-header-error
>

The following passage from that link is in error:

"The latest versions of WebKit, and thus Safari and Chrome, prefer XML
over HTML in the Accept header. If a server is following the HTTP spec
and serving a resource that can be represented as XML or HTML, *it
will respond with HTML to Firefox and XML to Safari*."

Actually, it *might* respond that way, but it's up to the server. The
HTTP/1.1 RFC describes q-values quite nicely, and explains the formula
for calculating the client's q-value against that of the server. What
most people seem to forget in all this brouhaha over Accept (request)
headers is that representations on the server may be assigned q-values
as well.

On my server, application/xhtml+xml is assigned a sufficiently higher
q-value than text/html, such that any browser accepting
application/xhtml+xml, even if it prefers text/html, gets
application/xhtml+xml. The same goes for systems with more than one
feed -- the server q-value for Atom just needs to be .9 while all
others are .1 and the calculations will work out in favor of Atom even
on feed readers that prefer RSS, while still supporting RSS-only
clients.
It's only if your server-side q-value calculation results in a tie
that order of appearance in the client's Accept header becomes a
factor. The goal of the server is to return the highest-quality
representation it has to offer that's compatible with the client. The
server is not subservient to the client's Accept header.

I don't really care that WebKit prefers HTML over XHTML; the fact that
it supports XHTML and works with my client-side XSLT methods (which
reduce bandwidth by caching XHTML template-generating code on the
client, to transform raw Atom from the server) means that WebKit is
compatible with my highest-quality representation (lower bandwidth =
cheaper to host is the actual thinking, thanks to REST's scaling), so
that's what it gets.

HTH,
Eric
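Eric's calculation (client q-value multiplied by server q-value, highest product wins) can be sketched like so. The function and the q maps are illustrative, not any particular server's actual algorithm, and wildcard matching is deliberately simplified:

```python
def negotiate(client_prefs, server_q):
    """Pick the available representation maximizing client_q * server_q.

    client_prefs: list of (media_type, q) pairs from the Accept header.
    server_q: mapping of each available representation to the server's
    own quality value for it.
    """
    def client_q(available):
        major = available.split("/")[0]
        matches = [q for mtype, q in client_prefs
                   if mtype == available
                   or mtype == "*/*"
                   or mtype == major + "/*"]
        return max(matches, default=0.0)

    scores = {rep: client_q(rep) * sq for rep, sq in server_q.items()}
    best = max(scores, key=scores.get, default=None)
    return best if best is not None and scores[best] > 0 else None
```

With WebKit's header and Eric's server-side weights (.9 for XHTML, .1 for HTML), the products come out 0.9 vs 0.09, so application/xhtml+xml wins even though the browser nominally "prefers" something else; a client that only accepts text/html still gets text/html.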
> [1] > http://www.newmediacampaigns.com/page/webkit-team-admits-accept-header-error > I forgot my other gripe, which is with the Safari team's response: "On the other hand, this isn't a hugely important bug, [...] since content negotiation is not really used much in the wild." Actually, it is. The portion of websites which compress HTML, JS and CSS documents is sufficiently large that this technology is supported in most browsers (more than other HTTP clients) and most webservers. Of course, handling HTTP compression requires by-the-book Content Negotiation, to such an extent that intermediaries can store compressed HTTP responses, yet also unzip them on-the-fly to serve clients that don't support gzip -- without having to store two versions or pass the request on to an origin server. So it's always a bit shocking to read a browser developer claim that conneg just doesn't get used much, when it's actually widely and successfully deployed to great effect. Even if most webmasters whose sites employ compression, have absolutely no clue about it. Without being able to send compressed text files to clients effectively, wouldn't the Web require at least 50% more bandwidth than it does now? -Eric
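The compression case Eric points to is itself content negotiation, just over Accept-Encoding rather than Accept. A minimal sketch of the server-side decision (the helper name and header handling are illustrative, not any real server's API):

```python
import gzip

def encode_response(body, accept_encoding):
    """Gzip the body only if the client's Accept-Encoding admits it."""
    codings = [c.split(";")[0].strip() for c in accept_encoding.split(",")]
    headers = {"Vary": "Accept-Encoding"}  # lets caches store both variants
    if "gzip" in codings or "*" in codings:
        headers["Content-Encoding"] = "gzip"
        return headers, gzip.compress(body)
    return headers, body
```

The Vary header is what lets the intermediaries Eric mentions keep one compressed copy and still serve clients that can't decode gzip.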
Guilherme Silveira wrote:
>
> Hello guys,
>
> The question is somehow related so I added in the same thread:
>
> If I have a resource (all clients) which is a set of my clients, its
> hypermedia representation either contains only a set of links, a set
> of links with extra client information, or all information with no
> hypermedia.
>
> The third option does not make sense, I can get it. What about the
> other two? The first one is a huge set of metadata and the second one
> is a huge set of metadata with extra client information: updates to
> each client resource will affect this resource - loss of visibility?
>

No, what the server does is supposed to be opaque behind a uniform
interface. Don't worry about visibility beyond the uniform interface
-- the server can do whatever it wants. Besides, if you're defining a
resource as a list of links to client-specific resources, plus client
information derived from those client-specific resources, then the
fact that changing a client-specific resource may change the resource
one level up in a hierarchy should be readily apparent. But that's
usability, not invisibility.

> If, in the second option, every client has its own rel="self" link,
> then there is no such loss?
>

My rule of thumb is not to use rel='self' unless you need it
programmatically, for some reason. In case the document is saved
without its original request URI, it should still work, which is why I
like to use a <base> or xml:base href, plus relative URLs within the
document. If the document is loaded from disk using the 'file://' URI
scheme, it leads the user right back into the flow of the application.
So I don't see much purpose for rel='self'.

-Eric
I am obviously totally unable to get my point across... sorry. Thanks
for following this extended thread; I'll give it one more try:

On Dec 18, 2009, at 7:51 PM, Mark Baker wrote:

> On Fri, Dec 18, 2009 at 1:10 PM, Jan Algermissen
> <algermissen1971@...> wrote:
>> The effect of this is that the client developer knows at design time
>> that from the application state 'an order X' there will be a
>> transition 'add line item to X' available. This is contrary to the
>> idea of the client *discovering* the transition at run time.
>
> It doesn't *know* this, it's just that if it isn't, then it can't add
> line items. There's nothing wrong with that.
>

So, it would absolutely make sense to do the following then? Suppose I
know from a service doc that /foo/entries is an AtomPub collection.
Then suppose I am implementing a client that would like to get an
image of the collection. Given that this is what I want, I should then
do this?

GET /foo/entries
Accept: image/*

Now, if you think that is an insane thing to do because it is insane
to expect image/* to be available from an AtomPub collection: please
tell me why this is insane.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Dec 18, 2009, at 9:27 PM, wahbedahbe wrote: > --- In rest-discuss@yahoogroups.com, Jan Algermissen > <algermissen1971@...> wrote: >> >> An ordering example: >> >> Suppose you are to design an ordering service. You might do the >> following: >> (A rather silly approach, but suitable for this example) >> >> Define a service document media type application/ordering-srv+xml >> that >> includes a <order-processor href=""/> element to tell the client >> where >> the resource is that accepts orders. Example: >> >> <service> >> <order-processor href="/service/1234"/> >> </service> >> > > You made your mistake in step 1. > You don't define a media type for a service, you define it for a > type of client. If you define a media type for a service, i.e. so > that it can only describe the data and control options offered by > that one service, then a client that consumes that media type is > obviously coupled to the service. Well, I did not bother to say "define a media type for a *kind of* service" because it wasn't important for the point I am trying to make. I meant a media type along the lines of application/atomsrv+xml. You need to tie service types to media types, otherwise you do not have a basis for defining what set of other media types are used by the service nor for service discovery by service type. But that is another story. Jan > > Your client can only do what is allowed by the media type(s) it > knows how to process. If you want to build a system where the client > is not bound to a specific service then you must define a (set of) > media type(s) that is able to define a space of services. The space > is constrained on the fly by the hypermedia processed by the client > to the specific service being executed. > > There's no magic here. A web browser can process any service that > can defined by HTML. If you start returning another media type then > the browser is stuck. > > And as I've said before, take the human out of the equation. 
The > browser is rendering the content, turning it into messages to the > windowing system, and responding to messages (e.g. clicks) from the > windowing system. When you are defining a media type for a _client_ > (rather than a service) you have to think about the client-side > system and how to drive it. > > So for your ordering client (that can interact with a wide range of > ordering services): what events cause something to be bought? What > information needs to be communicated to the underlying system to > facilitate an ordering decision? What information accompanies the > ordering event from the underlying system? Answer these questions > and use them to inform your media type design. > > The hypermedia document declaratively describes to the client how to > interact with the underlying system while in an application state. > It also tells the client how to translate client events into HTTP > requests for new resource representations and/or to modify resources. > > Content negotiation tells the server what kind of client it is > dealing with where "kind" is expressed as a (set of) media type(s). > This lets is represent the resource in a form that can drive the > requesting client. > > The key place where HATEOAS and client-server decoupling fall apart > in practice is when media types are defined for a service rather > than for a type of client. When a media type is an expression of a > specific service then HATEOAS isn't possible because the media type > not designed to express the variability between services that a > client can interact with in a way that is meaningful to the client. > > Instead people express the options in a manner that is only > meaningful to the service and scratch their heads trying to figure > out how a client is supposed to make the decision without some kind > of "human intelligence" interpreting the choices. When you instead > design the media type around a type of client then you don't have > these problems. 
> > Regards, > > Andrew > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 18, 2009, at 10:55 PM, Nick Gall wrote:

> The client developer codes for the behaviors he wants the client to
> be able to perform.

Sure. But the client developer is not just doing anything that comes
to his mind. Instead, he uses descriptions of services (such as
AtomPub or OpenSearch) to get a general idea of what would make sense
to do! He is not just poking around in the Web trying to get Google to
respond with an audio file to a search.

Note that for human clients this is entirely different, because these
clients in fact *can* simply react to whatever next state they are put
in. They do not have their own state machine. They do not follow an
overall goal. The human user does, and eventually is driven by the
hypermedia received by the user agent. Machine clients can never be
made that passive with regard to their program flow.

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Sorry to disappoint, but recently I've been approaching the problem by
attempting to use the strengths of both schools of thought. BUCKLE UP,
IT'S EXAMPLE TIME!

> We have the resource wooland.com/critters/weasel
>
> This resource is available in the following media types:
>
> application/vnd.php.serialized
> application/json
> application/vnd.php.serialized+xml
> application/vnd.critter.composite+xml
> application/xhtml+xml
>
> The order of these media types determines the amount of work Apache
> must do to produce the media type (the default output of the PHP
> script for the resource is application/vnd.php.serialized; the media
> type application/xhtml+xml is arrived at after processing via output
> filters: application/vnd.php.serialized ->
> application/vnd.php.serialized+xml ->
> application/vnd.critter.composite+xml -> application/xhtml+xml)
>
> A quick explanation of each media type:
>
> application/vnd.php.serialized+xml is simply an XML format for
> application/vnd.php.serialized. Pretty much a 1-to-1 mapping.
>
> application/vnd.critter.composite+xml is a service-specific media
> type that pulls in other related XML fragments from other resources
> to provide a fuller, richer media type than the stock resource. This
> is a composite document, so it contains things like the Weasel's top
> 5 favourite condiments, his top 5 favourite biscuits etc; the
> complete list of Mr Weasel's favourite biscuits can be found at
> wooland.com/critters/weasel/biscuits.
>
> The application/xhtml+xml representation consists of an XHTML
> marked-up page of the composite media type. This has lovely
> hyperlinks and css and animations and AJAX and stuff. It also uses a
> nice simple micro-format to semantically mark up Mr Weasel's details,
> and includes a reference to an HTML profile, allowing user-agents to
> use GRDDL to determine the relationships and retrieve a nice RDF doc
> of it all.
> So hopefully you can see that the introduction of a composite media
> type helps to provide the user-agent access to the core resource
> representations (application/vnd.php.serialized, application/json,
> application/vnd.php.serialized+xml) and richer, more detailed, GUI
> interface-oriented types as well.
>
> This is how I've been attempting to broach the problem. I'd be
> interested to hear anyone's opinions or alternatives (or glaring
> errors or misunderstandings).
>
> Cheers!
>
> Ben
>
> --- In rest-discuss@yahoogroups.com, "amaeze77" <amaeze@...> wrote:
> >
> > Folks -
> >
> > There seem to be two schools of thought (I think) emerging with
> > regard to the use of media types. One school of thought suggests
> > coarse-grained media types, meaning using application/xml,
> > text/html etc., while the other promotes and shows examples of
> > refined media types that reflect the domain in which they are being
> > used, e.g. application/vnd.order+xml.
> >
> > I see possible pros/cons to both approaches but I'm not sure I've
> > seen an actual discussion targeted at this matter (other than
> > certain threads on other topics delving into the issue).
> >
> > Anyone want to take the first shot at backing a certain approach
> > publicly?
> >
> > Thanks.
> >
> > Eb
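Ben's Apache output-filter pipeline is, in effect, function composition over media types: each filter maps one representation to the next, so the server does only as much work as the requested type demands. A toy sketch of that idea (the chain and the transform registry are placeholders, not Apache's actual filter API):

```python
from functools import reduce

# The media-type chain from the example, cheapest representation first.
CHAIN = [
    "application/vnd.php.serialized",
    "application/vnd.php.serialized+xml",
    "application/vnd.critter.composite+xml",
    "application/xhtml+xml",
]

def render(raw, target, transforms):
    """Apply just enough transforms, in chain order, to reach `target`."""
    steps = CHAIN[1:CHAIN.index(target) + 1]
    return reduce(lambda doc, step: transforms[step](doc), steps, raw)
```

Requesting application/vnd.php.serialized runs zero transforms, while application/xhtml+xml runs all three, which is exactly the cost ordering Ben describes.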
Subbu Allamaraju wrote:
>
> IMHO, Roy's post below must be taken with a bit of reality mixed.
> Like most things in software, it is not an absolute standard to
> measure "goodness" of RESTful web services.
>
I think, as the hypertext constraint is a critical part of the REST
architectural style, any system that falls short in this area can't be
called REST at all. The result is a different architectural style
altogether. Someone needs to name this style (preferably a name that's
buzzword-worthy) and analyze it in terms of networked software
architecture, just like REST is. The result would enable a comparison
chart to be made which lays out exactly what the differences are, in
terms of constraints and desirable properties.
If a project can live without the desirable properties induced by REST,
then perhaps this other architectural style is appropriate for it. But,
such a project should not call itself REST, because that would imply
desirable properties exist on that system, when this can't be proven
due to the obvious lack of some constraint or other. I really would
like to see this hypertext-constraint-less, but otherwise RESTful,
architectural style formalized for the plethora of REST claimants that
are actually using this other style out in the wild.
The most important takeaways from Roy's post I linked to:
"
A REST API should spend almost all of its descriptive effort in
defining the media type(s) used for representing resources and driving
application state, or in defining extended relation names and/or
hypertext-enabled mark-up for existing standard media types. Any effort
spent describing what methods to use on what URIs of interest should be
entirely defined within the scope of the processing rules for a media
type... A REST API must not define fixed resource names or hierarchies
(an obvious coupling of client and server).
"
I'll describe this further, down below in my example.
>
> Most publicly visible web services are meant for mashing up data.
> Communicating URIs in representations is one thing, but using them to
> drive application flow is an entirely different beast. Most mashup
> scenarios require fair bit of control on the flow. Take Flickr for
> example. Even if it is fixed to use HTTP correctly, making it
> hypermedia driven for application flow does not get Flickr very far.
>
Agreed. REST is not the solution to all problems. Neither is HTTP.
The Web, and the Internet itself, are constantly evolving new
architectural styles. Take RFC 5694, for instance. P2P is now gaining
some formalization as a networked-software architectural style. This
sort of formalization should also be applied to the plethora of
non-REST APIs out there that wouldn't be better off as REST, so there's
some common architectural ground rather than everyone just winging it.
>
> Of course, using hypermedia to drive application flow makes sense
> when the server can control the flow.
>
Exactly. Let's take a look at part of Talis' API:
http://n2.talis.com/wiki/Contentbox
http://n2.talis.com/wiki/Text_Search_Syntax
They've spent all their effort in describing what methods to use on
what URIs of interest, none of which are in-scope for RSS, while
defining fixed resource names and hierarchies. There's no sense of
what the Contentbox resource is... The documentation defines
the /items resource as containing a list of content items. But the
protocol returns a search form if /items is dereferenced, not a list of
contents. The overall result is an ad-hoc XML-RPC interface to a media
type obsoleted by Atom (which has a corresponding protocol so you don't
have to make up one of your own, as this API does).
I'm being harsh, though. Throw out all the query stuff and the
protocol stuff, and focus on the "Platonic Ideal" of the resource type
identified as "Contentbox" and there's plenty to work with in terms of
making it into a REST resource. But first, a note on resource types,
another valuable takeaway from Roy's post, in light of ongoing debate
on this list around the issue:
"
A REST API should never have "typed" resources that are significant to
the client. Specification authors may use resource types for describing
server implementation behind the interface, but those types must be
irrelevant and invisible to the client. The only types that are
significant to a client are the current representation’s media type and
standardized relation names.
"
It's perfectly legit to refer to different resource types in this
application with different names. For example, Contentbox and Metabox
make fine resource type names, but will ultimately be defined as
different media types, which is the only concern the client has. (Don't
try to somehow couple client behavior to the abstract notion of
"Contentbox" or "Metabox", just use them to describe your API for human
consumption.) If, instead of beginning by defining a URI allocation
scheme, this REST API's designers had followed a disciplined REST
approach, the first thing they would have done with Contentbox would've
been to define how it conceptually fits within their system.
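To make that concrete -- a client caring only about media types and link
relations, never about resource types -- here's a rough sketch. The feed
document and URIs below are my own illustration, not anything from the n2
docs; the point is that the client discovers the XHTML variant from a
link, rather than knowing anything about "Contentbox":

```python
# Sketch: a client that understands rel="alternate" links in an Atom
# feed, with no knowledge of server-side resource types or URI layout.
# The FEED document and its URIs are invented for illustration.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="self" type="application/atom+xml" href="/store1/items/index.atom"/>
  <link rel="alternate" type="application/xhtml+xml" href="/store1/items/index.html"/>
</feed>"""

def alternates(document):
    """Map media type -> href for every rel="alternate" link in the feed."""
    root = ET.fromstring(document)
    return {link.get("type"): link.get("href")
            for link in root.findall(f"{ATOM}link")
            if link.get("rel") == "alternate"}

print(alternates(FEED))
# {'application/xhtml+xml': '/store1/items/index.html'}
```

Nothing here depends on the string "Contentbox", or on /store1/items/
being where the feed lives -- rename the resource and the client still
works, which is the decoupling Roy's quote is after.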
There's nothing special about it, it's easy to grasp, it's an index for
a flat-file media collection, plus a search interface. Whatever its
directory name, producing a list of its contents is as easy as using
"Options: Directories" in Apache. That isn't quite what we want,
though. There is a duality to the Contentbox resource: first, we want
it to list all contents as per the underlying filesystem; second, we
want it to provide a metadata-driven search interface to the media
files. So I see two Contentbox resource subtypes: Contentbox-search and
Contentbox-index. I could combine the two into a single index-and-search
resource, but I'm not making that design choice, for reasons that have
nothing to do with REST.
The next step is to figure out what media types we want to apply to
Contentbox. So we dip into our handy REST toolbox and find that most
features we want, like pagination and editing, already exist as defined
standards -- Atom and Atom Protocol, plus standard extensions like
OpenSearch. Pagination and search-syntax media-type extensions
definitely have a say in the URI allocation scheme, which is why we
don't start there, we start by defining which media types to use, plus
identifying what our resources are.
Let's start with Contentbox-index. This is a paginated list of files,
which can be presented in either XHTML or as an Atom Feed of Atom media
entries. The Atom Protocol service document identifies Contentbox-
index as an Atom Protocol collection. Here's where we throw out this
portion of the n2 API:
http://n2.talis.com/wiki/Contentbox#Request_Parameters
The "query" parameter applies to Contentbox-search, more about that
later. The "max" and "offset" parameters, which create a cache-defeating
"sliding door" are tossed in favor of individually-numbered (in the
URI) pages of predetermined length, which implement pagination as per
RFC 5005. The "sort" parameter is tossed, that's a client-side behavior
that doesn't belong in the URI (except perhaps in Contentbox-search).
The "xsl" parameter implements transformation in a bass-ackwards
fashion; the desired output Content-Type doesn't belong in the URL.
Putting it there, instead of honoring the Accept header, is a violation
of:
http://www.w3.org/2001/tag/doc/mime-respect.html
The output Content-Type is what the author/server intends, as coded in
the XSLT document. What's needed here is some Content Negotiation.
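To pin down the RFC 5005 paging I'm proposing in place of "max" and
"offset", here's a rough sketch of the navigation links for one numbered
feed page. The /{storename}/items/;page=N URIs are my own illustration:

```python
# Sketch: RFC 5005 (Feed Paging and Archiving) "paged feed" links for
# individually-numbered Contentbox-index pages. The ;page=N URI scheme
# is my own illustration, not part of the n2 API.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def paging_links(base, page, last_page):
    """Return the RFC 5005 link relations for one numbered feed page."""
    uri = lambda n: base if n == 1 else f"{base};page={n}"
    links = [("self", uri(page)), ("first", uri(1)), ("last", uri(last_page))]
    if page > 1:
        links.append(("previous", uri(page - 1)))
    if page < last_page:
        links.append(("next", uri(page + 1)))
    return links

def feed_for_page(base, page, last_page):
    """Serialize a bare Atom feed carrying only the paging links."""
    ET.register_namespace("", ATOM)
    feed = ET.Element(f"{{{ATOM}}}feed")
    for rel, href in paging_links(base, page, last_page):
        ET.SubElement(feed, f"{{{ATOM}}}link", rel=rel, href=href)
    return ET.tostring(feed, encoding="unicode")

print(feed_for_page("/store1/items/", 2, 3))
```

Every page is a stable, cacheable resource in its own right; a client
pages through by following first/previous/next/last, never by computing
offsets.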
At this point, it should be obvious that the URL of a Contentbox is
irrelevant in a REST API, so instead of coupling it to /{storename}/
items and limiting a store to one Contentbox, it should be up to the
user to name it /{storename}/foo/ or /{storename}/bar/ or even
/{storename}/foo/bar/, or all three. I'll use /{storename}/items/ with
the trailing-slash for this example.
For the Contentbox-index resource, we want paginated output in either
XHTML or Atom depending on client preference. With conneg, this gives
us a URI allocation scheme like so:
/{storename}/items/
/{storename}/items/;page=2
/{storename}/items/;page=3
Etc. The server responds with a Vary: Accept header, with Content-
Location headers like so:
/{storename}/items/index.html
/{storename}/items/index.atom
/{storename}/items/index.html;page=2
/{storename}/items/index.atom;page=2
/{storename}/items/index.html;page=3
/{storename}/items/index.atom;page=3
This makes every representation a resource in its own right. The
Content-Location, Alternates and Vary headers make conneg visible, so
even without documentation, the self-descriptive messaging reveals that
it's possible to bypass content negotiation and directly request Atom or
XHTML based on filename extension. This is a logical approach; putting
the MIME type (or a token like 'atom' or 'html') in a query string is
not.
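A rough sketch of that server-driven conneg, to show the headers doing
the work. The variant map and the deliberately simplistic Accept parsing
(no q-values) are my own illustration:

```python
# Sketch: server-driven content negotiation for GET /store1/items/,
# making the chosen variant visible via Content-Location and Vary.
# The variant map and naive Accept parsing (no q-values) are invented
# for illustration.
VARIANTS = {  # media type -> distinct URI for that representation
    "application/atom+xml": "/store1/items/index.atom",
    "application/xhtml+xml": "/store1/items/index.html",
}

def negotiate(accept_header):
    """Return (status, headers) for the generic resource URI."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for media_type in accepted:  # first acceptable type wins
        if media_type in VARIANTS:
            return 200, {
                "Content-Type": media_type,
                "Content-Location": VARIANTS[media_type],  # variant's own URI
                "Vary": "Accept",  # caches: response depends on Accept
            }
    return 406, {"Vary": "Accept"}  # no acceptable variant

status, headers = negotiate("application/atom+xml, text/html;q=0.5")
print(status, headers["Content-Location"])
```

The Content-Location value is itself dereferenceable, which is exactly
what lets a client bypass conneg once it has seen one negotiated
response.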
The XHTML output is simply transformed from the Atom using XSLT. If
someone wants their own XSLT output, there are Web services out there
which allow the input of a source URL (the .atom resource directly) and
a stylesheet URL, this doesn't belong in the Contentbox request URIs.
The XHTML variant can include a form which links to such a service and
runs its Atom alternate through a user-specified stylesheet, i.e. we
can apply the hypertext constraint to this feature.
Instead of starting with resource identification, media type and link
relation selection before getting to URIs, the designers of the n2 API
picked a bookmark URI (/items) and added a bunch of features to it
through the query string. Had they followed a disciplined REST
approach, I doubt that they would have wound up identifying Contentbox
as a plethora of resources, i.e. each sort key creates two new resource
subtypes, one for ascending, the other for descending, etc.
At some point, seeing the number of semantically-identical first-class
resources, and attempting to define a URI allocation scheme for them,
would have revealed itself as a problem. This problem does not reveal
itself when a URI is defined, and a sort feature is added in the query
string, because there is no sense of what the Contentbox resource *is*
using that approach. A disciplined approach here leads to an order of
magnitude fewer URIs for the server to manage, while vastly increasing
cache efficiency (once again exploding the myth that REST creates "too
many URLs").
On to Contentbox-search. Again, /{storename}/items should be able to
use any name not reserved by the system, instead of just /items. But
the idea is the same -- we want to search the -index of the same
Contentbox's metadata, and return a representation in either XHTML or
Atom depending on client preference, listing the links to the contained
resources. The query syntax used is standardized by using OpenSearch.
/{storename}/items
/{storename}/items.html
/{storename}/items.atom
/{storename}/items?q={searchTerms}
/{storename}/items.html?q={searchTerms}
/{storename}/items.atom?q={searchTerms}
/{storename}/items?q={searchTerms}&p={startPage?}
/{storename}/items.html?q={searchTerms}&p={startPage?}
/{storename}/items.atom?q={searchTerms}&p={startPage?}
I'm aware that OpenSearch allows output format to be specified as part
of the query string, but I still believe supported output formats
should be differentiated in the Path to support content negotiation,
while also enabling conneg to be cleanly bypassed. Once again, the
XHTML output is transformed from the Atom, using XSLT on the server.
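For anyone unfamiliar with how an OpenSearch client consumes the
templates above, here's a rough sketch of template expansion. The
expansion rules are the minimal ones from the OpenSearch 1.1 URL
template syntax as I understand it ({name} required, {name?} optional
and replaceable with the empty string):

```python
# Sketch: expanding an OpenSearch URL template like the ones above.
# Minimal rules: {name} is required, {name?} is optional and expands
# to the empty string when no value is supplied.
import re

TEMPLATE = "/{storename}/items.atom?q={searchTerms}&p={startPage?}"

def expand(template, **params):
    """Substitute {name} / {name?} parameters into an OpenSearch template."""
    def sub(match):
        name, optional = match.group(1), match.group(2) == "?"
        if name in params:
            return str(params[name])
        if optional:
            return ""  # optional parameter omitted by the client
        raise ValueError(f"missing required parameter: {name}")
    return re.sub(r"\{(\w+)(\??)\}", sub, template)

print(expand(TEMPLATE, storename="store1", searchTerms="cats", startPage=2))
# /store1/items.atom?q=cats&p=2
```

The client never constructs the query URI from scratch; it fills in a
template the server advertised, which keeps the URI scheme under server
control.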
If sort order is enabled in the URIs, it should only be on Contentbox-
search, and should be worked out as a proposed extension to OpenSearch
(if there isn't already such a beast). For Contentbox-index, page-by-
page client-based sorting should do. If that's too fine-grained, the
user has the option of using Contentbox-search instead. Two interfaces
to one filesystem; I like it.
So there's a RESTful re-work of n2's Contentbox API, no hard feelings I
hope. In a nutshell, my advice here is "Just Use Atom (tm)." By
starting with the conceptual visualization of the Contentbox resource
type and discovering its subtypes, followed by the selection of standard
methods, media types / extensions, and link relations (first, last,
prev, next, edit, alternate, etc.), before even _thinking_ about URI
allocation schemes, a disciplined approach to REST is being followed.
What I've described here can be fleshed out into an API which doesn't
define fixed resource names or hierarchies. Resource interfaces are
generic, not object-centric. Interaction is driven by hypertext, not
out-of-band information. Interaction is cleanly separated from
identification. The API may be cleanly documented by describing the
Contentbox resource in terms of media types and link relations --
methods don't bear mentioning as their use is entirely defined within
the processing rules of the media types. Any server written to it would
have the freedom to manage its own namespace. The whole thing is based
on selecting the standard media types best suited to the task at hand.
"Contentbox" as a resource-type is irrelevant and invisible to the
client, as are its -search and -index sub-resources.
I may be long-winded, but I hope I've come around full-circle back to
my original point of, "Go read Roy's post on 'REST APIs must be
hypertext-driven' as it explains exactly where this API goes wrong."
The result of following his advice in my example here is a state-of-the-
art, pragmatic solution to the problem at hand, which may be widely
understood by implementers right out of the starting gate -- without any
murmurs from this list about not being "Roy Fielding's REST"...
-Eric
Of course, on the server, it's called a 'qs' value, not a 'q' value. My bad! -Eric
"Eric J. Bowman" wrote: > > Subbu Allamaraju wrote: > > > > IMHO, Roy's post below must be taken with a bit of reality mixed. > > Like most things in software, it is not an absolute standard to > > measure "goodness" of RESTful web services. > > > There is another serious benefit of the n2 developers using Roy's post as a guideline to bring their Contentbox API up to REST spec. Once that is done, implementation consists of gluing together some standard code libraries, which makes it easy to maintain. By not rolling their own API which re-invents a rather pedestrian wheel, Talis does not have to support and evolve their own custom Contentbox API -- freeing their developers to focus on their core competency, the exciting part of their project with real potential, which is the RDF stuff. There is no other API I know of which does what the Metabox API sets out to do. So it isn't a simple matter of using off-the-shelf components, like Contentbox. Best to be able to focus on the things nobody has done before, rather than to waste time bucking trends elsewhere in the overall n2 API. One thing that should be spun off as a standardization proposal is the Changeset resource. What they've done is created a delta format for RDF that is neither application- nor vendor-specific. Of course, they've implemented this delta format using PUT, which is flat-out wrong. It is not within the scope of media types to change standard method semantics on a per-resource basis, this breaks the uniform interface entirely, as method semantics must remain the same across all resources in a system in order to be uniform. However, this proposed media type would work very nicely with the only media type defined with delta-processing semantics -- PATCH. While the Changeset Protocol is completely off-base in terms of REST, there's a nugget of a good idea for a media type there, as well as the potential for a RESTful API for Metabox. 
Overall, n2 fits with my REST shortcut of being a distributed
hypermedia application. Therefore, there is no reason that REST's
constraints can't or shouldn't be applied (including, especially, the
hypertext constraint). The only pragmatic approach for a distributed
hypermedia API is REST. No excuses here for coming up short.
-Eric
"wahbedahbe" wrote: > > You made your mistake in step 1. > You don't define a media type for a service, you define it for a type > of client. > +1 -Eric
An application, and not the architectural style, decides what properties it needs. This is a matter of tradeoffs. I have not understood REST as an all-or-nothing style. It is a set of constraints, and conscious relaxation is okay.
My 2 cents.
Subbu
On Dec 19, 2009, at 12:08 PM, Eric J. Bowman wrote:
> [snip]
Bob Haugen wrote:
>
> Don't know who I am agreeing or disagreeing with, but I do expect
> predictable patterns for the whole M2M order-to-cash cycle to emerge.
> That was the idea behind old-school EDI, but EDI required some months
> of negotiation before any 2 newly-met agents could do business. REST
> should be able to do better, but the standards will need to be worked
> out.
>
> So are standardized media types and relations sufficient?
>
Yes! Absolutely. Here's how the future will likely unfold:
There exists today a wide variety of shopping-cart implementations
online. They each employ wildly divergent markup to handle the same
types of data. A standardization effort will arise in the accessibility
community, to create a metalanguage using existing tools like RDFa,
@role and WAI-ARIA to express standard e-commerce transaction semantics
in a fashion which works across multiple host languages.
Thus, standard media types are made semantically rich by the addition
of standardized attribute sets, which may be added to existing
shopping-cart markup instead of going back to the drawing board, for
the primary purpose of accessibility -- which is all about improving
human-readable interfaces to also be device-accessible, aka
machine-readable. This machine-readable metadata provides the basis
for M2M transactions by coding to the self-documenting API provided by
the human-readable markup.
(I was on about this in another thread. While a standardized Cloud API
would need a new media type, I would create it by borrowing heavily
from XHTML for the purpose of making the interface accessible using
standard techniques like WAI-ARIA and @role, with RDFa to express
semantics preferred over the creation of new elements.
Even the blind ought to be able to manipulate their servers over the
Web, so creating a new media type must not result in breaking the
interface between browser accessibility and the accessibility API of
the host OS, by preventing the use of standard accessibility
techniques. This is turning into a boilerplate argument for me, in
support of my rule of thumb for creating new media types: Don't!)
The problem with creating new media type(s) for this problem area is
that it enforces a new and unfamiliar set of elements and attributes,
which won't be embeddable in existing host languages (it can't replace
your existing shopping-cart markup), and therefore is limited to M2M
transactions, forcing the development and maintenance of separate human
and machine APIs for a system. This is duplication of effort, when
proper maintenance of HTML shopping-cart code is work enough already.
Standardize metadata attributes rather than creating new markup
languages.
What's particularly absurd is the notion of creating a new media type
for each application state progressed through. Ugh! I'll keep saying
it: In REST, we transfer representations of application state, with the
server instructing the client how to behave. The client does not send
instructions to the server on how it should behave. Media types are not
meant as tokens by which a client can direct server behavior.
-Eric
Subbu Allamaraju wrote:
>
> An application, and not the architectural style, decides what
> properties it needs. This is a matter of tradeoffs. I have not
> understood REST as an all-or-nothing style. It is a set of
> constraints, and conscious relaxation is okay.
>
Well, there's a difference between a relaxed constraint, and one that's
missing entirely. I agree that the application decides its desired
properties. But, I've been bitten by the software architecture bug, so
I believe that application development ought to be guided by a formal
architectural definition which encompasses those properties.
As Roy describes in his thesis, start with an empty tree diagram, add
and remove constraints as needed until the constraints elicit the
desired properties for the system, then name the derived architectural
style (if you've applied all of Roy's constraints, feel free to call it
REST). Beginning with the null style...
"
[A] designer starts with the system needs as a whole, without
constraints, and then incrementally identifies and applies constraints
to elements of the system in order to differentiate the design space
and allow the forces that influence system behavior to flow naturally,
in harmony with the system.
"
I don't care if a system isn't REST, so much as I care that it
underwent a well-disciplined development phase. Undisciplined
development starts by declaring the REST style, and then not following
it, resulting in an unknown quantity rather than a system that may be
benchmarked against its stated goals. I'm rather enchanted by that new
textbook, "Software Architecture: Foundations, Theory and Practice"
and its progression through Modeling, Visualization, Analysis,
Implementation, Deployment and Mobility.
None of the fantastic, practical ideas in the book apply to a project which has no notion of being guided by software architecture, leaping right into code development without considering that Lunar Lander may be designed using either the Pipe-and-Filter or the C2 architectural styles. Building against a defined style allows better quality control over the entire lifecycle of a project. If that style is REST, you won't have worries when it comes to scaling, but that can't be guaranteed without the hypertext constraint. The resulting style must be analyzed and shown not to be deficient in that regard, or that the benefits of ignoring the hypertext constraint outweigh the deficiency. My usual $2.02... -Eric
Eric J. Bowman wrote: > > > Ian Davis wrote: > > > > Our platform API for managing RDF storage is RESTful see > > http://n2.talis.com/wiki/API_Site_Map > <http://n2.talis.com/wiki/API_Site_Map> for the docs > > > > Well, it's a fine HTTP API, but I wouldn't go any further than that, > sorry. I would suggest reading Roy's blog post, here: > > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven > <http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven> > Particularly egregious are the Changeset Protocol and Store OAI > Service. Why? The changeset protocol isn't worth arguing over. The OAI-PMH verbs you can consider an existing protocol that needs to be gatewayed onto the web. The ListRecords operation is associated with a resource and bound to GET. This is a straightforward basis to associate later on with something like a "rel" attribute value in a format (e.g. RDF, which is the best working example we have today of an in-band model of data). Worrying about the appearance of the string "verb" in the URL seems like a gensym fallacy. > While it's good to see Content Negotiation in action, it is > not good to see it made part of a query URL. It's adequate, as part of a ladder. This list is littered with "conneg has failed" threads. So why is it not good to see it made part of a query URL when the model as presented in HTTP has more or less failed? > Your supported mime types > each have unique filename extensions, why not use those, plus Content- > Location? What useful properties would that induce? > Or the Alternates header? Or the OPTIONS method? What useful properties would that induce? > Or, if > not a filename extension, why not a URI parameter? What useful properties would that induce? > Anything but using > URI queries. In a hypertext-driven API, use <link rel='alternate'/>. How will the "link" element, the "rel" attribute, and the "alternate" value be understood by a client? > I could go on. 
For hours, after spending 30 minutes reviewing your > site. Please don't promote this as a good example of a REST > implementation. Please don't play fetch me a rock. Produce an improved design that fits the constraints or explain what properties are lost with the current design. I swear, I get tired of REST populists pointing at something, saying that's not REST, without providing alternatives or explanations, especially when people are making valid attempts to align with the architecture. Bill
It depends. On OpenRasta it's a q value too, on apache it's qs. -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Eric J. Bowman Sent: 19 December 2009 20:18 To: Eric J. Bowman Cc: Guilherme Silveira; rest-discuss Subject: Re: [rest-discuss] content type negotiation: multiple entries with same quality value Of course, on the server, it's called a 'qs' value, not a 'q' value. My bad! -Eric ------------------------------------ Yahoo! Groups Links
> For example, I read rfc3023 to mean that > a type with a +xml should be considered 'more specific' than the > generic xml. At least, it indicates that in section 7, but further > confusing me it says in the appendix that they should be considered > opaque and independent. If you have pointers to something that > explains this better, I'd appreciate it... I don't find a passage in rfc3023 that indicates that the suffix is anything but a convention used to know what formats are in the XML family. As such, media types should continue to be processed in an opaque fashion, including the attributes. I haven't seen anything indicating that application/vnd.blah is more specific than application/vnd.blah;item=value

The "more specific" description refers to priority being given to text/plain over text/*, where wildcards are the least specific. I personally sort media types by order of quality, and when the quality is equivalent, I go from most to least specific, with the exception of application/xml, which is always given the lowest priority within a specific media type list. Aka:

Accept: application/xml,application/vnd.blah,text/*,text/plain

Results in:

application/vnd.blah
text/plain
application/xml
text/*
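The sorting policy described above can be sketched as a small function: order by q value, then by specificity (wildcards last), demoting application/xml below other concrete types. This is only a sketch of the poster's personal policy; real Accept parsing has more corner cases (see RFC 2616 section 14.1).

```python
def parse(mt):
    """Split 'type/subtype;q=0.5' into (name, q); q defaults to 1.0."""
    parts = [p.strip() for p in mt.split(";")]
    q = 1.0
    for p in parts[1:]:
        if p.startswith("q="):
            q = float(p[2:])
    return parts[0], q

def specificity(name):
    if name == "*/*":
        return 0
    if name.endswith("/*"):
        return 1
    if name == "application/xml":   # demoted within any concrete list
        return 2
    return 3

def sort_accept(header):
    types = [parse(m) for m in header.split(",")]
    # stable sort: highest q first, then most specific first
    types.sort(key=lambda t: (-t[1], -specificity(t[0])))
    return [name for name, _ in types]
```

Running it on the Accept header from the post reproduces the stated ordering, with application/vnd.blah first and text/* last.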
Jan, I have been following this interesting thread for a while and figure it's time for me to share my thoughts. > Note that for human clients this is entirely different because these > clients in fact *can* simply react to whatever next state they are put > in. I do not agree - they react to well-known states, as I will argue below. If a human meets some unexpected state during shopping they either give up or return to the home page and try again - something you could make the computer do as well. > They do not have their own state machine. > They do not follow an overall goal. I think the issue is just the same for humans as for computers. A human usually has a goal - if I go to a shop I want to buy something (or compare prices). I might just be bored and surf around, but that's another situation - you don't program computers to be bored :-) Remember websites back in approx. year 2000 - at that time most people didn't know about webshops, all webshops were different, there was no agreed-upon idea of the UI for a shop, and people *really* had a hard time figuring out how to browse, shop and pay. But over time we, as users, have learned, just as the webshops have become more similar. We, the users, look for a search text field if we want to search by text, or we look for a menu to browse by category. Then we figure out where the price is located, then we look for the "buy" button ... and so on. We keep looking for more or less well-defined markers (relations) of how-to-do-what-to-do-next. We have *learned* to recognize the "webshop media type" - it consists of text/shop.categories+html, text/shop.item+html, text/shop.order+html and so on. It's not exactly standardized, but as humans it's good enough for us. If you were to program a computer such that it could buy a book, then you would teach it how to behave when presented with a text/shop.categories+html (or rather application/shop.categories+xml) document. 
In all of the shop media types there would be a general "navigation" part, and this would be used to navigate to the place where the program could do what it wanted. First your program would be looking for an application/shop.categories+xml resource: given the root URL, it would browse around until it found it, exactly like any human would do. Then it would browse for a category, or search for a word etc., until it found a suitable application/shop.item+xml resource with attributes enough for the program to recognize it as the item it was looking for. Then it would look for the "buy" relation, and this would lead it to an application/shop.orders+xml resource and so on and so on. I see no difference between the human's and the computer's way of interacting with the shop, except for one thing: standardization - a human is extremely more flexible than a computer when trying to recognize a thing. But besides that it's same same. So in my point of view everything is fine. The human web is RESTful, computers can work in the same way, ergo the computer web is RESTful too. What did I miss? Regards, Jørn
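Jørn's walkthrough can be sketched as a toy client that drives itself purely by media types and link relations. The in-memory `SHOP` table stands in for HTTP GET, and every media type name, rel value, and URL below is invented for illustration, in the spirit of his application/shop.*+xml family.

```python
import xml.etree.ElementTree as ET

# In-memory stand-in for HTTP GET: url -> (media type, body).
# All media type names, rel values and URLs here are illustrative.
SHOP = {
    "/": ("application/shop.categories+xml",
          "<categories><link rel='category' href='/books'/></categories>"),
    "/books": ("application/shop.items+xml",
               "<items><link rel='item' href='/books/42' title='RESTful Web'/></items>"),
    "/books/42": ("application/shop.item+xml",
                  "<item><title>RESTful Web</title><link rel='buy' href='/orders'/></item>"),
}

def get(url):
    return SHOP[url]

def buy(title):
    """Walk from the root URI, driven only by media types and link rels."""
    _, body = get("/")
    categories = ET.fromstring(body)
    # browse into the first category, as a human would
    _, body = get(categories.find("link[@rel='category']").get("href"))
    for link in ET.fromstring(body).iter("link"):
        if link.get("rel") == "item" and link.get("title") == title:
            mtype, body = get(link.get("href"))
            if mtype != "application/shop.item+xml":
                continue  # not a representation this client understands
            item = ET.fromstring(body)
            return item.find("link[@rel='buy']").get("href")
    return None
```

The client starts with a single URI and discovers everything else from links, which is exactly the behavior Stefan recommends in the opening message of the thread.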
Jørn, On Dec 20, 2009, at 8:20 PM, Jørn Wildt wrote: > Jan, I have been following this interesting thread for a while and > figure it's time for me to share my thoughts. > >> Note that for human clients this is entirely different because these >> clients in fact *can* simply react to whatever next state they are put >> in. > > I do not agree - they react to well-known states, as I will argue > below. If a human meets some unexpected state during shopping they > either give up or return to the home page and try again - something > you could make the computer do as well. Yes, I agree - in part. The situation is basically the same, but human users are capable of following unexpected links to a certain extent, simply because they can read and to some extent figure out what the service owner is trying to say with the new link (or go and read documentation). For example, Amazon can add "1-click" functionality and a human user will pretty likely figure out what it means and, if desired, use it. This is where REST shines so bright, because it is really the service that drives the client. Sure, the overall goal of the client (e.g. buying a book) must be achievable, so if Amazon turned into a site for taking your theoretical driver's license test the human client fails, too. Machine clients cannot take an unexpected transition. Never. (Assuming any kind of AI is not part of the discussion.) > >> They do not have their own state machine. >> They do not follow an overall goal. > > I think the issue is just the same for humans as well as for > computers. A human usually has a goal - if I go to a shop I want to > buy something (or compare prices). I might just be bored and surf > around, but that's another situation - you don't program computers > to be bored :-) Agreed. > > Remember websites back in approx. 
year 2000 - at that time most > people didn't know about webshops, all webshops were different, > there was no agreed-upon idea of the UI for a shop, and people > *really* had a hard time figuring out how to browse, shop and pay. > But over time we, as users, have learned, just as the webshops have > become more similar. We, the users, look for a search text field if > we want to search by text, or we look for a menu to browse by > category. Then we figure out where the price is located, then we > look for the "buy" button ... and so on. We keep looking for more or > less well-defined markers (relations) of how-to-do-what-to-do-next. > > We have *learned* to recognize the "webshop media type" - it > consists of text/shop.categories+html, text/shop.item+html, text/ > shop.order+html and so on. It's not exactly standardized, but as > humans it's good enough for us. Exactly. "It's good enough for us" is what really enables the service to evolve without breaking the whole interaction. Humans can follow (for the most part). > > If you were to program a computer such that it could buy a book, then > you would teach it how to behave when presented with a text/ > shop.categories+html (or rather application/shop.categories+xml) > document. In all of the shop media types there would be a general > "navigation" part and this would be used to navigate to the place > where the program could do what it wanted. > > First your program would be looking for an application/ > shop.categories+xml resource: given the root URL, it would browse > around until it found it, exactly like any human would do. Then it > would browse for a category, or search for a word etc. until it > found a suitable application/shop.item+xml resource with attributes > enough for the program to recognize it as the item it was looking for. Full agreement until this point. Works fine. > Then it would look for the "buy" relation, Here is the point, though. 
Looking for the "buy" relation manifests the assumption that it will be available from an item in a search result. If the service stops providing it at this point, the client breaks. The service cannot insert extra hops with human-readable documentation to direct the client to a place where it finally finds the "buy" relation. This might not sound so unexpected or bad after all, but it is equivalent to specifying this: "Representations of items provide a 'buy' relation" - and if you do this in a spec, you effectively do two things:

- make use of a resource 'type' named item
- assert a minimal guarantee about the representation of an item

Both violate REST's hypermedia constraint, because they enable an assumption about the state machine at design time instead of making the state machine discoverable. It might not look like a big deal, but if you apply the situation to intra-enterprise services you reach a point rather quickly where your media type specifications look like OO class definitions with regard to the contract they impose. And then you (at least I) wonder how much evolvability is really left on the server side that is guaranteed not to break any of your clients, ever. (And again: I am all for REST inside the enterprise, I just want to do my homework :-) Suppose you are asked this: "Fine then. Seems that searches MUST result in lists of items and that items MUST be represented by media type XY. So, what's this independent evolvability thingy about? Tell me, *what* in fact am I allowed to change in my server that makes REST systems any more decoupled than others?" Yes, there are substantial other benefits (visibility, simplicity, no vendor lock-in, decade-proven technology with billions of users protecting the investment in HTTP), but if I am going to stress independent evolvability, I'd better have a hands-on answer to that one. > and this would lead it to an application/shop.orders+xml resource > and so on and so on. 
> > I see no difference between the human's and the computer's way of > interacting with the shop, except for one thing: standardization - a > human is extremely more flexible than a computer when trying to > recognize a thing. But besides that it's same same. > > So in my point of view everything is fine. The human web is RESTful, > computers can work in the same way, ergo the computer web is RESTful too. So then, you'd say that it is perfectly RESTful that AtomPub effectively says "a GET on a collection MUST at least return application/atom+xml"? > > What did I miss? Nothing, really. For one, I seem to be very bad at getting this across (or maybe I am plain dumb); on the other hand, I'd just add to your thoughts that it is exactly the human flexibility that is the difference. Jan > > Regards, Jørn > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
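Jan's objection can be made concrete: the moment a client's code bakes in "item representations carry a 'buy' relation", server evolution is constrained. A minimal sketch, with the rel values and markup invented for illustration:

```python
import xml.etree.ElementTree as ET

def order_url(item_xml):
    """Return the href of the 'buy' relation, or None.

    The hardcoded rel value is exactly the design-time assumption Jan
    describes: if the server evolves and removes it, the only recovery a
    machine client has is to give up or restart from the entry point.
    """
    item = ET.fromstring(item_xml)
    link = item.find("link[@rel='buy']")
    return None if link is None else link.get("href")

before = "<item><link rel='buy' href='/orders'/></item>"
# the server evolved: 'buy' replaced by an intermediate confirmation step
after = "<item><link rel='confirm-age' href='/confirm'/></item>"
```

A human shopper would read the new page and click through the confirmation step; this client can only report that the relation it was promised is gone.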
On Sun, 2009-12-20 at 22:34 +0100, Jan Algermissen wrote: > > > So then, you'd say that it is perfectly RESTful that AtomPub > effectively says "a GET on a collection MUST at least return > application/atom+xml"? > Is it not the other way round? A collection in Atom is something with that media type? If you have a client that likes Atom, try asking for that media type, then you know what to do with it. I don't see anywhere that you need to know a priori that it is an Atom collection first (you may get an indication of that from other link types, of course). The "at least" is that it does not preclude other media types for a collection as well. Justin Cormack
On Dec 20, 2009, at 11:31 PM, Justin Cormack wrote: > On Sun, 2009-12-20 at 22:34 +0100, Jan Algermissen wrote: > >> >> >> So then, you'd say that it is perfectly RESTful that AtomPub >> effectively says "a GET on a collection MUST at least return >> application/atom+xml"? >> > > Is it not the other way round? A collection in Atom is something with > that media type? No, the media type application/atom+xml (type=feed) does not convey all the semantics of an AtomPub collection. You learn that a resource is a collection from an AtomPub service document, not from the media type. IOW, there are many things that are not a collection that can be very nicely represented by feed documents. The question is really: how could you possibly code a client that does a GET on an AtomPub collection and then processes the collection's entries without prior knowledge that a feed will (definitely!) be returned? > If you have a client that likes Atom, try asking for > that media type, then you know what to do with it. I don't see anywhere > you need to know a priori that it is an Atom collection first (you may > get an indication of that from other link types of course). So, how do you write a machine client that, for example, iterates over the entries of a collection to do something with the entries without such a priori knowledge? Example: pick a repository, iterate over the documents in this repository and find an entry that matches some criteria. Note that this does not mean: if you happen to come across a repository and if the GET happens to return a feed then do this or that (which would be a crawl scenario, where what the crawler does is driven by the responses)! I am thinking about clients that have a certain goal to follow. > > The "at least" is that it does not preclude other media types for a > collection as well. Yep, sure. But that does not change the question. It only allows the server to provide additional representations. 
Jan -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
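Jan's point that collection-ness comes from the service document, not the media type, can be sketched like this. The documents are abbreviated stand-ins (a real client would GET them over HTTP), but the namespaces are the real AtomPub and Atom ones.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
APP = "{http://www.w3.org/2007/app}"

# Abbreviated AtomPub service document and collection feed.
SERVICE = """<service xmlns="http://www.w3.org/2007/app"
                      xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <collection href="/docs"><atom:title>Documents</atom:title></collection>
  </workspace>
</service>"""

FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>spec.pdf</title></entry>
  <entry><title>notes.txt</title></entry>
</feed>"""

def collection_hrefs(service_xml):
    # Collection-ness is learned from the service document...
    return [c.get("href")
            for c in ET.fromstring(service_xml).iter(APP + "collection")]

def entry_titles(feed_xml):
    # ...while iterating the entries still leans on AtomPub's a priori
    # guarantee that a GET on a collection returns a feed -- Jan's point.
    return [t.text for t in ET.fromstring(feed_xml).iter(ATOM + "title")]
```

The second function only works because the spec promises application/atom+xml in advance; that promise is exactly the design-time contract under discussion.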
Bill de hOra wrote: > > > Particularly egregious are the Changeset Protocol and Store OAI > > Service. > > Why? > Actually, having worked through a REST design for Contentbox, I would call it the most egregious, since it turned out it could be built using off-the-shelf parts that are widely implemented and understood, and easy to understand for those who are new to the particular technologies I suggested, for a REST API right out of the box without half-trying. > > The changeset protocol isn't worth arguing over. > Why not? It fails to be a uniform interface, while assigning a delta format to be used with PUT. Since it can and should be made RESTful, I fail to see how that's nit-picking or otherwise irrelevant criticism. > > The OAI-PMH verbs > you can consider an existing protocol that needs to be gatewayed onto > the web. The ListRecords operation is associated with a resource and > bound to GET. This is a straightforward basis to associate later on > with something like a "rel" attribute value in a format (e.g. RDF, > which is the best working example we have today of an in-band model > of data). Worrying about the appearance of the string "verb" in the > URL seems like a gensym fallacy. > OAI-PMH is not something I'm familiar with; looking at it on the n2 wiki after looking at the Changeset Protocol led me to believe that it was also specific to the n2 application. To avoid confusion, the wiki should link to the actual spec, and should also indicate that this interface isn't being claimed to be RESTful. Accusing me of basing my criticism on the word "verb" appearing in a URI isn't very polite; asking me to elaborate would be, before jumping to such conclusions. > > > While it's good to see Content Negotiation in action, it is > > not good to see it made part of a query URL. > > It's adequate, as part of a ladder. This list is littered with > "conneg has failed" threads. 
So why is it not good to see it made part of a > query URL when the model as presented in HTTP has more or less failed? > This list is also littered with me pointing out that we all encounter content negotiation on a daily basis, thus it is not a failure. Threads here about failed conneg almost always come down to failing to make representations resources in their own right, and to use their URIs in the Content-Location header. The counter-argument is always that this header is optional. Well, of course it is; it isn't needed for the primary use case for conneg, which is compression. Content Negotiation also works well for having both (X)HTML and Atom representations of a resource; the odd client receiving the wrong variant ought to be able to self-correct by either reading the Alternates header or introspecting for a <link rel='alternate'/> with the appropriate @type. But Content-Location is needed to make Accept-header-based conneg work properly in the real world. The solution to this deficiency isn't to make conneg part of a query URL, even on the off-chance that such a system is using query URLs in Content-Location headers. The failure of the conneg model presented in HTTP has everything to do with the poor quality of client Accept headers, and nothing to do with using filename extensions in the Path. This failure, plus the failure of a certain browser to support application/xhtml+xml, has throttled the innovation we would otherwise have seen evolve around content negotiation and XHTML on the Web. > > Your supported mime types > > each have unique filename extensions, why not use those, plus > > Content-Location? > > What useful properties would that induce? 
> Proper identification of resources and the implementation of self-descriptive messaging would apply two of REST's four uniform interface constraints: " REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. " The primary desirable property induced by REST's uniform interface is the decoupling of client and server, allowing independent evolvability. This is why media types don't need versioning -- they do not represent a contract (an inherent coupling) between client and server, which is why the goal is to re-use and extend standard media types. " The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. " The other desirable property induced is visibility, which allows scaling and serendipitous re-use. > > > Or the Alternates header? Or the OPTIONS method? > > What useful properties would that induce? > Visibility. If an Atom client receives an XHTML representation with an Alternates header, it can request the proper representation without consulting the user. A smarter client, in the absence of an Alternates header, could make an OPTIONS request on the resource and receive information about other variants. Depending on the API, some systems may only send Alternates when an OPTIONS request is made. But OPTIONS is something I view as primarily for human consumption, specifically developers wishing to query for an interface definition in order to design a custom client (serendipitous re-use). > > > Or, if > > not a filename extension, why not a URI parameter? > > What useful properties would that induce? > The desirable properties of a uniform interface aren't achieved without proper identification of resources, which means following RFC 3986. 
URIs may have parameters and queries. Properly applying URIs means that, if output format matters and isn't part of the Path, then it must be a parameter -- that's what it is, a property of the resource, not an attribute of the search query. To include an output format in a query would be to imply that the scope of the search be limited to include only documents of a certain media type. While URIs are opaque, there is a logical semantic separation between hierarchical path, parameter and query inherent in the spec that must be followed. Placing output type in a query string, instead of a filename extension, is not a standard solution, therefore it is not a visible solution. By failing to apply the resource identification constraint, this solution does not have a uniform interface, thereby coupling client to server, reducing scalability, and forgoing serendipitous re-use. > > > Anything but using > > URI queries. In a hypertext-driven API, use <link rel='alternate'/>. > > How will the "link" element and the "rel" attribute and the > "alternate" value be understood by a client? > By using standard media types and link relations. Any client will only be fully compatible with any REST API if it fully implements the media types and link relations used. A simple example would be a weblog -- I'm viewing an (X)HTML page, but my browser has introspected for <link rel='alternate'/> and read @type. If it finds a match, it displays an icon in the location bar, alerting me to the existence of a feed for the page. Another browser may fully support the media type and display the page just fine, but not introspect for a feed to display an icon. REST supports this sort of graceful degradation -- this other browser obviously doesn't implement or care about the standard link relation known as 'alternate'. For a client to be fully compatible with a REST system that relies on any standard link relation, it must implement that link relation. > > > I could go on. 
For hours, after spending 30 minutes reviewing your > > site. Please don't promote this as a good example of a REST > > implementation. > > Please don't play fetch me a rock. Produce an improved design that > fits the constraints or explain what properties are lost with the > current design. > There is a fundamental REST mismatch in defining an HTTP interface to an obsolete media type that doesn't define the methods being used. GET is inferred; but there exists no standard RSS Publishing Protocol that extends the application/rss+xml media type to encompass any other method. In a REST API, "what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types)." Worse, to edit certain resources, their URI is entered as the query to another resource (definitely an RPC endpoint), another fundamental REST mismatch. Those are examples of the sub-constraint of the uniform interface constraint of "manipulation of resources through representations" being violated. So, while mostly meeting the client-cache-stateless-server and layered-system constraints, this API isn't applying _any_ of the additional constraints which make up REST's uniform interface, "The central feature that distinguishes the REST architectural style from other network-based styles." Thus, the desirable properties lost are all those desirable properties associated with REST. No biggie... > > I swear, I get tired of REST populists pointing at something, saying > that's not REST, without providing alternatives or explanations, > especially when people are making valid attempts to align with the > architecture. > Then update the thread, before hitting "send"... 
http://tech.groups.yahoo.com/group/rest-discuss/message/14240 (Notice I've paginated Contentbox-index using URI parameters, while paginating Contentbox-search using a query attribute -- to use ";page=2" on Contentbox-search would indicate page 2 of the query-interface resource, i.e. the search form, not page 2 of the results.) I'm not the sort to single something out without an explanation. I'm as self-serving as the next human; I will eventually explain what I mean, because I'm using it as an example to make some larger point(s). Lately, I've been on about using the standard media types which best fit the task at hand, rather than creating proprietary media types as a solution to every problem. This project is an example of choosing the wrong media types, except where a new delta type is properly created (yet improperly implemented). But I believe the wrong media types were chosen because the project started by defining URIs and methods. That's something else I've been on about lately: a project must start by defining its resources in terms of standard media types and link relations before proceeding with designing the URI allocation scheme. How many times has it been said on this list over the years that URI allocation scheme has nothing to do with REST? Yet that's where people always seem to start, naming their resources before they've been properly identified. -Eric
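Eric's pagination distinction (";page=2" as a parameter of the resource itself vs. "?page=2" belonging to a search query) maps onto the generic URI syntax; Python's urlparse even surfaces parameters of the final path segment as a separate component (following the older RFC 2396 notion of path parameters). The Contentbox URLs below are illustrative.

```python
from urllib.parse import urlparse

# ";page=2" is a parameter of the index resource itself...
index = urlparse("http://example.org/contentbox-index;page=2")

# ...while on a search URI, "page" travels with the query it paginates.
search = urlparse("http://example.org/contentbox-search?q=rest&page=2")
```

Here `index.path` stays "/contentbox-index" with "page=2" parsed off as a parameter, while the search URI keeps pagination inside its query string, mirroring the Contentbox-index vs. Contentbox-search split described above.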
The problem that's been preoccupying my thoughts during the time I spend experimenting with REST, is how to teach it. I don't think anyone disputes the fact that REST is hard to learn. But why is that? I've convinced myself it's not because the students are morons, but that we, collectively as a community, have failed to teach it properly. The best evidence of that, is the recent thread asking for examples of good REST systems: It's infinitely easier to find REST implementations that aren't, than it is to find good examples (I've seen REST implemented effectively on Intranets where the client is a known quantity) that we can point to. We don't teach it properly, because we didn't learn it properly ourselves. Besides Roy, who here at any level of REST ability has a background in software architecture? Personally, I think it took me so many years to become comfortable with REST because it was my first experience with software development guided by a defined architectural style. I basically had to teach myself software architecture, but not until well after I started fancying myself a REST developer. What I'm saying, is that REST must be taught in terms of applied architecture, instead of by example, before there will ever be enough good examples to point to. You can't learn XSLT by reading O'Reilly's "XSLT Cookbook" of examples, yet we try teaching REST by hauling out the good ol' shopping cart every time. This has obviously failed. I don't think it's necessary for a REST student to understand anything about software architecture (except maybe a few terms), only to follow an approach grounded in software architecture. The wonderful new textbook, "Software Architecture: Foundations, Theory, and Practice" is something that should be read by the community, but not for the purpose of using that textbook to teach REST. The textbook uses REST to illustrate the principles of software architecture, it doesn't teach REST. 
But it can be used to inform us on how to better teach REST. The textbook has chapters on Modeling, Visualization, Analysis, Implementation, and Deployment and Mobility. This is the disciplined approach that I keep harping on about, of late. The Modeling chapter discusses modeling both architectures and architectural styles. It says nothing about modeling specific to REST. Roy's thesis uses modeling to illustrate the REST architectural style. So the first challenge in teaching REST is to teach how to model the components, connectors, resources and interfaces for a proposed system. REST constrains the interaction between connectors, and these constraints must be part of the model. The Visualization chapter explains the separation of modeling and visualization, but says nothing about visualization within the context of REST. The second challenge in teaching REST using a software-architecture-centric approach is to use the model as a basis for visualizing a proposed system in terms of the Process, Connector and Data views for REST as described in Roy's thesis. The Analysis chapter also has nothing REST-specific. It's fairly self-explanatory, though. Modeling, Visualization and Analysis are not a serial approach, but an iterative process. This is the stage where, if the Model calls for the Atom media type, despite the lack of URIs at this point, the documents may be written and validated to flesh out the data model for analysis. How many hardware resources does the model require? Does the model need to be adjusted up/down? The third challenge in teaching REST is: does the model fit the system's goals? Finally, we get to Implementation, another chapter with nary a peep about REST. (I say finally, because the Deployment chapter covers topics that, frankly, anyone pursuing REST probably has hands-on experience with, so I don't see it as a teaching challenge.) 
Yes, this is where a URI allocation scheme is finally devised for the modeled, visualized and analyzed resources, and methods implemented so we can pass data over the wire. It is iterative with the previous methods -- selecting off-the-shelf parts may require architectural adjustment due to different design assumptions being made in a standard library.

The textbook defines Implementation as the problem of maintaining a mapping between the developed system and its architectural model, and focuses on frameworks as the solution. It also says, "To imbue [desired properties] in the target system, the implementation _must_ be derived from its architecture." This is the fourth, and most important, challenge in teaching REST. Is the reason so many systems claim to be RESTful, but aren't, that 99% of developers simply don't *know* how to derive an implementation from an architectural style, because they've never been taught? I don't think they need to be taught, only given the tools to understand how a RESTful implementation is derived -- that these tools are derived from the tenets of software architecture may remain hidden behind a generic interface (so to speak).

My suggestion is to dredge up and dust off ye olde shopping-cart example. Why do we insist on presenting it by defining what methods to apply to what resources of interest to obtain what response code and data, beginning by defining a URI allocation scheme, when we know that URI allocation schemes have (almost) nothing to do with REST, and Roy has told us that we should be discussing our resources in terms of media types and link relations instead? At some point, it should be presented in terms of Modeling, Visualizing, Analyzing, and Implementing in a REST-specific fashion. I think this may address some of the criticism of REST lacking some sort of formal guidelines. 
In brief: Define resources in terms of standard media types and link relations, saving URI allocation and method selection for the implementation phase. -Eric
>>>>> "Eric" == Eric J Bowman <eric@...> writes:
Eric> The problem that's been preoccupying my thoughts during the
Eric> time I spend experimenting with REST, is how to teach it. I
Eric> don't think anyone disputes the fact that REST is hard to
Eric> learn. But why is that? I've convinced myself it's not
Eric> because the students are morons, but that we, collectively
Eric> as a community, have failed to teach it properly. The best
Eric> evidence of that, is the recent thread asking for examples
Eric> of good REST systems: It's infinitely easier to find REST
Eric> implementations that aren't, than it is to find good
Eric> examples (I've seen REST implemented effectively on
Eric> Intranets where the client is a known quantity) that we can
Eric> point to.
Very interesting thoughts.
May I add that the REST community here also has a bit of a Zen
approach to REST questions? Responses are also usually of the kind
"this is not it" or "not close yet" :-)
A while ago I saw a presentation that defined REST levels. I think
that's very helpful. First get people to think about naming things
properly (every resource has a name), about verbs: use put/delete, and
use proper http response codes.
From there on we can work on representations (content types, quality)
till we reach nirvana, the self describing system where clients have
no dependencies. Perhaps. I think the last part is probably less
interesting than the basics.
Once people have a feeling for the basics they want more. In my
experience it's the basics where people have the initial trouble as
they're so used to RPC style thinking because all frameworks they have
worked with use that, as you also indicate.
People who name their resources, use put/delete, want more.
Eric> In brief: Define resources in terms of standard media types
Eric> and link relations, saving URI allocation and method
Eric> selection for the implementation phase.
And I think this is way, way too abstract if you're new. It's the
standard media types and link relations that take a while to sink in.
So I suggest it the other way around.
--
All the best,
Berend de Boer
berend@... wrote: > > Eric> In brief: Define resources in terms of standard media types > Eric> and link relations, saving URI allocation and method > Eric> selection for the implementation phase. > > And I think this is way, way too abstract if you're new. It's the > standard media types and link relations that take a while to sink in. >

Yeah, because we never take the architectural-style approach around here, and say things like:

"If your only representation of that resource is text/html, then you can't DELETE that resource. You'd have to change to, or use conneg to add a representation in, application/xhtml+xml, because that media type supports DELETE (via Xforms) while text/html only supports GET and POST. The same goes for application/rss+xml, which only supports GET -- you'd have to change to, or use conneg to add, application/atom+xml because that media type supports DELETE (via Atom Protocol). If you go rogue because HTTP allows you to DELETE your negotiated text/html + application/rss+xml resource anyway, then you're violating the uniform interface constraint. Not only has your DELETE not been driven by hypertext (a REST mismatch exists in Atom Protocol's use of DELETE and PUT, but not with Xforms), but its use requires out-of-band knowledge specific to your API that is not encompassed within the definitions of the media types you've used. Therefore, your interface couples client to server, failing to be generic."

Even if that's too abstract for a noob, at least they'll understand that the correct selection amongst standard media types is a vital aspect to confront, hopefully before designing URIs, since the chosen media types may influence URI design. Maybe you're right, but changing how REST is taught can't possibly lead to a worse outcome than we have now, so it's worth a try even if we're ignoring old assumptions. -Eric
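Eric's media-type-gated view of DELETE could be sketched like this. Everything here is illustrative: the `DELETE_CAPABLE` set and `may_delete` helper are hypothetical names (not any real library), and the set merely restates Eric's claims from the post about which media types encompass DELETE.

```python
# Illustrative sketch only: the DELETE_CAPABLE set restates Eric's claims
# (XForms gives application/xhtml+xml a DELETE, AtomPub gives
# application/atom+xml one; text/html and application/rss+xml have none).
# A generic client could then gate DELETE on the negotiated media type
# instead of on out-of-band, API-specific knowledge.

DELETE_CAPABLE = {"application/xhtml+xml", "application/atom+xml"}

def may_delete(content_type):
    """True only if the representation's media type defines DELETE."""
    # Drop parameters such as "; charset=utf-8" before comparing.
    return content_type.split(";")[0].strip() in DELETE_CAPABLE
```

The point of the sketch is that the check depends only on the media-type definitions, never on knowledge specific to one API.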
"Eric J. Bowman" wrote: > > (a REST mismatch exists in Atom > Protocol's use of DELETE and PUT, but not with Xforms) > Sorry, not PUT, I was thinking about something else. But there is a minor REST mismatch in AtomPub regarding DELETE not being hypertext- driven, an obvious coupling of client to server. But, as a small portion of an overall REST system, not enough to claim failure to apply the hypertext constraint -- just a nitpick. While Atom Protocol doesn't specify the behavior of DELETE on a collection, this disclaimer still scopes DELETE to any resource with an Atom representation. -Eric
> > What did I miss? > Nothing really. ... on the other hand I'd just add to your > thoughts that it is exactly the human flexibility that is the > difference. Yes. That certainly makes a huge difference. The only thing I can come up with in terms of flexibility and ability to learn new behaviours is to actually download code to the client. Then, in principle, you could always teach the client new stuff. But it's probably not really doable in practice. > So, what's this independent evolvability > thingy about? Tell me, *what* in fact am I allowed to change in my > server that makes REST systems any more decoupled than others?) I am pretty sure I understand your issue now - and I cannot give you any clear answer to that. But people here have been suggesting all sorts of solutions: first of all there is content negotiation - you are free to evolve your service in *any* direction you want, as long as you version it using content negotiation. Old clients will work with old resource types, and new, improved, clients will work with new resource types. > So then, you'd say that it is perfectly RESTful that AtomPub > effectively says "a GET on a collection MUST at least return > application/atom+xml"? I am not expert enough to answer that question. Sorry. My personal guess would be - yes, just like a human would expect it. If a human didn't find it they would give up too: I myself wouldn't know what to do if an Atom feed stopped serving what it was expected to serve. I would stop working with that feed. /Jørn
"Eric J. Bowman" wrote: > > Sorry, not PUT, I was thinking about something else. But there is a > minor REST mismatch in AtomPub regarding DELETE not being hypertext- > driven, an obvious coupling of client to server. But, as a small > portion of an overall REST system, not enough to claim failure to > apply the hypertext constraint -- just a nitpick. While Atom Protocol > doesn't specify the behavior of DELETE on a collection, this > disclaimer still scopes DELETE to any resource with an Atom > representation. > Going a bit OT: I keep forgetting that I wrote a minimally-featured Atom Protocol client using Xforms, to address this REST mismatch. An Xforms REST application follows the MVC architectural style on the client. An XHTML interface is provided, which takes an Atom collection feed and displays it as one big Xform allowing individual entries to be added, edited or removed by directly manipulating the Atom resources, depending on user role as provided by HTTP-Digest. A form button may be added to any individual entry, which will call its DELETE method, meeting the hypertext constraint that eludes other Atom Protocol implementations. Part of the Xform allows the collection to be deleted in one of three ways: DELETE all members, DELETE the collection but not its members, or DELETE all members and then DELETE the collection. While having a collection-targeted DELETE silently remove all member resources of the collection, then remove the collection resource, has the "Roy stamp of approval" I do not wish to go that route here. My way is visible, because batch deletion occurs as separate DELETE requests to each member resource. 
The three interface options for deleting a collection are self-documenting via hypertext -- it's intuitive from the Xforms interface description in the <head> that a DELETE on a collection URL will not trigger the deletion of member resources, because that option loops through every member with an individual DELETE both ways it's called, and a trial collection DELETE will confirm this when followup HEAD requests made on collection members return 200 OK. No human- or machine-language interface description is needed.

So my client extends Atom Protocol by self-describing the unspecified behavior of DELETE on a collection, in two different user-selectable ways, using hypertext to drive application state and avoiding Atom Protocol's REST mismatch on DELETE for both collections and member resources. Client and server are now decoupled, and may evolve independently. Server collection-deletion options may be changed by updating the hypertext. The client, in this case an Xforms-compatible browser, is free to evolve independently since it isn't required to have any button in the chrome to handle deleting an Atom collection (or member). It only needs to know how to interpret Xforms, not Atom or Atom Protocol. -Eric
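The batch-deletion behavior described above -- one visible DELETE per member, then optionally one on the collection itself -- might look roughly like this. The `delete_collection` function, the pre-parsed `feed` structure, and the injected `http_delete` callable are all hypothetical stand-ins for illustration, not code from Eric's XForms client:

```python
# Hypothetical stand-in for the client behavior described above: batch
# deletion is "visible" because it is issued as one DELETE per member
# resource, optionally followed by a DELETE on the collection URL.
# `feed` stands in for a parsed Atom collection document, and
# `http_delete` is any callable that performs a DELETE on a URL.

def delete_collection(feed, http_delete, members=True, collection=True):
    """Return the list of URLs DELETEd, in order."""
    issued = []
    if members:
        for href in feed["entries"]:   # one visible DELETE per member
            http_delete(href)
            issued.append(href)
    if collection:                     # then (optionally) the collection itself
        http_delete(feed["self"])
        issued.append(feed["self"])
    return issued
```

The three options in the interface map onto the flag combinations: members only, collection only, or members then collection.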
> Here is the point, though. "Looking for the "Buy" relation manifests > the assumption that it will be available from an item in a serach > result." If the service stops providing it at this point the client > breaks. The service cannot insert extra hops with human readable > documentation to direct the cliet to a place where it finally finds > the "buy" relation. Let's compare this to the human interaction: Amazon decides to remove the "buy" link on an item. Why on earth would they do that? People wouldn't be able to shop anymore because they broke the expectation of the client. Okay, what Amazon really wanted was to insert a new step in the workflow. For instance "click this checkbox if you are not a terrorist" (now required by the US government :-)). What would Amazon do? Probably set up a huge sign reading "we have changed our interface, blah blah, follow this link instead". This is the human documentation you mention. What would the corresponding computer change be? If it is not a mandatory step (which it probably is) then it could just use content negotiation to version the interface and everybody would be happy. If it was mandatory then our computer has to learn a new media type: application/shop.terroristcheck+xml. Amazon would still keep the "buy" relation, but at the end of it there would be our new media type. From this I would say: clients should *not expect any specific media type for a relation* - the "buy" relation only states "GET this to continue buying", but at the end of that link the computer could find any resource: an application/shop.order+xml or application/shop.terroristcheck+xml or something else. I guess a client should not be driven by *expectations* of what to find where - only by *knowledge* of its current resource and a *goal*. You would then end up with a huge matrix of media-types and goals, each of which telling the computer what to do next. 
Any unknown media type would kill the application (or make it backtrack and try something different). Our initial goal would be "find item" and initially we would look up a specific URL and get the representation stored there. Then our media-type/goal matrix would tell us what to do next. It would never depend on an expectation of what resource type to find at the end of any relation - it should only know that "if I have media-type=application/shop.item+xml and goal=buy then I must GET the 'buy' relation and then re-evaluate with the new resource". Does that make sense? /Jørn
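Jørn's media-type/goal matrix might be sketched as follows. Everything here is a hypothetical illustration: the shop media types come from this thread (not any registry), the handler and link-relation names are invented, and `follow` stands in for "GET this link and return (media type, document)":

```python
# Sketch of the media-type/goal matrix: the client knows its goal and
# reacts to whatever media type the server returns, instead of
# hard-coding a fixed search -> pick -> buy workflow.

def handle_item(doc, follow):
    # From an item, continue toward "buy" via the link the server gives us.
    return follow(doc["links"]["buy"])

def handle_check(doc, follow):
    # The server inserted an extra step; this client knows that type too.
    return follow(doc["links"]["confirm"])

# (media type, goal) -> what to do next. The server picks the media type,
# the client only controls the goal.
MATRIX = {
    ("application/shop.item+xml", "buy"): handle_item,
    ("application/shop.terroristcheck+xml", "buy"): handle_check,
}

# States in which the goal counts as achieved.
GOAL_STATES = {("application/shop.order+xml", "buy")}

def pursue(goal, media_type, doc, follow):
    """Re-evaluate after every response until a goal state is reached."""
    while (media_type, goal) not in GOAL_STATES:
        handler = MATRIX.get((media_type, goal))
        if handler is None:
            # An unknown media type kills the application, as described above.
            raise RuntimeError("no way to reach %r from %s" % (goal, media_type))
        media_type, doc = handler(doc, follow)
    return doc
```

Note the loop never assumes which state comes next; it only assumes it knows every media type the server might legitimately return for this goal.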
Jan Algermissen wrote: > > So then, you'd say that it is perfectly RESTful that AtomPub > effectively says "a GET on a collection MUST at least return > application/atom+xml"? > Yes. That allows for a resource to have more than just an Atom representation. Reading between the lines and remembering that a request is made up of more than just its URI and method, it also effectively says that a GET request with an Accept header consisting only of 'application/atom+xml' MUST return 'application/atom+xml' or issue a 406 error. -Eric
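Eric's reading of AtomPub -- serve what the Accept header demands or answer 406 -- can be sketched server-side. This is a deliberately naive `negotiate` helper written for illustration; real Accept parsing (q-values, `type/*` ranges) is considerably more involved:

```python
# Naive server-side content negotiation, just enough to illustrate the
# "application/atom+xml or 406" contract read out of AtomPub above.

def negotiate(accept_header, available):
    """Return the media type to serve, or None (meaning 406 Not Acceptable)."""
    # Ignore q-values and other parameters; real parsing is richer.
    wanted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "*/*" in wanted:
        return available[0]          # server's preferred representation
    for media_type in wanted:
        if media_type in available:
            return media_type
    return None

# A collection resource with an Atom representation plus one alternative:
available = ["application/atom+xml", "application/xhtml+xml"]
```

With `available` as above, an Accept header of only `application/atom+xml` gets Atom back, while a client accepting only `application/rss+xml` gets the 406 path.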
"Eric J. Bowman" wrote: > > XHTML interface is provided, which takes an Atom collection feed and > displays it as one big Xform allowing individual entries to be added, > edited or removed by directly manipulating the Atom resources... > Ugh. In REST, we manipulate representations, not resources... -Eric
On Dec 21, 2009, at 8:51 AM, Jørn Wildt wrote: >> Here is the point, though. "Looking for the "Buy" relation manifests >> the assumption that it will be available from an item in a serach >> result." If the service stops providing it at this point the client >> breaks. The service cannot insert extra hops with human readable >> documentation to direct the cliet to a place where it finally finds >> the "buy" relation. > > Lets compare this to the human interaction: Amazon decides to remove > the "buy" link on an item. Why on earth would they do that? People > wouldn't be able to shop anymore because they broke the expectation > of the client. > > Okay, what Amazon really wanted was to insert a new step in the > workflow. For instance "click this checkbox if you are not a > terrorist" (now required by the US government :-)). What would > Amazon do? Probaly set up huge sign reading "we have changed our > interface, blah blah, follow this link instead". This is the human > documentation you mention. Yes. And you exactly describe how server evolution is independent from the client when the client is human driven. > > What would the corresponding computer change be? If it is not a > mandatory step (which it probably is) then it could just use content > negotiation to version the interface and everybody would be happy. > If it was mandatory then our computer has to learn a new media type: > application/shop.terroristcheck+xml. Amazon would still keep the > "buy" relation, but at the end of it there would be our new media > type. Yes. Or probably some kind of redirect based on a missing 'order parameter'. Key point being that the server can in fact evolve without breaking the communication immediately. 
> > From this I would say: clients should *not expect any specific media > type for a relation* - the "buy" relation only states "GET this to > continue buying", but at the end of that link the computer could > find any ressource: an application/shop.order+xml or application/ > shop.terroristcheck+xml or something else. My concern is with the step just before that. The client expects the buy relation to be there because otherwise you could not even code it (you cannot make the client code choose the buy relation if you do not assume that it is there in the first place). > > I guess a client should not be driven by *expectations* of what to > find where - only by *knowledge* of it's current ressource and a > *goal*. Yep. I've been down that road, too. But I found that eventually it comes down to expecting 'availability' of goals (availability of the transition that constitutes a given goal). It's because machine clients have their own state machine - they are not driven by the service. A machine client actively is coded to buy - it does not browse around and, if it suddenly finds a buy link, is triggered to execute the buy goal. A machine client will consist of an inherent flow of actions (its own state machine). For example: search, pick item, buy - the only way you can code that is by expecting, for example, buying to be available from the application state 'viewing item'. And such an expectation is formed at design time, based on some hypermedia describing the kind of service (such as application/atomsvc+xml). > You would then end up with a huge matrix of media-types and goals, > each of which telling the computer what to do next. Any unknown > media type would kill the application (or make it backtrack and try > something different). But this assumes that the client does not have its own program flow; like a GUI application that is driven by the user. 
In pure machine clients this is impossible if they pursue a certain task (as opposed to crawling and indexing, for example). You can also view the client and service as two independent state machines with points of coordination (the goals). Both state machines exist independently, and if the service-side state machine changes in a way that the client is not capable of mimicking, the communication breaks. > > Our initial goal would be "find item" and initially we would look up > a specific URL and get the representation stored there. Then our > media-type/goal matrix would tell us what to do next. It would never > depend on an expectation of what ressource type to find at the end > of any relation - it should only know that "if I have media- > type=application/shop.item+xml and goal=buy then I must GET the > 'buy' relation and then re-evaluate with the new ressource". That is a very good approach. But you are describing a client without its own program flow. Try to code that as part of some program that has its own life and just interacts with the service to 'get a job done'. The key point is: Does the client side want to buy some item? Or does it want to buy some item *SHOULD IT HAPPEN TO COME ACROSS SOME LINK THAT ENABLES IT TO BUY SOMETHING?* > > Does that make sense? > Yes, it does. Good thoughts. Jan > /Jørn -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 21, 2009, at 8:54 AM, Eric J. Bowman wrote: > Jan Algermissen wrote: >> >> So then, you'd say that it is perfectly RESTful that AtomPub >> effectively says "a GET on a collection MUST at least return >> application/atom+xml"? >> > > Yes. That allows for a resource to have more than just an Atom > representation. Reading between the lines and remembering that a > request is made up of more than just its URI and method, it also > effectively says that a GET request with an Accept header consisting > only of 'application/atom+xml' MUST return 'application/atom+xml' or > issue a 406 error. But this means that it is ok for the server to break its own promise: AtomPub says a feed will be available. And it is still ok for the server to send me a 406? Suppose you invested serious money in building that client and the spec says that there will be a feed. Suddenly the whole communication falls apart, business-level harm is done etc. because the service sends 406 instead of a feed document. Whose fault is it and who is going to pay for the damage done? Jan > > -Eric > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Thank you all for all the discussion on my questions! I highly appreciate this. I wonder which was the "first" paper on CRUD, and the first paper which brought CRUD and REST together? The dissertation of Dr. Fielding maybe? Kind regards Steffen
> A machine client will consist of an inherent flow of actions (its own > state machine). For example: search, pick item, buy - the only way you > can code that is by expecting, for example, buying to be available > from the application state 'viewing item'. Please re-consider my media-type/goal approach. The machine client should not be driven by expectations of the flow; it should not assume search+pick+buy is the one and only sequence. It should not have a set of actions - it should have a set of goals. Achieving those goals is then dependent on the returned resources. This means the server gets to decide the current state, not the client. The current state is defined by the media-type/goal combination and the client only controls one of them. Let's assume our client knows the new "terrorist check" media type. It has browsed to an item and so the current state is: mediatype=item, goal=buy. The client decides the next step by looking for the "buy" relation - but the server decides the next state by returning a media type of its own choice. The next state might be mediatype="terrorist check" or mediatype="order" plus goal="buy". But, yes, ultimately the machine client must be pre-configured with a knowledge of how to reach its current goal given any of the media-types returned. But it should not be pre-configured with any assumptions about the exact workflow - the server decides the workflow, whereas the client decides the goals. Some Amazon items may require terrorist-checks, but some may not. Some items may require an approval of a kind, some may not - it all depends on the current resource type, not an expectation of a precise workflow. This plus content negotiation makes it possible to evolve the service as long as you accept to teach your client how to handle new resource types (and that is actually RESTful). /Jørn
On Dec 21, 2009, at 10:33 AM, Jorn Wildt wrote: >> A machine client will consist of an inherent flow of actions (its own >> state machine). For example: search, pick item, buy - the only way >> you >> can code that is by expecting, for example, buying to be available >> from the application state 'viewing item'. > > Please re-consider my media-type/goal approach. The machine client > should not be driven by expectations of the flow, it should not > assume search+pick+buy is the one and only sequence. It should not > have a set of actions - it should have a set of goals. Achieving > those goals is then dependent on the returned ressources. Yes, good POV. I've thought along these lines for quite some time, but recent coding experience has just led me to believe that you cannot get around assumptions of when to do what. You say the machine client "should have a set of goals". True. But it also has to know the partial ordering of the goals and it has to know that buying comes after picking the item. I just do not see how you could code the client without (at some point in some form) making the assumptions I am talking about. Try it. Write some pseudo code and show me how you get away with not coupling the server to the client's expectation that it will (definitely) be able to execute the "buy" goal from the application state "viewing the item". The client is not "driven by expectations of the flow" but it must drive the interaction (not be driven by it). The client is an independent program (not a user agent preprocessing received representations for a human user). Another way to say it is that client and server are coupled by the partial ordering of the goals of their collaboration. And this partial ordering coupling effectively means that the client does not only discover the state machine at runtime but makes assumptions about it at design time. > This means the server gets to decide the current state, not the > client. 
The current state is defined by the media-type/goal > combination and the client only controls one of them. What do you mean by "controls one of them"? > Lets assume our client knows the new "terrorist check" media type. > It has browsed to an item and so the current state is: > mediatype=item, goal=buy. But how does it know that it makes sense to actually have the goal of buying at this point? (Assuming the buy goal's availability means assuming something about the available state transitions.) > The client decides the next step by looking for the "buy" relation - > but the server decides the next state by returning a media type of > it's own choice. The next state might be mediatype="terrorist check" > or mediatype="order" plus goal="buy". > > But, yes, ultimately the machine client must be pre-configured with > a knowledge of how to reach it's current goal given any of the media- > types returned. But it should not be pre-configured with any > assumptions about the exact workflow - the server decides the > workflow, whereas the client decides the goals. Yes, I agree completely. I just figured that it basically amounts to the same thing :-) What is the difference between 'exact workflow' and 'decide the goals'? I argue: there is no difference with regard to the induced coupling. It means knowing the state machine at design time. > > Some Amazon items may require terrorist-checks, but some may not. > Some items may require an approval of a kind, some may not - it all > depends on the current ressource type, not an expectation of a > precise workflow. This plus content negotiation makes it possible to > evolve the service as long you accept to teach your client how to > handle new ressource types (and that is actually RESTful). Do you think that it is ok for the server to break a previously established contract because the client can adapt by re-configuring/re-programming it? (Which is what I think you are saying above.) 
Jan > > /Jørn -------------------------------------- Jan Algermissen Mail: algermissen@acm.org Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Dec 21, 2009, at 10:31 AM, swschilke wrote: > Thank you all for all the discussion on my questions! I highly > appreciate this. > > I wonder which was the "first" paper on CRUD and the first paper > which brought CRUD and REST togehter? The dissertation of Dr. > Fielding maybe? > Dunno about CRUD, but you could look at the initial papers around the relational model/entity relationship model. http://portal.acm.org/citation.cfm?id=358007 http://portal.acm.org/citation.cfm?id=320440 Maybe you find something in there or in the citations. As for REST/CRUD: REST has no relationship to CRUD besides that people have equated HTTP verbs with relational operations. For POST this does not cover the whole story, because POST does not necessarily have the meaning of CREATE; it also means: "take this data and process it according to your nature". Jan > Kind regards > > Steffen -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
> > Some Amazon items may require terrorist-checks, but some may not. > > Some items may require an approval of a kind, some may not - it all > > depends on the current ressource type, not an expectation of a > > precise workflow. This plus content negotiation makes it possible to > > evolve the service as long you accept to teach your client how to > > handle new ressource types (and that is actually RESTful). > > Do you think that it is ok for the server to break a previously > established contract because the client can adapt by re-configuring/re- > programing it? (Which is what I think you are saying above). That was not what I was trying to say :-) The server should certainly not break expectations - but the client must state its expectations explicitly using content negotiation: if it does not include application/order.terroristcheck+xml in its Accept headers then the server is not allowed to return such resources. Hmmm, makes me think that the Accept headers for a big website can grow to be quite large! Unless we find a smart naming scheme for the media types. You certainly have to re-program your client if you want it to recognize new and improved features and resources. But if a service decides to add a new mandatory resource type, well, then it has to break existing clients by explicitly returning an error code saying "I cannot serve you a resource type that you support". But this is certainly not something a webshop would like to do, since it means losing a bunch of customers - exactly like the human version of the shop would if it made up new features that humans could not figure out. I would even say this happens all the time on the human web, it's just called "bad usability". By the way: I have never programmed anything like this, so it's all based on assumptions (which we all know are the mother of all fu**ups). But it's an interesting discussion. /Jørn
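Jørn's evolution rule -- a mandatory new representation type may only be served to clients that advertise it, otherwise the server must fail explicitly -- might be checked like this (the `can_serve` helper and the media-type names are hypothetical illustrations from this thread, not any real API):

```python
# Hypothetical server-side check for Jørn's rule: if the newly mandatory
# representation type is not among the types the client advertises in its
# Accept header, refuse explicitly rather than spring an unknown type on it.

def can_serve(accept_header, mandatory_type):
    """True if the client's Accept header includes the mandatory type."""
    # Ignore parameters like q-values when comparing type names.
    advertised = {part.split(";")[0].strip() for part in accept_header.split(",")}
    return mandatory_type in advertised
```

A client that has been reprogrammed to know the new type passes the check; an old client gets the explicit "I cannot serve you" error instead of an unintelligible response.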
I think it's a mistake to try to think of REST in terms of CRUD, as that means trying to match resources to relational databases (and the same goes for trying to match resources to files in a file system). While both can be applications of REST, the concept of Resource is much more vast than that. When trying to design a RESTful system, one should look at the Resource concept at the highest abstraction level possible, going down on that abstraction level and stopping as soon as you can (at an abstraction level high enough to be independent of the infrastructure - like databases or other implementation levels - but low enough to be operational or manipulable). Also, if you match REST with CRUD, the next logical step is to use POST to invoke Stored Procedures, and that will sound too much like RPC... 2009/12/21 Jan Algermissen <algermissen1971@...> > > > > On Dec 21, 2009, at 10:31 AM, swschilke wrote: > > > Thank you all for all the discussion on my questions! I highly > > appreciate this. > > > > I wonder which was the "first" paper on CRUD and the first paper > > which brought CRUD and REST togehter? The dissertation of Dr. > > Fielding maybe? > > > > Dunno about CRUD, but you could look at the initial papers around the > relational model/entity relationship model. > > http://portal.acm.org/citation.cfm?id=358007 > http://portal.acm.org/citation.cfm?id=320440 > > Maybe you find something in there or in the citations. > > As for REST/CRUD. REST has no relationship to CRUD besides that people > have equated HTTP verbs with relational operations. For POST this does > not cover the whole story because POST not necessarily has the meaning > of CREATE it also means: "take this data and process it according to > your nature". > > Jan > > > Kind regards > > > > Steffen > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... 
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
On Dec 21, 2009, at 11:15 AM, Jorn Wildt wrote:

> > > Some Amazon items may require terrorist-checks, but some may not. Some items may require an approval of a kind, some may not - it all depends on the current resource type, not an expectation of a precise workflow. This plus content negotiation makes it possible to evolve the service, as long as you accept to teach your client how to handle new resource types (and that is actually RESTful).
> >
> > Do you think that it is ok for the server to break a previously established contract because the client can adapt by re-configuring/re-programming it? (Which is what I think you are saying above).
>
> That was not what I was trying to say :-) The server should certainly not break expectations - but the client must state its expectations explicitly using content negotiation: if it does not include application/order.terroristcheck+xml in its Accept headers, then the server is not allowed to return such resources.

Ah - I am talking about the expectation that is manifested by the code that populates the Accept header. No matter how many media types you put in there, the set of types is based on the assumption that one of these will be available. This expectation is based on design-time knowledge. (Which effectively is design-time knowledge about the state machine.) So, what I tried to say is: the server must not break that design-time contract. (And I try to argue that this design-time contract contradicts the hypermedia constraint.)

> Hmm, makes me think that the Accept headers for a big website can grow to be quite large! Unless we find a smart naming scheme for the media types.
>
> You certainly have to re-program your client if you want it to recognize new and improved features and resources.
> But if a service decides to add a new mandatory resource type, well, then it has to break existing clients by explicitly returning an error code saying "I cannot serve any resource type that you support". But this is certainly not something a webshop would like to do, since it means losing a bunch of customers - exactly like the human version of the shop would if it made up new features that humans could not figure out. I would even say this happens all the time on the human web; it's just called "bad usability".

Yep! But "it is common sense for the service not to evolve incompatibly" just does not work inside the enterprise. There, people need to be specific about the kind of contract that is at work (even if the contract is deliberately very loose!). Saying that the server will not break the clients because it "does not make sense to do so" will not cause the CIO to assign us that budget :-)

Again: I am not suggesting a strict contract is needed; I am trying to make explicit the amount of coupling that is really going on, so people can make informed decisions.

> By the way: I have never programmed anything like this, so it's all based on assumptions (which we all know are the mother of all fu**ups). But it's an interesting discussion.

Your mind works very well. I have prototyped such clients, and all you say is backed by my experience. And: glad this thread is of some use after all :-)

Jan
Dear Everybody, you are such a valuable resource for knowledge about REST that I apologize for bothering you with my questions: What would you recommend to visualize REST (WADL) architectures? I've read some papers proposing extensions, e.g. to UML, but I am open to recommendations (worst case I use PowerPoint). Kind regards and thank you very much sws
> Ah - I am talking about the expectation that is manifested by the code that populates the Accept header. No matter how many media types you put in there, the set of types is based on the assumption that one of these will be available.

Yes.

> Again: I am not suggesting a strict contract is needed; I am trying to make explicit the amount of coupling that is really going on, so people can make informed decisions.

Maybe what is needed is an explicit formalization of what coupling you can expect from a RESTful system? Maybe it's there already somewhere in Roy Fielding's thesis? I don't know, I haven't read it. Maybe there is room for a new thesis :-)

> But "it is common sense for the service not to evolve incompatibly" just does not work inside the enterprise. There, people need to be specific about the kind of contract that is at work (even if the contract is deliberately very loose!)

I have this feeling that maybe enterprise integration is not RESTful in itself? I am not so sure that a bank wants to use an "independently evolving" service - it kind of contradicts everything in the banking sector. You want it to be 100% reliable. You want your system to be 100% stable - new features are considered bad unless proved otherwise. In such a scenario I really can't see anyone being happy with an evolving service. If "evolving" is mandatory for a RESTful system, well, then enterprise integration cannot be RESTful.

/Jørn
On Dec 21, 2009, at 11:56 AM, swschilke wrote:

> Dear Everybody, you are such a valuable resource for knowledge about REST that I apologize for bothering you with my questions:

No need to apologize - that is the purpose of this list.

> What would you recommend to visualize REST (WADL) architectures? I've read some papers proposing extensions, e.g. to UML, but I am open to recommendations (worst case I use PowerPoint).

What do you intend to visualize? Design artifacts? Runtime examples? Or server-side implementation aspects?

As for design artifacts: since with REST all design is done by specifying hypermedia, there is not really much you can visualize regarding design time. For runtime examples I use UML activity diagrams with swim lanes, placing the messages as object nodes on the lanes (see the UBL docs[1] as an example). For server-side implementation, class diagrams are a good choice, making each resource a class.

HTH,
Jan

[1] http://docs.oasis-open.org/ubl/cs-UBL-2.0/art/UBL-2.0-OrderingProcess.jpg
On Dec 21, 2009, at 11:54 AM, Jorn Wildt wrote:

> Maybe what is needed is an explicit formalization of what coupling you can expect from a RESTful system? Maybe it's there already somewhere in Roy Fielding's thesis? I don't know, I haven't read it. Maybe there is room for a new thesis :-)

Read it! Especially the general part on software architecture (the first 'half'), which is unrelated to REST but lays the foundation, and is well worth a read.

> I have this feeling that maybe enterprise integration is not RESTful in itself?

No, I would not say that. I'd rather question to what extent the hypermedia constraint can be adhered to in M2M interactions. The question of Web vs. Enterprise is unrelated.

> I am not so sure that a bank wants to use an "independently evolving" service - it kind of contradicts everything in the banking sector. You want it to be 100% reliable. You want your system to be 100% stable - new features are considered bad unless proved otherwise. In such a scenario I really can't see anyone being happy with an evolving service.

Software systems are constantly evolving, because the economic surroundings of the enterprise are in constant change. IT has to adapt to support new business functions. REST's benefit here is (besides simplicity, visibility etc.)
that you need not bring (possibly geographically distributed) development teams together to discuss each and every API change.

So, no - REST is for the enterprise... because, IMO, the problem space of today's increasingly networked enterprises is exactly the same problem space as that of the Web. The difference is only that inside the enterprise people need to plan differently. E.g. it is not enough to say that "an ordering service won't change incompatibly because that would be detrimental to its business model". In the enterprise people want to develop clients and services in parallel, which rules out client design by inspecting the runtime behavior of a service.

> If "evolving" is mandatory for a RESTful system, well, then enterprise integration cannot be RESTful.

No, I disagree with that (vehemently :-).

Jan
> In the enterprise people want to develop clients and services in parallel, which rules out client design by inspecting the runtime behavior of a service.

I remember this being mentioned by you earlier on. Shouldn't that be easily solved by setting up a mock of the server's services? It would be the same with a SOAP web service - you couldn't build the client before the server unless you could mock the server somehow.

/Jørn
On Dec 21, 2009, at 12:34 PM, Jorn Wildt wrote:

> I remember this being mentioned by you earlier on. Shouldn't that be easily solved by setting up a mock of the server's services? It would be the same with a SOAP web service - you couldn't build the client before the server unless you could mock the server somehow.

Hmm, no. You build clients based on the definition of the kind of service. With SOAP that would be WSDL, and with e.g. AtomPub it is RFC 5023. Building clients for individual services makes no sense.

Jan
> Hmm, no. You build clients based on the definition of the kind of service. With SOAP that would be WSDL, and with e.g. AtomPub it is RFC 5023. Building clients for individual services makes no sense.

Well, then you build clients based on media type descriptions? That would be just as meaningful as building something on top of RFC 5023. AtomPub/MediaType - what's the difference?

/Jørn
On Dec 21, 2009, at 1:07 PM, Jorn Wildt wrote:

> Well, then you build clients based on media type descriptions? That would be just as meaningful as building something on top of RFC 5023. AtomPub/MediaType - what's the difference?

There is no difference between RFC 5023 and application/atomsvc+xml, or more precisely: RFC 5023 specifies a service type identified by application/atomsvc+xml. ... if you come across a service that provides an application/atomsvc+xml service document, it is asserting that it implements RFC 5023.

(Was that your question?)

Jan
you build clients based on the media type.

if you want to build a client that does not require humans as part of the decision-making process that advances the application, then you need to replace the human w/ additional coding that not only properly parses the media-type, but also "understands" the media-type enough to advance the application state in order to accomplish a goal.

In the few cases I've done this, that means coding a client state-engine to seek a pre-determined goal by searching for and activating identified links (along w/ supplying identified data elements) returned in the media-type that is the server response. The client is coded to repeatedly do this until the goal is reached or the client determines the goal will never be reached.

mca
http://amundsen.com/blog/

On Mon, Dec 21, 2009 at 06:46, Jan Algermissen <algermissen1971@...> wrote:
> Hmm, no. You build clients based on the definition of the kind of service. With SOAP that would be WSDL, and with e.g. AtomPub it is RFC 5023. Building clients for individual services makes no sense.
>
> Jan
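Mike's goal-seeking state-engine can be sketched roughly like this. Everything below (the representations, the relations, the URIs) is invented for illustration; a real client would issue HTTP requests and parse the agreed media type rather than read a dictionary:

```python
# Fake "server": URI -> (application state name, {link relation: href}).
REPRESENTATIONS = {
    "/shop":    ("home",    {"search": "/results"}),
    "/results": ("results", {"choose": "/item/1"}),
    "/item/1":  ("item",    {"buy": "/order/1"}),
    "/order/1": ("order",   {}),
}

def seek(start, goal_state, max_steps=10):
    """Repeatedly activate identified links until a representation of the
    pre-determined goal state is reached, or give up when the client
    determines the goal will never be reached."""
    uri = start
    for _ in range(max_steps):
        state, links = REPRESENTATIONS[uri]
        if state == goal_state:
            return uri
        if not links:
            return None  # dead end: goal unreachable from here
        # Activate the first identified link; a smarter engine would
        # rank relations against its goal.
        uri = next(iter(links.values()))
    return None
```

The coding-time assumption Jan points at is visible here: the engine only makes sense because the developer assumed a `goal_state` of this kind would be reachable at all.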
I just had the idea of another approach to say what I am trying to say: when you implement a client for making requests to search engines described by OpenSearchDescription documents (http://www.opensearch.org) and want your client to do something with the result (e.g. extract the titles of each hit), what media types do you expect (at client implementation time) the search result to possibly be in?

Jan

On Dec 21, 2009, at 1:07 PM, Jorn Wildt wrote:

> Well, then you build clients based on media type descriptions? That would be just as meaningful as building something on top of RFC 5023. AtomPub/MediaType - what's the difference?
>
> /Jørn
Oh, let's backtrack a bit. You said earlier on:

> In the enterprise people want to develop clients and services in parallel, which rules out client design by inspecting the runtime behavior of a service.

Then I said: you need not inspect it at runtime, you can have a mock. To this you answered: no, you build clients on specs.

What I was trying to say was: if you build clients on specs, and RFC 5023 (application/atomsvc+xml) is a spec, then what is keeping you from building any kind of REST client on similar specs for other media types? If both server and client agree on the media type spec, then both can be built individually and simultaneously.

/Jørn
On Dec 21, 2009, at 1:17 PM, mike amundsen wrote:

> you build clients based on the media type.
>
> if you want to build a client that does not require humans as part of the decision-making process that advances the application, then you need to replace the human w/ additional coding that not only properly parses the media-type, but also "understands" the media-type enough to advance the application state in order to accomplish a goal.

Yes! And from somewhere the client developer (at coding time!) gets the idea that the goal to be accomplished is somehow available.

That assumption is a contract! And what I am trying to say is that that contract should be made explicit rather than "hand-waved away".

> In the few cases I've done this, that means coding a client state-engine to seek a pre-determined goal by searching for and activating identified links (along w/ supplying identified data elements) returned in the media-type that is the server response. The client is coded to repeatedly do this until the goal is reached or the client determines the goal will never be reached.

Yes. And still, there is the general knowledge that it makes sense to code this 'looking for that goal' in the first place. This is the assumption I am talking about. This is the contract (for example established by RFC 5023 for AtomPub servers).

But were one to make that contract very visible in a media type spec, everyone would shout out loud: "Nah, you must not do that, because you ought to discover that information at runtime".

Jan
> But were one to make that contract very visible in a media type spec, everyone would shout out loud: "Nah, you must not do that, because you ought to discover that information at runtime".

Well, how do they define "discover that"? Let's assume we have an understanding of link relations in documents, just like the "buy" relation mentioned a couple of times now. You could argue that:

1) You *discover* URLs by looking for relations. There is no hard-coded understanding of URL formats. Then assume you accept that workflows differ, but you need to know all the resource/media types in the system (like we have talked about already).

2) You *discover* the next state by looking at the returned media type.

This is certainly a lot better than a) assuming a fixed URL format and building the URLs yourself, or b) assuming you always have an "order" at the end of a "buy" link.

/Jørn
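Jørn's option 1) - discovering the URL by relation rather than by format - might look like this in code. The document shape and the "buy" relation are the hypothetical ones from this thread, not any standard format:

```python
import xml.etree.ElementTree as ET

# A hypothetical item representation; only the link relations are contractual,
# never the URL structure.
doc = ET.fromstring("""
<item>
  <name>Widget</name>
  <link rel="buy" href="http://shop.example/orders"/>
  <link rel="reviews" href="http://shop.example/item/1/reviews"/>
</item>
""")

def href_for(rel, root):
    """Return the href of the first link with the given relation, or None.
    The client hard-codes the relation name, not the URL format."""
    for link in root.findall("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None
```

The client that calls `href_for("buy", doc)` keeps working even if the server reshuffles its URI space, because only the relation name was agreed on in the media type spec.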
Jan:

<snip>
This is the assumption I am talking about. This is the contract (for example established by RFC 5023 for AtomPub servers).
</snip>

While it may be the case that the authors of RFC 5023 want to offer you a contract to guarantee your goal-seeking client, the type of servers I write will not make that contract w/ a client. In most all my cases, the server cannot know what goal some client wants to achieve. The server can, however, commit to one or more stable state transitions.

It's possible you and I are not clear on what "looking for that goal" means. If you mean a _single_ state transition (create a resource on the server), then yes, I think it is fair to say that the server should provide a guarantee to the client. However, if you mean a goal that involves _multiple_ complete state transitions (I am ignoring transient states such as a set of redirects before reaching stable state), then I think it is a mistake to expect "a contract" from servers.

For example, you might implement a client that uses multiple servers to reach a single goal: find all web pages younger than 24 hours that include the word hypermedia (using an opensearch server), determine the publishing location (using a geo-location server), and translate the resulting report into several languages (using a translation server). FWIW, I would not change my position if each of these three tasks could be accomplished using the same server.

mca
http://amundsen.com/blog/

On Mon, Dec 21, 2009 at 07:30, Jan Algermissen <algermissen1971@...> wrote:
> Yes! And from somewhere the client developer (at coding time!) gets the idea that the goal to be accomplished is somehow available. That assumption is a contract! And what I am trying to say is that that contract should be made explicit rather than "hand-waved away".
>
> Jan
What you're saying is something like this?

- A client and a server are coded for a workflow Search - Choose - Buy.
- The server chooses to change that to Search - Choose - Confirm - Buy.

So, for this to work on the client, the agreed media-type has to define a priori those 4 relations, so the client can discover them? But if that's the case, what's the use of generic media-types like application/xml? Everything will have to be application/vnd.order+xml?

2009/12/21 Jorn Wildt <jw@...>
> Well, how do they define "discover that"? Let's assume we have an understanding of link relations in documents, just like the "buy" relation mentioned a couple of times now. You could argue that: 1) you *discover* URLs by looking for relations, and 2) you *discover* the next state by looking at the returned media type.
>
> /Jørn
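If the media type really does define all four relations a priori, a relation-driven client survives the inserted Confirm step without re-coding, because it follows whichever known relation the current document offers instead of a hard-coded step sequence. A sketch (the documents, URIs, and relation names are invented for illustration):

```python
# Fixed by the (hypothetical) media type spec, known at client coding time.
KNOWN_RELS = ("search", "choose", "confirm", "buy")

OLD_FLOW = {  # URI -> {relation: next URI}
    "/":     {"search": "/hits"},
    "/hits": {"choose": "/item"},
    "/item": {"buy": "/done"},
    "/done": {},
}
NEW_FLOW = {  # the server inserted a "confirm" step
    "/":        {"search": "/hits"},
    "/hits":    {"choose": "/item"},
    "/item":    {"confirm": "/confirm"},
    "/confirm": {"buy": "/done"},
    "/done":    {},
}

def run(flow):
    """Walk the workflow by relation, not by a hard-coded step sequence."""
    uri, path = "/", []
    while flow[uri]:
        rel = next(r for r in KNOWN_RELS if r in flow[uri])
        path.append(rel)
        uri = flow[uri][rel]
    return path
```

The same client code completes both workflows; what would break it is a relation *outside* `KNOWN_RELS`, which is exactly the a-priori contract António asks about.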
I was hasty in my last post. My final point:

I assert that a server cannot ensure anything other than what is provided by a media-type. It is possible that a media-type definition would codify a set of state transitions in order to complete a goal. I've not seen an example of this in a media-type definition (anyone feel free to point me in the right direction) and I suspect it would be a rather difficult media-type to implement and support.

mca
http://amundsen.com/blog/

On Mon, Dec 21, 2009 at 07:50, mike amundsen <mamund@...> wrote:
> While it may be the case that the authors of RFC 5023 want to offer you a contract to guarantee your goal-seeking client, the type of servers I write will not make that contract w/ a client. In most all my cases, the server cannot know what goal some client wants to achieve. The server can, however, commit to one or more stable state transitions.
>
> mca
--- In rest-discuss@yahoogroups.com, António Mota <amsmota@...> wrote:

> What you're saying is something like this?
> - A client and a server are coded for a workflow Search - Choose - Buy.
> - The server chooses to change that to Search - Choose - Confirm - Buy.
> So, for this to work on the client, the agreed media-type has to define a priori those 4 relations, so the client can discover them?

Yes.

> But if that's the case, what's the use of generic media-types like application/xml?

I don't know. I am not the one arguing for using a generic format :-)

/Jørn
On Dec 21, 2009, at 1:25 PM, Jorn Wildt wrote: > Oh, let's backtrack a bit. You said earlier on: > >> In the enterprise people want to develop clients and services in >> parallel, which rules out client design by inspecting the runtime >> behavior of a service. > > Then I said: you need not inspect at runtime, you can have a mock. To > this you answered: no, you build clients on specs. > > What I was trying to say was: if you build clients on specs and RFC > 5023 (application/atomsvc+xml) is a spec, then what is keeping you > from building any kind of REST client on similar specs for other > media types? If both server and client agree on the media type spec > then both can be built individually and simultaneously. No, that is all fine and I agree. I am questioning the RESTfulness of specs that allow a client to make assumptions about the hypermedia it will receive at some point in the interaction. AtomPub for example enables the client *implementor* to assume that a GET on a collection will return an Atom feed document. This is equivalent to making an assumption about the application state it will be in after the GET to the collection. And I am trying to say that M2M clients (besides passive, server-driven crawlers) can only be built when such contracts are in place. Jan > > /Jørn > > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
<snip> What do you mean by "stable state transition"? </snip> I mean transitions that result in a "stable" application state. For example: Here is a stable state transition *** request GET /data/search?phrase=hypermedia *** response HTTP/1.1 200 OK Here is a state transition that includes a transient transition (the 201 Created w/ Location header) that leads to a stable state (200 OK). *** request POST /orders/ .... *** response HTTP/1.1 201 Created Location: /orders/123 *** request GET /orders/123 *** response HTTP/1.1 200 OK There are other cases where the server may instruct the client to continue to another state in order to complete a single "operation." mca http://amundsen.com/blog/ On Mon, Dec 21, 2009 at 08:39, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 21, 2009, at 1:50 PM, mike amundsen wrote: > >> Jan: >> >> <snip> >> This is the assumption I am talking about. This is the contract (for >> example established by RFC 5023 for AtomPub servers). >> </snip> >> >> While it may be the case that the authors of RFC 5023 want to offer >> you a contract to guarantee your goal-seeking client, the type of >> servers I write will not make that contract w/ a client. In almost all >> my cases, the server cannot know what goal some client wants to >> achieve. The server can, however, commit to one or more stable state >> transitions. > > What do you mean by "stable state transition"? > > Jan > > > > > > > >> >> It's possible you and I are not clear on what "looking for that goal" >> means. If you mean a _single_ state transition (create a resource on >> the server), then yes, I think it is fair to say that the server >> should provide a guarantee to the client. However, if you mean a goal >> that involves _multiple_ complete state transitions (I am ignoring >> transient states such as a set of redirects before reaching stable >> state), then I think it is a mistake to expect "a contract" from >> servers. 
>> >> For example, you might implement a client that uses multiple servers >> to reach a single goal (find all web pages younger than 24 hours that >> include the word hypermedia (using an OpenSearch server) and determine >> the publishing location (using a geo-location server) and translate >> the resulting report into several languages (using a translation >> server). FWIW, I would not change my position if each of these three >> tasks could be accomplished using the same server. >> >> mca >> http://amundsen.com/blog/ >> >> >> >> >> On Mon, Dec 21, 2009 at 07:30, Jan Algermissen <algermissen1971@...> >> wrote: >>> >>> On Dec 21, 2009, at 1:17 PM, mike amundsen wrote: >>> >>>> you build clients based on the media type. >>>> >>>> if you want to build a client that does not require humans as part of >>>> the decision-making process that advances the application then you >>>> need to replace the human w/ additional coding that not only properly >>>> parses the media-type, but also "understands" the media-type enough to >>>> advance the application state in order to accomplish a goal. >>> >>> Yes! And from somewhere the client developer (at coding time!) gets the >>> idea >>> that the goal to be accomplished is somehow available. >>> >>> That assumption is a contract! And what I am trying to say is that that >>> contract should be made explicit rather than "hand waved away". >>> >>>> >>>> In the few cases I've done this, that means coding a client >>>> state-engine to seek a pre-determined goal by searching for and >>>> activating identified links (along w/ supplying identified data >>>> elements) returned in the media-type that is the server response. The >>>> client is coded to repeatedly do this until the goal is reached or the >>>> client determines the goal will never be reached. >>> >>> Yes. And still, there is the general knowledge that it makes sense to >>> code >>> this 'looking for that goal' in the first place. 
This is the assumption I >>> am >>> talking about. This is the contract (for example established by RFC 5023 >>> for >>> AtomPub servers). >>> >>> But were one to make that contract very visible in a media type spec, >>> everyone >>> would shout out loud: "Nah, you must not do that because you ought to >>> discover that information at runtime!" >>> >>> Jan >>> >>>> >>>> mca >>>> http://amundsen.com/blog/ >>>> >>>> >>>> >>>> >>>> On Mon, Dec 21, 2009 at 06:46, Jan Algermissen <algermissen1971@...> >>>> wrote: >>>>> >>>>> On Dec 21, 2009, at 12:34 PM, Jorn Wildt wrote: >>>>> >>>>>>> In the enterprise people want to develop clients and >>>>>>> services in parallel, which rules out client design by inspecting the >>>>>>> runtime behavior of a service. >>>>>> >>>>>> I remember this being mentioned by you earlier on. Should be easily >>>>>> solved by setting up a mock of the server services? It would be the >>>>>> same with a SOAP webservice - you couldn't build the client before >>>>>> the server unless you could mock the server somehow. >>>>>> >>>>> >>>>> Hmm, no. You build clients based on the definition of the kind of >>>>> service. With SOAP that would be WSDL and with e.g. AtomPub it is the >>>>> RFC 5023. Building clients for individual services makes no sense. >>>>> >>>>> Jan >>>>> >>>>> >>>>>> /Jørn >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------ >>>>>> >>>>>> Yahoo! Groups Links >>>>>> >>>>>> >>>>>> >>>>> >>>>> -------------------------------------- >>>>> Jan Algermissen >>>>> >>>>> Mail: algermissen@acm.org >>>>> Blog: http://algermissen.blogspot.com/ >>>>> Home: http://www.jalgermissen.com >>>>> -------------------------------------- >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------------ >>>>> >>>>> Yahoo! Groups Links >>>>> >>>>> >>>>> >>>>> >>> >>> -------------------------------------- >>> Jan Algermissen >>> >>> Mail: algermissen@... 
>>> Blog: http://algermissen.blogspot.com/ >>> Home: http://www.jalgermissen.com >>> -------------------------------------- >>> >>> >>> >>> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@acm.org > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > >
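Mike's "stable state transition" traces in this exchange can be sketched in code. The following is a minimal illustration only, not anything from the thread: the server is mocked as a dict of canned responses, and the URIs and payloads (`/orders/`, `<order id='123'/>`) are made up. It shows a client treating a 201 Created with a Location header as a transient transition and chasing it until a stable 200 OK is reached.

```python
# Hypothetical sketch: a 201 Created w/ Location is a transient transition;
# the client only reaches a stable application state after GETting the new
# resource. The "server" is a mock dict of canned responses, not a real
# HTTP stack.

MOCK_RESPONSES = {
    ("POST", "/orders/"): (201, {"Location": "/orders/123"}, None),
    ("GET", "/orders/123"): (200, {}, "<order id='123'/>"),
    ("GET", "/data/search?phrase=hypermedia"): (200, {}, "<results/>"),
}

def request(method, uri):
    """Stand-in for an HTTP client call against the mock server."""
    return MOCK_RESPONSES[(method, uri)]

def follow_to_stable_state(method, uri):
    """Issue a request and chase transient transitions (201 or a redirect
    carrying a Location header) until a stable state is reached."""
    status, headers, body = request(method, uri)
    while status in (201, 301, 302, 303, 307) and "Location" in headers:
        status, headers, body = request("GET", headers["Location"])
    return status, body

status, body = follow_to_stable_state("POST", "/orders/")
print(status, body)  # the transient 201 resolves to a stable 200
```

A plain search, by contrast, is already a stable transition: `follow_to_stable_state("GET", "/data/search?phrase=hypermedia")` returns the 200 immediately, with no chasing.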
On Dec 21, 2009, at 1:50 PM, mike amundsen wrote: > Jan: > > <snip> > This is the assumption I am talking about. This is the contract (for > example established by RFC 5023 for AtomPub servers). > </snip> > > While it may be the case that the authors of RFC 5023 want to offer > you a contract to guarantee your goal-seeking client, the type of > servers I write will not make that contract w/ a client. In almost all > my cases, the server cannot know what goal some client wants to > achieve. The server can, however, commit to one or more stable state > transitions. What do you mean by "stable state transition"? Jan > > It's possible you and I are not clear on what "looking for that goal" > means. If you mean a _single_ state transition (create a resource on > the server), then yes, I think it is fair to say that the server > should provide a guarantee to the client. However, if you mean a goal > that involves _multiple_ complete state transitions (I am ignoring > transient states such as a set of redirects before reaching stable > state), then I think it is a mistake to expect "a contract" from > servers. > > For example, you might implement a client that uses multiple servers > to reach a single goal (find all web pages younger than 24 hours that > include the word hypermedia (using an OpenSearch server) and determine > the publishing location (using a geo-location server) and translate > the resulting report into several languages (using a translation > server). FWIW, I would not change my position if each of these three > tasks could be accomplished using the same server. > > mca > http://amundsen.com/blog/ > > > > > On Mon, Dec 21, 2009 at 07:30, Jan Algermissen <algermissen1971@mac.com > > wrote: >> >> On Dec 21, 2009, at 1:17 PM, mike amundsen wrote: >> >>> you build clients based on the media type. 
>>> >>> if you want to build a client that does not require humans as part >>> of >>> the decision-making process that advances the application then you >>> need to replace the human w/ additional coding that not only properly >>> parses the media-type, but also "understands" the media-type >>> enough to >>> advance the application state in order to accomplish a goal. >> >> Yes! And from somewhere the client developer (at coding time!) gets >> the idea >> that the goal to be accomplished is somehow available. >> >> That assumption is a contract! And what I am trying to say is that >> that >> contract should be made explicit rather than "hand waved away". >> >>> >>> In the few cases I've done this, that means coding a client >>> state-engine to seek a pre-determined goal by searching for and >>> activating identified links (along w/ supplying identified data >>> elements) returned in the media-type that is the server response. >>> The >>> client is coded to repeatedly do this until the goal is reached or >>> the >>> client determines the goal will never be reached. >> >> Yes. And still, there is the general knowledge that it makes sense >> to code >> this 'looking for that goal' in the first place. This is the >> assumption I am >> talking about. This is the contract (for example established by RFC >> 5023 for >> AtomPub servers). >> >> But were one to make that contract very visible in a media type spec, >> everyone >> would shout out loud: "Nah, you must not do that because you ought to >> discover that information at runtime!" >> >> Jan >> >>> >>> mca >>> http://amundsen.com/blog/ >>> >>> >>> >>> >>> On Mon, Dec 21, 2009 at 06:46, Jan Algermissen <algermissen1971@... >>> > >>> wrote: >>>> >>>> On Dec 21, 2009, at 12:34 PM, Jorn Wildt wrote: >>>> >>>>>> In the enterprise people want to develop clients and >>>>>> services in parallel, which rules out client design by >>>>>> inspecting the >>>>>> runtime behavior of a service. 
>>>>> >>>>> I remember this being mentioned by you earlier on. Should be >>>>> easily >>>>> solved by setting up a mock of the server services? It would be >>>>> the >>>>> same with a SOAP webservice - you couldn't build the client before >>>>> the server unless you could mock the server somehow. >>>>> >>>> >>>> Hmm, no. You build clients based on the definition of the kind of >>>> service. With SOAP that would be WSDL and with e.g. AtomPub it is >>>> the >>>> RFC 5023. Building clients for individual services makes no sense. >>>> >>>> Jan >>>> >>>> >>>>> /Jørn >>>>> >>>>> >>>>> >>>>> ------------------------------------ >>>>> >>>>> Yahoo! Groups Links >>>>> >>>>> >>>>> >>>> >>>> -------------------------------------- >>>> Jan Algermissen >>>> >>>> Mail: algermissen@... >>>> Blog: http://algermissen.blogspot.com/ >>>> Home: http://www.jalgermissen.com >>>> -------------------------------------- >>>> >>>> >>>> >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>>> >> >> -------------------------------------- >> Jan Algermissen >> >> Mail: algermissen@... >> Blog: http://algermissen.blogspot.com/ >> Home: http://www.jalgermissen.com >> -------------------------------------- >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
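The goal-seeking "client state-engine" Mike describes — repeatedly searching a representation for identified links and activating them until the goal is reached or deemed unreachable — can be sketched as a loop. Everything below is hypothetical: representations are mocked as dicts mapping link relations to URIs, and the relation names (search, choose, buy) come from the workflow example earlier in the thread, not from any real media type.

```python
# Hypothetical sketch of a goal-seeking hypermedia client: fetch a
# representation, look for a link with the goal relation, otherwise
# follow some link to advance application state, and give up when the
# goal appears unreachable. Representations are mocked as dicts.

REPRESENTATIONS = {
    "/start":   {"search": "/results"},
    "/results": {"choose": "/item/7"},
    "/item/7":  {"buy": "/receipt"},
    "/receipt": {},  # dead end: nothing left to follow
}

def seek_goal(start_uri, goal_rel, max_steps=10):
    """Follow links until a link with relation goal_rel is activated,
    or return None when the goal is deemed unreachable."""
    uri = start_uri
    for _ in range(max_steps):
        links = REPRESENTATIONS[uri]      # "GET" the representation
        if goal_rel in links:
            return links[goal_rel]        # goal reached
        if not links:
            return None                   # no links left: give up
        uri = next(iter(links.values()))  # advance application state
    return None                           # step budget exhausted

print(seek_goal("/start", "buy"))  # -> /receipt
```

Note that the decision to code this loop at all presupposes, at coding time, that a "buy" relation will eventually appear — which is exactly the implicit contract Jan is arguing should be made explicit.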
On Mon, Dec 21, 2009 at 12:46 AM, Eric J. Bowman <eric@...> wrote: > berend@... wrote: >> >> Eric> In brief: Define resources in terms of standard media types >> Eric> and link relations, saving URI allocation and method >> Eric> selection for the implementation phase. >> >> And I think this is way, way too abstract if you're new. It's the >> standard media types and link relations that take a while to sink in. >> > > Yeah, because we never take the architectural-style approach around > here, and say things like: > > "If your only representation of that resource is text/html, then you > can't DELETE that resource. You'd have to change to, or use conneg to > add a representation in, application/xhtml+xml, because that media type > supports DELETE (via Xforms) while text/html only supports GET and > POST. The same goes for application/rss+xml, which only supports GET > -- you'd have to change to, or use conneg to add, application/atom+xml > because that media type supports DELETE (via Atom Protocol). > > If you go rogue because HTTP allows you to DELETE your negotiated text/ > html + application/rss+xml resource anyway, then you're violating the > uniform interface constraint. At risk of showing my ignorance, I'm not seeing how this is violating any constraint. As I understand it, the uniform interface is defined as the 'contract' between components in the system - independent of resources or their representation. The allowable subset of methods on a given resource isn't strictly defined by the representation. In HTTP, OPTIONS would be the way to determine that, right? --tim
On Sun, Dec 20, 2009 at 11:26 PM, Eric J. Bowman <eric@...> wrote: > The problem that's been preoccupying my thoughts during the time I > spend experimenting with REST, is how to teach it. I don't think > anyone disputes the fact that REST is hard to learn. But why is that? > I've convinced myself it's not because the students are morons, but > that we, collectively as a community, have failed to teach it > properly. The best evidence of that, is the recent thread asking for > examples of good REST systems: It's infinitely easier to find REST > implementations that aren't, than it is to find good examples (I've > seen REST implemented effectively on Intranets where the client is a > known quantity) that we can point to. > > We don't teach it properly, because we didn't learn it properly > ourselves. Besides Roy, who here at any level of REST ability has a > background in software architecture? Personally, I think it took me so > many years to become comfortable with REST because it was my first > experience with software development guided by a defined architectural > style. I basically had to teach myself software architecture, but not > until well after I started fancying myself a REST developer. > > What I'm saying, is that REST must be taught in terms of applied > architecture, instead of by example, before there will ever be enough > good examples to point to. You can't learn XSLT by reading O'Reilly's > "XSLT Cookbook" of examples, yet we try teaching REST by hauling out > the good ol' shopping cart every time. This has obviously failed. > > I don't think it's necessary for a REST student to understand anything > about software architecture (except maybe a few terms), only to follow > an approach grounded in software architecture. The wonderful new > textbook, "Software Architecture: Foundations, Theory, and Practice" is > something that should be read by the community, but not for the purpose > of using that textbook to teach REST. 
The textbook uses REST to > illustrate the principles of software architecture, it doesn't teach > REST. But it can be used to inform us on how to better teach REST. > > The textbook has chapters on Modeling, Visualization, Analysis, > Implementation, and Deployment and Mobility. This is the disciplined > approach that I keep harping on about, of late. > > The Modeling chapter discusses modeling both architectures and > architectural styles. It says nothing about modeling specific to > REST. Roy's thesis uses modeling to illustrate the REST architectural > style. So the first challenge in teaching REST is to teach how to > model the components, connectors, resources and interfaces for a > proposed system. REST constrains the interaction between connectors, > and these constraints must be part of the model. > > The Visualization chapter explains the separation of modeling and > visualization, but says nothing about visualization within the context > of REST. The second challenge in teaching REST using a software- > architecture-centric approach, is to use the model as a basis for > visualizing a proposed system in terms of the Process, Connector and > Data views for REST as described in Roy's thesis. > > The Analysis chapter also has nothing REST-specific. It's fairly self- > explanatory, though. Modeling, Visualization and Analysis are not a > serial approach, but an iterative process. This is the stage where, if > the Model calls for the Atom media type, despite the lack of URIs at > this point, the documents may be written and validated to flesh out the > data model for analysis. How many hardware resources does the model > require? Does the model need to be adjusted up/down? The third > challenge in teaching REST is, does the model fit the system's goals? > > Finally, we get to Implementation, another chapter with nary a peep > about REST. 
(I say finally, because the Deployment chapter covers > topics that, frankly, anyone pursuing REST probably has hands-on > experience with, so I don't see it as a teaching challenge.) Yes, this > is where a URI allocation scheme is finally devised for the modeled, > visualized and analyzed resources, and methods implemented so we can > pass data over the wire. It is iterative with the previous methods -- > selecting off-the-shelf parts may require architectural adjustment due > to different design assumptions being made in a standard library. > > The textbook defines Implementation as the problem of maintaining a > mapping between the developed system and its architectural model, and > focuses on frameworks as the solution. It also says, "To imbue > [desired properties] in the target system, the implementation _must_ be > derived from its architecture." This is the fourth, and most important, > challenge in teaching REST. Is the reason so many systems claim to be > RESTful, but aren't, because 99% of developers simply don't *know* how > to derive an implementation from an architectural style, because they've > never been taught? I don't think they need to be taught, only given the > tools to understand how a RESTful implementation is derived -- that > these tools are derived from the tenets of software architecture may > remain hidden behind a generic interface (so to speak). > > My suggestion is to dredge up and dust off ye olde shopping-cart > example. Why do we insist on presenting it by defining it as what > methods to apply to what resources of interest to obtain what response > code and data, beginning by defining a URI allocation scheme, when we > know that URI allocation schemes have (almost) nothing to do with REST, > and Roy has told us that we should be discussing our resources in terms > of media types and link relations instead? 
At some point, it should be > presented in terms of Modeling, Visualizing, Analyzing, and > Implementing in a REST-specific fashion. I think this may address some > of the criticism of REST lacking some sort of formal guidelines. > > In brief: Define resources in terms of standard media types and link > relations, saving URI allocation and method selection for the > implementation phase. I think the struggle in communicating REST has been that people jump to chapter 5. Or, when teaching it, people start with chapter 5. Chapter 5 reasonably assumes the knowledge of the previous four, and so jumping straight into it without appreciating its foundation will inevitably lead to a perverted understanding. I think one way to address this is to start explaining the framework as a precursor to REST - in the end they must know this anyway, or else how will they know about adding new constraints and such. Of course, explaining software architecture is hard because of preconceived notions and loaded vocabularies - I posted about this to the list (can't find it) some time ago and reposted it here [1]. --tim [1] - http://williamstw.blogspot.com/2009/11/architectural-styles-constraints.html
On Sun, Dec 20, 2009 at 1:47 PM, Sebastien Lambla <seb@...> wrote: >> For example, I read rfc3023 to mean that >> a type with a +xml should be considered 'more specific' than the >> generic xml. At least, it indicates that in section 7, but further >> confusing me it says in the appendix that they should be considered >> opaque and independent. If you have pointers to something that >> explains this better, I'd appreciate it... > > I don't find a passage in rfc3023 that indicates that the suffix is anything > but a convention used to know what formats are in the xml family. As such, > media types should continue to be processed in an opaque fashion, including > the attributes. I'm reading section 7 that way. Specifically, " As XML development continues, new XML document types are appearing rapidly. Many of these XML document types would benefit from the identification possibilities of a more specific MIME media type than text/xml or application/xml can provide, and it is likely that many new media types for XML-based document types will be registered in the near and ongoing future. While the benefits of specific MIME types for particular types of XML documents are significant, all XML documents share common structures and syntax that make possible common processing. Some areas where 'generic' processing is useful include: ... " I figured the language 'generic' and 'more specific' were meant to match up with the conneg language of HTTP? Thanks, --tim
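The "+xml" suffix convention under discussion can be shown in a few lines. This is a sketch, not a full RFC 3023 parser: it only answers the narrow question "is generic XML processing reasonable for this media type?", and the vendor type used in the example is taken from earlier in the thread.

```python
# Sketch of the RFC 3023 "+xml" suffix convention: a type like
# application/atom+xml is more specific than application/xml, but the
# "+xml" suffix signals that generic XML processing still applies.
# Illustrative only; parameters are simply dropped, not interpreted.

def is_xml_family(media_type):
    """True if generic XML processing is reasonable for this media type."""
    base = media_type.split(";", 1)[0].strip().lower()  # drop parameters
    return base in ("text/xml", "application/xml") or base.endswith("+xml")

for mt in ("application/atom+xml",
           "application/vnd.bank.org.account+xml; charset=utf-8",
           "text/html"):
    print(mt, "->", is_xml_family(mt))
```

For conneg purposes, though, the types remain distinct opaque identifiers — `application/atom+xml` does not "match" an `Accept: application/xml` header just because of the suffix, which is one reading of the tension Tim notes between section 7 and the appendix.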
<snip> The allowable subset of methods on a given resource isn't strictly defined by the representation. In HTTP, OPTIONS would be the way to determine that, right? </snip> I agree that the HTTP spec for OPTIONS [1] implies that clients can use OPTIONS to determine which methods are allowed [2]. However, I've not seen examples of clients doing this at runtime (examples anyone?). So, this can be a dev-time check for some, but then I would treat this as a "hint" rather than a guarantee since the server can change this at some time in the future. [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.2 [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.7 mca http://amundsen.com/blog/ On Mon, Dec 21, 2009 at 09:07, Tim Williams <williamstw@...> wrote: > On Mon, Dec 21, 2009 at 12:46 AM, Eric J. Bowman <eric@...> wrote: >> berend@... wrote: >>> >>> Eric> In brief: Define resources in terms of standard media types >>> Eric> and link relations, saving URI allocation and method >>> Eric> selection for the implementation phase. >>> >>> And I think this is way, way too abstract if you're new. It's the >>> standard media types and link relations that take a while to sink in. >>> >> >> Yeah, because we never take the architectural-style approach around >> here, and say things like: >> >> "If your only representation of that resource is text/html, then you >> can't DELETE that resource. You'd have to change to, or use conneg to >> add a representation in, application/xhtml+xml, because that media type >> supports DELETE (via Xforms) while text/html only supports GET and >> POST. The same goes for application/rss+xml, which only supports GET >> -- you'd have to change to, or use conneg to add, application/atom+xml >> because that media type supports DELETE (via Atom Protocol). >> >> If you go rogue because HTTP allows you to DELETE your negotiated text/ >> html + application/rss+xml resource anyway, then you're violating the >> uniform interface constraint. 
> > At risk of showing my ignorance, I'm not seeing how this is violating > any constraint. As I understand it, the uniform interface is defined > as the 'contract' between components in the system - independent of > resources or their representation. The allowable subset of methods on > a given resource isn't strictly defined by the representation. In > HTTP, OPTIONS would be the way to determine that, right? > > --tim > > > ------------------------------------ > > Yahoo! Groups Links > > > >
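Mike's point — that an OPTIONS response is a hint rather than a guarantee — can be illustrated by the client-side half of such a check. This sketch does no network I/O; it only parses an Allow header value (RFC 2616 sec 14.7) the way a dev-time check might, and the header value shown is invented.

```python
# Sketch: treat the Allow header from an OPTIONS response as a *hint*
# about supported methods. Per the thread's caveat, the server can change
# the allowed set at any time, so a 405 at request time must still be
# handled regardless of what this check said.

def parse_allow(header_value):
    """Parse an Allow header (RFC 2616 sec 14.7) into a set of methods."""
    return {m.strip().upper() for m in header_value.split(",") if m.strip()}

allowed = parse_allow("GET, HEAD, PUT, delete")
print(sorted(allowed))

# Dev-time hint, not a contract:
if "DELETE" in allowed:
    print("client may attempt DELETE (but must still handle 405)")
```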
On Dec 21, 2009, at 4:19 PM, mike amundsen wrote: > <snip> > The allowable subset of methods on a given resource isn't strictly > defined by the representation. In HTTP, OPTIONS would be the way to > determine that, right? > </snip> The most common ways of stating which method to use are - the method to use is expressed at runtime by means of a form (e.g. HTML forms, OpenSearch's parameters extension) - the method to use is defined by the media type specification (e.g. AtomPub) Discovery at runtime is IMHO not really that useful except maybe for generic clients that crawl a URI space and record resource capabilities. Usually, if a client does not understand the semantics of a link well enough to immediately know the method, then finding out about the method is not the primary problem (understanding the link would be the primary thing to do). Finding out about support for PUT might make sense because that enables the client to directly infer authorability of the resource. OTOH, an Expect: 100-continue header on the PUT request would make the previous check for PUT unnecessary. Jan > > I agree that the HTTP spec for OPTIONS [1] implies that clients can > use OPTIONS to determine which methods are allowed [2]. However, I've > not seen examples of clients doing this at runtime (examples anyone?). > So, this can be a dev-time check for some, but then I would treat this > as a "hint" rather than a guarantee since the server can change this > at some time in the future. > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.2 > [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.7 > mca > http://amundsen.com/blog/ > > > > > On Mon, Dec 21, 2009 at 09:07, Tim Williams <williamstw@...> > wrote: >> On Mon, Dec 21, 2009 at 12:46 AM, Eric J. Bowman <eric@... >> > wrote: >>> berend@... 
wrote: >>>> >>>> Eric> In brief: Define resources in terms of standard media >>>> types >>>> Eric> and link relations, saving URI allocation and method >>>> Eric> selection for the implementation phase. >>>> >>>> And I think this is way, way too abstract if you're new. It's the >>>> standard media types and link relations that take a while to sink >>>> in. >>>> >>> >>> Yeah, because we never take the architectural-style approach around >>> here, and say things like: >>> >>> "If your only representation of that resource is text/html, then you >>> can't DELETE that resource. You'd have to change to, or use >>> conneg to >>> add a representation in, application/xhtml+xml, because that media >>> type >>> supports DELETE (via Xforms) while text/html only supports GET and >>> POST. The same goes for application/rss+xml, which only supports >>> GET >>> -- you'd have to change to, or use conneg to add, application/atom >>> +xml >>> because that media type supports DELETE (via Atom Protocol). >>> >>> If you go rogue because HTTP allows you to DELETE your negotiated >>> text/ >>> html + application/rss+xml resource anyway, then you're violating >>> the >>> uniform interface constraint. >> >> At risk of showing my ignorance, I'm not seeing how this is violating >> any constraint. As I understand it, the uniform interface is defined >> as the 'contract' between components in the system - independent of >> resources or their representation. The allowable subset of methods >> on >> a given resource isn't strictly defined by the representation. In >> HTTP, OPTIONS would be the way to determine that, right? >> >> --tim >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Tim: <snip> My original contention was that 'calling DELETE' on some resource (URI) provided by the server, isn't 'going rogue' or violating the uniform interface even if it's not in the representation. </snip> I think we're in agreement here. I posted this more as a request to see if anyone actually knows of runtime use of OPTIONS to settle interface issues. I think Jan's point about using Expect headers is a good one. I, for one, have never used Expect as a request header on POST or PUT [1] to check for server compliance. Anyone have a living example of this? mca http://amundsen.com/blog/ [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.3 On Mon, Dec 21, 2009 at 11:53, Tim Williams <williamstw@...> wrote: > On Mon, Dec 21, 2009 at 10:19 AM, mike amundsen <mamund@...> wrote: >> <snip> >> The allowable subset of methods on a given resource isn't strictly >> defined by the representation. In HTTP, OPTIONS would be the way to >> determine that, right? >> </snip> >> >> I agree that the HTTP spec for OPTIONS [1] implies that clients can >> use OPTIONS to determine which methods are allowed [2]. However, I've >> not seen examples of clients doing this at runtime (examples anyone?). >> So, this can be a dev-time check for some, but then I would treat this >> as a "hint" rather than a guarantee since the server can change this >> at some time in the future. > > I don't think I've seen a need to do that check at runtime. > Practically, the allowable subset of the uniform interface for a given > resource are stated in documentation somewhere? It is a more precise > way to determine the allowable subset for a given resource though. > > My original contention was that 'calling DELETE' on some resource > (URI) provided by the server, isn't 'going rogue' or violating the > uniform interface even if it's not in the representation. 
It may be > met with a 405, but since "DELETE" is a part of the uniform interface > between components in the system, I don't see how using it might be > considered a violation of it. > > --tim >
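The Expect: 100-continue alternative that Jan raises and Mike asks about can be reduced to a small piece of client-side decision logic. This models only the handshake decision from RFC 2616 sec 8.2.3 — send the headers with the expectation, then choose from the interim status whether to transmit the body; no real sockets are involved, and the header values shown are illustrative.

```python
# Sketch of the client side of an "Expect: 100-continue" PUT: the client
# sends only the headers first, then decides from the server's interim
# status whether to transmit the (possibly large) request body.
# Decision logic only (RFC 2616 sec 8.2.3); no network I/O.

def should_send_body(interim_status):
    """100 Continue -> send the body; anything else (e.g. 417 Expectation
    Failed, or an early final response) -> abandon the body."""
    return interim_status == 100

request_headers = {
    "Expect": "100-continue",
    "Content-Type": "application/atom+xml",  # illustrative payload type
}

for status in (100, 417):
    verdict = "send body" if should_send_body(status) else "do not send body"
    print(status, "->", verdict)
```

This is why the prior OPTIONS probe for PUT support becomes unnecessary: the client learns whether the PUT will be accepted before committing the body, at the moment of the actual request.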
On Mon, Dec 21, 2009 at 10:19 AM, mike amundsen <mamund@...> wrote: > <snip> > The allowable subset of methods on a given resource isn't strictly > defined by the representation. In HTTP, OPTIONS would be the way to > determine that, right? > </snip> > > I agree that the HTTP spec for OPTIONS [1] implies that clients can > use OPTIONS to determine which methods are allowed [2]. However, I've > not seen examples of clients doing this at runtime (examples anyone?). > So, this can be a dev-time check for some, but then I would treat this > as a "hint" rather than a guarantee since the server can change this > at some time in the future. I don't think I've seen a need to do that check at runtime. Practically, the allowable subset of the uniform interface for a given resource are stated in documentation somewhere? It is a more precise way to determine the allowable subset for a given resource though. My original contention was that 'calling DELETE' on some resource (URI) provided by the server, isn't 'going rogue' or violating the uniform interface even if it's not in the representation. It may be met with a 405, but since "DELETE" is a part of the uniform interface between components in the system, I don't see how using it might be considered a violation of it. --tim
Hello Eric. I actually cheer that idea of yours. I think I mentioned once on this list that I usually do not answer HTTP issues; my comments are more of the architectural type, and I usually say that you must first determine whether your application will gain from REST before implementing. First, no architect comes from a non-development background. You must be a developer first. I teach a software system architecture class at the University, and the first part of the course is focused on getting the student, a developer, to stop thinking as such and start seeing the solution as a bigger, wider thing, beyond code. A difficult thing to do, believe me. When we get to REST (just a two-hour introduction), we study it as a metaphor-driven architecture, after we have understood all the variables around quality properties and views. At that point, students understand that not all systems are the same, and that you can analyze lots of factors before even thinking about classes and packages. To truly understand REST, you need to understand the basics of architecture, the whys. Then you can work on establishing the need for REST in your app, and then on how to organize your architectural elements following the REST metaphorical constraints. Converting that into code should be the least difficult thing to do. And that is where you see all those questions about APIs and HTTP interactions. I do think the important things are not there; the important thing is not about using POST or PUT. See what I mean? It is funny you mention the book. I'm actually in the process of writing and defining a book called Architecting with REST, but following a different approach from the one you exemplify. I would love to see more questions on this list about architectural or design issues, though of course no fewer of the HTTP ones, which are an invaluable source of knowledge for actually creating the app. BTW, REST is actually kind of hard. Cheers! 
William Martinez Pomares --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > The problem that's been preoccupying my thoughts during the time I > spend experimenting with REST, is how to teach it. I don't think > anyone disputes the fact that REST is hard to learn. But why is that? > I've convinced myself it's not because the students are morons, but > that we, collectively as a community, have failed to teach it > properly. The best evidence of that, is the recent thread asking for > examples of good REST systems: It's infinitely easier to find REST > implementations that aren't, than it is to find good examples (I've > seen REST implemented effectively on Intranets where the client is a > known quantity) that we can point to. > > We don't teach it properly, because we didn't learn it properly > ourselves. Besides Roy, who here at any level of REST ability has a > background in software architecture? Personally, I think it took me so > many years to become comfortable with REST because it was my first > experience with software development guided by a defined architectural > style. I basically had to teach myself software architecture, but not > until well after I started fancying myself a REST developer. > > What I'm saying, is that REST must be taught in terms of applied > architecture, instead of by example, before there will ever be enough > good examples to point to. You can't learn XSLT by reading O'Reilly's > "XSLT Cookbook" of examples, yet we try teaching REST by hauling out > the good ol' shopping cart every time. This has obviously failed. > > I don't think it's necessary for a REST student to understand anything > about software architecture (except maybe a few terms), only to follow > an approach grounded in software architecture. 
The wonderful new > textbook, "Software Architecture: Foundations, Theory, and Practice" is > something that should be read by the community, but not for the purpose > of using that textbook to teach REST. The textbook uses REST to > illustrate the principles of software architecture, it doesn't teach > REST. But it can be used to inform us on how to better teach REST. > > The textbook has chapters on Modeling, Visualization, Analysis, > Implementation, and Deployment and Mobility. This is the disciplined > approach that I keep harping on about, of late. > > The Modeling chapter discusses modeling both architectures and > architectural styles. It says nothing about modeling specific to > REST. Roy's thesis uses modeling to illustrate the REST architectural > style. So the first challenge in teaching REST is to teach how to > model the components, connectors, resources and interfaces for a > proposed system. REST constrains the interaction between connectors, > and these constraints must be part of the model. > > The Visualization chapter explains the separation of modeling and > visualization, but says nothing about visualization within the context > of REST. The second challenge in teaching REST using a software- > architecture-centric approach, is to use the model as a basis for > visualizing a proposed system in terms of the Process, Connector and > Data views for REST as described in Roy's thesis. > > The Analysis chapter also has nothing REST-specific. It's fairly self- > explanatory, though. Modeling, Visualization and Analysis are not a > serial approach, but an iterative process. This is the stage where, if > the Model calls for the Atom media type, despite the lack of URIs at > this point, the documents may be written and validated to flesh out the > data model for analysis. How many hardware resources does the model > require? Does the model need to be adjusted up/down? The third > challenge in teaching REST is, does the model fit the system's goals? 
> > Finally, we get to Implementation, another chapter with nary a peep > about REST. (I say finally, because the Deployment chapter covers > topics that, frankly, anyone pursuing REST probably has hands-on > experience with, so I don't see it as a teaching challenge.) Yes, this > is where a URI allocation scheme is finally devised for the modeled, > visualized and analyzed resources, and methods implemented so we can > pass data over the wire. It is iterative with the previous methods -- > selecting off-the-shelf parts may require architectural adjustment due > to different design assumptions being made in a standard library. > > The textbook defines Implementation as the problem of maintaining a > mapping between the developed system and its architectural model, and > focuses on frameworks as the solution. It also says, "To imbue > [desired properties] in the target system, the implementation _must_ be > derived from its architecture." This is the fourth, and most important, > challenge in teaching REST. Is the reason so many systems claim to be > RESTful, but aren't, because 99% of developers simply don't *know* how > to derive an implementation from an architectural style, because they've > never been taught? I don't think they need to be taught, only given the > tools to understand how a RESTful implementation is derived -- that > these tools are derived from the tenets of software architecture may > remain hidden behind a generic interface (so to speak). > > My suggestion is to dredge up and dust off ye olde shopping-cart > example. Why do we insist on presenting it by defining it as what > methods to apply to what resources of interest to obtain what response > code and data, beginning by defining a URI allocation scheme, when we > know that URI allocation schemes have (almost) nothing to do with REST, > and Roy has told us that we should be discussing our resources in terms > of media types and link relations instead? 
At some point, it should be > presented in terms of Modeling, Visualizing, Analyzing, and > Implementing in a REST-specific fashion. I think this may address some > of the criticism of REST lacking some sort of formal guidelines. > > In brief: Define resources in terms of standard media types and link > relations, saving URI allocation and method selection for the > implementation phase. > > -Eric >
Hi Jan, Forgive me, but I've been squinting at your activity diagram [1] and trying to figure out how it maps onto a sequence of REST interactions. E.g., can each message passing from buyer to seller be mapped onto an HTTP request, and each message from seller to buyer an HTTP response? Or am I completely misunderstanding the intention of the diagram? I'm interested because I've been trying to figure out the best conventions for diagramming workflow that map cleanly onto a hypermedia / REST service design. Your activity diagram is quite different from the type of state transition diagrams used in [2]. Maybe I shouldn't be trying to compare [1] and [2] because they're trying to model different aspects of the system? Sorry, thinking out loud... Thanks, Alistair. [1] http://docs.oasis-open.org/ubl/cs-UBL-2.0/art/UBL-2.0-OrderingProcess.jpg [2] http://www.infoq.com/articles/webber-rest-workflow --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Dec 21, 2009, at 11:56 AM, swschilke wrote: > > > Dear Everbody, > > > > you are such a valuable resource for knowledge about REST that I > > apologize to bother you with my questions: > > No need to apologize - that is the purpose of this list. > > > > > What would you recommend to visualize REST (WADL) architectures? > > I've read some papers proposing extensions e.g., to UML, but I am > > open to recommendations (worst case I use Powerpoint). > > What do you intend to visualize? Design artifacts? or Runtime > examples? Or server side implementation aspects? > > As for design artifacts: since with REST all design is done by > specifying hypermedia there is not really much you can visualize > regarding design time. > > For runtime examples I use UML activity diagrams with swim lanes, > placing the messages as object nodes on the lanes (see the UBL docs[1] > as an example). > > For server side implementation class diagrams are a good choice, > making each resource a class. 
> > > HTH, > > Jan > > > [1] http://docs.oasis-open.org/ubl/cs-UBL-2.0/art/UBL-2.0-OrderingProcess.jpg > > > > > > Kind regards and thank you very much > > > > sws > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- >
"But this means that it is ok for the server to break it's own promise: AtomPub says a feed will be available. And it is still ok for the server to send me a 406?" I'm not sure how this is breaking its promise. Don't forget that the response codes are part of the standard interface and contract with the client. While not specific to REST (i.e. HTTP is one implementation), the concept of a uniform interface is. I think that many people focus on the verbs (GET, PUT, POST, DELETE) while very little attention is given to the plethora of response codes. AtomPub definitely brought 201 Created up from oblivion, but the others are just as important. If a bad request or a state transition isn't understood, there is a proper response to it. On Mon, Dec 21, 2009 at 12:48 AM, Jan Algermissen <algermissen1971@...>wrote: > > On Dec 21, 2009, at 8:54 AM, Eric J. Bowman wrote: > > > Jan Algermissen wrote: > >> > >> So then, you'd say that it is perfectly RESTful that AtomPub > >> effectively says "a GET on a collection MUST at least return > >> application/atom+xml"? > >> > > > > Yes. That allows for a resource to have more than just an Atom > > representation. Reading between the lines and remembering that a > > request is made up of more than just its URI and method, it also > > effectively says that a GET request with an Accept header consisting > > only of 'application/atom+xml' MUST return 'application/atom+xml' or > > issue a 406 error. > > But this means that it is ok for the server to break it's own promise: > AtomPub says a feed will be available. And it is still ok for the > server to send me a 406? > > Suppose you invested serious money in building that client and the > spec says that there will be a feed. Suddenly the whole communication > falls apart, business level harm is done etc. because the service > sends 406 instead of a feed document. > > Whose fault is it and who is going to pay for the damage done? 
> > Jan > > > > > > > > -Eric > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
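Noah's point about response codes can be illustrated with a deliberately simplified sketch of server-side content negotiation. This is not from any post in the thread; q-values and preference ordering are ignored, and all names are illustrative.

```python
def negotiate(accept_header, offered):
    """Pick the first offered media type acceptable to the client.

    Returns None when nothing matches, which a server would translate
    into a 406 Not Acceptable response rather than silently sending
    something the client did not ask for.
    """
    # Strip parameters like q-values: "application/*;q=0.5" -> "application/*"
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    for media_type in accepted:
        for offer in offered:
            if media_type == "*/*" or media_type == offer:
                return offer
            # Range match: "application/*" covers "application/atom+xml"
            if media_type.endswith("/*") and offer.startswith(media_type[:-1]):
                return offer
    return None

print(negotiate("application/atom+xml", ["application/atom+xml"]))
# application/atom+xml
print(negotiate("image/png", ["application/atom+xml"]))
# None  -> respond 406
```

The design point is that 406 is part of the uniform interface: a client sending a narrow Accept header has, in effect, asked the server to refuse rather than improvise.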
On Dec 21, 2009, at 6:23 PM, Noah Campbell wrote: > "But this means that it is ok for the server to break it's own > promise: > AtomPub says a feed will be available. And it is still ok for the > server to send me a 406?" > > I'm not sure how this is breaking it's promise? The spec says that a GET on a collection will return an Atom feed[1]. Yes, I know that the client should prepare for other return codes as well. But then you can also ask why the spec says what is being returned at all. The reason the spec says that an Atom feed is returned is that this design-time information is needed to code an AtomPub client. The statement cannot be removed from the spec. It is part of the semantics of application/atomsvc+xml. AtomPub servers that do not respond with an Atom feed to a GET request to a collection are breaking clients. And the hint about the other HTTP codes is just informing the developer to handle them gracefully. The intended/expected coordination between client and server fails, regardless of handling the 406 or running into an unhandled exception. The client cannot do what it was coded to do (coded to do based on [1]). [1] http://tools.ietf.org/html/rfc5023#section-5.2 > Don't forget that the response codes are part of the standard > interface and contract with the client. While not specific to REST > (i.e. HTTP is one implementation) the concept of uniform interface > is. I think that many people focus on the verbs (GET, PUT, POST, > DELETE) while very little attention is given to the plethora of > response codes. Atompub definitely brought 201 Created up from > oblivion, but the others just as important. If a bad request or a > state transition isn't understood, there is a proper response to it. Yes. But the proper response isn't helping because the implied contract is broken anyhow. Do you think that an AtomPub server that is only capable of replying to a GET on a collection with an image of the collection is a correct implementation of RFC 5023? 
Should a test tool report an error for such a service because [1] is not correctly implemented? Jan > > -Noah > > On Mon, Dec 21, 2009 at 12:48 AM, Jan Algermissen <algermissen1971@... > > wrote: > > On Dec 21, 2009, at 8:54 AM, Eric J. Bowman wrote: > > > Jan Algermissen wrote: > >> > >> So then, you'd say that it is perfectly RESTful that AtomPub > >> effectively says "a GET on a collection MUST at least return > >> application/atom+xml"? > >> > > > > Yes. That allows for a resource to have more than just an Atom > > representation. Reading between the lines and remembering that a > > request is made up of more than just its URI and method, it also > > effectively says that a GET request with an Accept header consisting > > only of 'application/atom+xml' MUST return 'application/atom+xml' or > > issue a 406 error. > > But this means that it is ok for the server to break it's own promise: > AtomPub says a feed will be available. And it is still ok for the > server to send me a 406? > > Suppose you invested serious money in building that client and the > spec says that there will be a feed. Suddenly the whole communication > falls apart, business level harm is done etc. because the service > sends 406 instead of a feed document. > > Whose fault is it and who is going to pay for the damage done? > > Jan > > > > > > > > -Eric > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
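A defensive client-side counterpart to the contract Jan describes might verify the response before assuming a feed arrived. This is a hypothetical sketch; the hand-off to a real Atom parser is elided.

```python
class ContractError(Exception):
    """The server response did not match the spec-level expectation."""

def check_collection_response(status, content_type):
    """RFC 5023 leads a client to expect an Atom feed from a GET on a
    collection; verify status and media type before handing the body
    to a feed parser, instead of assuming the contract was honored."""
    if status != 200:
        raise ContractError(f"expected 200 for collection GET, got {status}")
    # Ignore media type parameters such as type=feed or charset
    base = content_type.split(";")[0].strip().lower()
    if base != "application/atom+xml":
        raise ContractError(f"expected application/atom+xml, got {base}")

# The happy path passes silently; a 406 surfaces as a contract failure.
check_collection_response(200, "application/atom+xml; type=feed; charset=utf-8")
try:
    check_collection_response(406, "text/plain")
except ContractError as e:
    print("contract broken:", e)
```

Whether the test tool Jan mentions should flag such a server is then a question of whether this check belongs in a conformance suite or only in defensive client code.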
On Dec 21, 2009, at 6:23 PM, Noah Campbell wrote: > "But this means that it is ok for the server to break it's own > promise: > AtomPub says a feed will be available. And it is still ok for the > server to send me a 406?" > > I'm not sure how this is breaking it's promise? Just saw that AtomPub even says this: "For example, although this specification only defines the expected behavior of Collections with respect to GET and POST, this does not imply that PUT, DELETE, PROPPATCH, and others are forbidden on Collection Resources" [1] AtomPub says that it "defines the expected behavior of Collections with respect to GET and POST" which is then provided in section 5. Is that not defining expectations that client developers use to implement AtomPub clients? If these expectations are not met by the server then, yes, the server is breaking the contract established by RFC 5023. Jan [1] http://tools.ietf.org/html/rfc5023#section-4.4 > Don't forget that the response codes are part of the standard > interface and contract with the client. While not specific to REST > (i.e. HTTP is one implementation) the concept of uniform interface > is. I think that many people focus on the verbs (GET, PUT, POST, > DELETE) while very little attention is given to the plethora of > response codes. Atompub definitely brought 201 Created up from > oblivion, but the others just as important. If a bad request or a > state transition isn't understood, there is a proper response to it. > > -Noah > > On Mon, Dec 21, 2009 at 12:48 AM, Jan Algermissen <algermissen1971@... > > wrote: > > On Dec 21, 2009, at 8:54 AM, Eric J. Bowman wrote: > > > Jan Algermissen wrote: > >> > >> So then, you'd say that it is perfectly RESTful that AtomPub > >> effectively says "a GET on a collection MUST at least return > >> application/atom+xml"? > >> > > > > Yes. That allows for a resource to have more than just an Atom > > representation. 
Reading between the lines and remembering that a > > request is made up of more than just its URI and method, it also > > effectively says that a GET request with an Accept header consisting > > only of 'application/atom+xml' MUST return 'application/atom+xml' or > > issue a 406 error. > > But this means that it is ok for the server to break it's own promise: > AtomPub says a feed will be available. And it is still ok for the > server to send me a 406? > > Suppose you invested serious money in building that client and the > spec says that there will be a feed. Suddenly the whole communication > falls apart, business level harm is done etc. because the service > sends 406 instead of a feed document. > > Whose fault is it and who is going to pay for the damage done? > > Jan > > > > > > > > -Eric > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
On Mon, Dec 21, 2009 at 9:48 AM, Jan Algermissen <algermissen1971@...>wrote: > > On Dec 21, 2009, at 6:23 PM, Noah Campbell wrote: > > "But this means that it is ok for the server to break it's own promise: >> AtomPub says a feed will be available. And it is still ok for the >> server to send me a 406?" >> >> I'm not sure how this is breaking it's promise? >> > > Just saw that AtomPub even says this: > "For example, although this specification only defines the expected > behavior of Collections with respect to GET and POST, this does not imply > that PUT, DELETE, PROPPATCH, and others are forbidden on Collection > Resources" [1] > AtomPub says that it "defines the expected behavior of Collections with > respect to GET and POST" which is then provided in section 5. Is that not > defining expectations that client developers use to implement AtomPub > clients? > If these expectations are not met by the server then, yes, the server is > breaking the contract established by RFC 5023. > Jan > [1] http://tools.ietf.org/html/rfc5023#section-4.4 > > > I think you answered your own question. > > > Don't forget that the response codes are part of the standard interface >> and contract with the client. While not specific to REST (i.e. HTTP is one >> implementation) the concept of uniform interface is. I think that many >> people focus on the verbs (GET, PUT, POST, DELETE) while very little >> attention is given to the plethora of response codes. Atompub definitely >> brought 201 Created up from oblivion, but the others just as important. If >> a bad request or a state transition isn't understood, there is a proper >> response to it. >> >> -Noah >> >> On Mon, Dec 21, 2009 at 12:48 AM, Jan Algermissen < >> algermissen1971@...> wrote: >> >> On Dec 21, 2009, at 8:54 AM, Eric J. 
Bowman wrote: >> >> > Jan Algermissen wrote: >> >> >> >> So then, you'd say that it is perfectly RESTful that AtomPub >> >> effectively says "a GET on a collection MUST at least return >> >> application/atom+xml"? >> >> >> > >> > Yes. That allows for a resource to have more than just an Atom >> > representation. Reading between the lines and remembering that a >> > request is made up of more than just its URI and method, it also >> > effectively says that a GET request with an Accept header consisting >> > only of 'application/atom+xml' MUST return 'application/atom+xml' or >> > issue a 406 error. >> >> But this means that it is ok for the server to break it's own promise: >> AtomPub says a feed will be available. And it is still ok for the >> server to send me a 406? >> >> Suppose you invested serious money in building that client and the >> spec says that there will be a feed. Suddenly the whole communication >> falls apart, business level harm is done etc. because the service >> sends 406 instead of a feed document. >> >> Whose fault is it and who is going to pay for the damage done? >> >> Jan >> >> >> >> >> > >> > -Eric >> > >> > >> > ------------------------------------ >> > >> > Yahoo! Groups Links >> > >> > >> > >> >> -------------------------------------- >> Jan Algermissen >> >> Mail: algermissen@... >> Blog: http://algermissen.blogspot.com/ >> Home: http://www.jalgermissen.com >> -------------------------------------- >> >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> >> > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > >
"AtomPub for example enables the client *implementor* to assume that a GET on a collection will return an Atom feed document." To your prior point, something is broken, but what? Is it the architectural style (by asking on this mailing list it may be that you think it is)? Is it the transport, HTTP? Is it the specification? Is it the implementor of the server or the client? Is it something else? You focus on the assumption being negative, and rightly so, but let's be formal about what an assumption is. You've alluded to an unmet assumption as negative. If I had to translate this into code it would look like this: fread (buffer, 1, lSize, pFile) There is an assumption here, given all the variables are initialized correctly. Do you see it? The return value is not checked. The read may not have read all the data in the file in this particular call. Who is the guilty party? Is it the architecture, POSIX? Is it the specification, http://www.cplusplus.com/reference/clibrary/cstdio/fread/? Is it the implementation, GNU? Is it the implementor? I'd argue it's the implementor. C has a long-established history of using return values to indicate success or failure (with errno providing a (kludgy?) workaround for error details). I would urge an implementor to understand the architectural style, the specification, and the implementation, and focus very hard on making sure assumptions like the above are not scattered throughout the code. Since REST is about two remote systems communicating, I'd argue that any client must validate any assumption before proceeding, including checking the error code. If not, the client will be brittle, prone to error, and cost more in ongoing maintenance. Good, robust applications assume nothing. Let's assume for a moment the AtomPub spec represents the typical spec for a service. It assumes a RESTful architectural style using the HTTP transport. To your point, the service must behave as specified for any goal to be obtained. 
Aspects of the HTTP transport "leak" into the interaction even though they have not been specified. The spec doesn't call out all the different response codes and how to handle them; it relies on those familiar with the HTTP transport to deal with them gracefully. Case in point, if you do: GET / Accept: application/atomsvc+xml and get a 307 Temporary Redirect Location: /svc.atom or 305 Use Proxy Location: /proxy/svc.atom or 401 Unauthorized WWW-Authenticate: basic Is this an error? Roy's thesis doesn't explicitly say yes or no. However, the argument for a uniform interface is that the intermediary can participate without affecting the remote call. I'll extrapolate a little in that a uniform interface provides a common behavior that permeates all levels of an architecture, including the implementation. Testers should not be surprised to see the three responses outlined above and should be able to accommodate them appropriately. Hopefully this response helps move the discussion forward :) -Noah On Mon, Dec 21, 2009 at 4:35 AM, Jan Algermissen <algermissen1971@...>wrote: > > On Dec 21, 2009, at 1:25 PM, Jorn Wildt wrote: > > > Oh, lets backtrack a bit. You said earlier on: > > > >> In the enterprise people want to develop clients and services in > >> parallel, shich rules out client design by inspecting the runtime > >> behavior of a service. > > > > Then I said: you need not expect at runtime, you can have a mock. To > > this you answered: no, you build clients on specs. > > > > What I was trying to say was: if you build clients on specs and RFC > > 5023 (application/atomsrv+xml) is a spec, then what is keeping you > > from building any kind of REST client on similar specs for other > > media types? If both server and client agrees on the media type spec > > then both can be built individually and simultaneously. > > > No, that is all fine and I agree. 
I am questioning the RESTfulness of > specs that allow the clients to make assumptions about the hypermedia > it will receive at some point in the interaction. AtomPub for example > enables the client *implementor* to assume that a GET on a collection > will return an Atom feed document. This is equivalent to making an > assumption about the application state to be in after the GET to the > collection. > > And I am trying to say that M2M clients (besides passibe, server > driven crawlers) can only be built when such contracts are in place. > > Jan > > > > > > /Jørn > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@acm.org > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
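Noah's fread example carries over to other languages: raw streams in Python can likewise return fewer bytes than requested, so a robust caller loops rather than assuming a single read suffices. This is an illustrative sketch, not from any post in the thread.

```python
import io

def read_exact(stream, n):
    """Read exactly n bytes from a stream, looping over short reads
    instead of assuming one call returns everything (the unchecked
    assumption in the fread example above)."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = stream.read(remaining)
        if not chunk:  # EOF before n bytes: surface the problem, don't hide it
            raise EOFError(f"stream ended with {remaining} bytes still unread")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

print(read_exact(io.BytesIO(b"abcdef"), 4))  # b'abcd'
```

The analogy to the HTTP discussion: the client either validates what it actually received, or it propagates a silent assumption that will eventually break.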
"swschilke" wrote: > > I wonder which was the "first" paper on CRUD and the first paper > which brought CRUD and REST togehter? The dissertation of Dr. > Fielding maybe? > The thesis makes no mention of CRUD, and Roy is on record stating that REST has nothing to do with CRUD. -Eric
On 11 Dec 2009, at 01:41, Solomon Duskis wrote:
> These seem like great examples of RDF and semantic web data linked data. I
> definitely will be looking into it. However, I'm not quite sure how that
> would apply to a business scenario which includes both linked data and
> operations and work flow.
There is the GoodRelations ontology
http://purl.org/goodrelations/
which should start finding a lot of very useful applications. I have not been following it closely, but I think it is now used by BestBuy, and it does some interesting things in Google search results
http://www.ebusiness-unibw.org/wiki/GoodRelationsInGoogle
> What's the sweet spot for RDF? What rules of thumb would you use to choose
> RDF vs. ATOM vs. XML/JSon API alternatives?
There is already a confusion in your question: you first need to distinguish between syntax and semantics. RDF is about semantics, done on a global scale with URIs.
Here is a picture describing the relation between syntax and semantics
http://blogs.sun.com/bblfish/entry/the_limitations_of_json
The format to serialise it in is a secondary problem, and there are a lot available. In fact any XML format can be seen as an RDF format with GRDDL. The advantage of the native formats is that once you have defined your vocabulary, you no longer need to worry about the formats again! That is one big advantage.
The next is that you can mix vocabularies, so you don't have to keep reinventing names for things.
Third: you can find the meaning of a term (URLs in a linked data environment) by clicking on it. (Just like you do on the web).
http://blogs.sun.com/bblfish/entry/get_my_meaning
So the above reasons make it simpler to develop and use applications. It means the people seeing your RDF will be able to quickly work out what you are saying and how it fits with their needs.
Finally, it is just simple REST. On a global medium like the web you can ONLY use URIs to refer to things. So you might as well think that way from the start. If you don't, it is because you have not yet broken out of client/server thinking.
Henry
>
> -Solomon
>
> On Thu, Dec 10, 2009 at 7:29 PM, Story Henry <henry.story@...>wrote:
>
>>
>>
>>
>> On 26 Nov 2009, at 20:45, swschilke wrote:
>>
>>> Dear *,
>>>
>>> can you kindly point me out some good examples of REST implementations.
>> Preferable well documented ;-) - I've read the O'Reilly book about the
>> Twitter API but I want to see more.
>>
>> Try http://dbpedia.org and the growing Linked Data space
>> http://linkeddata.org/
>>
>> Henry
>>
>>> Kind regards
>>>
>>>
>>> sws
>>>
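Henry's point about mixing vocabularies can be sketched without any RDF toolkit: because every term is a full URI, names from GoodRelations and FOAF coexist in one triple set without clashing. The example.org subject below is made up for illustration.

```python
# Namespace prefixes, as an RDF toolkit would manage them.
GR = "http://purl.org/goodrelations/v1#"
FOAF = "http://xmlns.com/foaf/0.1/"
EX = "http://example.org/"  # hypothetical namespace for this sketch

# Triples as (subject, predicate, object); full URIs make every term global,
# so two vocabularies' "name"-like terms can never collide.
triples = {
    (EX + "store", GR + "name", "Example Store"),
    (EX + "store", FOAF + "homepage", EX),
}

# Because terms are global, querying across vocabularies is uniform.
about_store = {(p, o) for (s, p, o) in triples if s == EX + "store"}
print(len(about_store))  # 2
```

A real application would use an RDF library and a serialisation such as Turtle, but the naming discipline, not the format, is what carries Henry's argument.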
On Dec 21, 2009, at 5:46 PM, alistair.miles wrote: > Hi Jan, > > Forgive me, but I've been squinting your activity diagram [1] and > trying to figure out how that maps onto a sequence of REST > interactions. E.g., can each message passing from buyer to seller be > mapped onto an HTTP request, and each message from seller to buyer > an HTTP response? Or am I completely misunderstanding the intention > of the diagram? My intention was to provide an example of how you can place object nodes on the swim lanes to visualize the payload. I did not mean to point you to the collaboration actually shown in that diagram. (It is not a REST-style interaction but message passing with fixed endpoints - another story anyway). In my diagrams I show each HTTP message as an object node. Request and response. > > I'm interested because I've been trying to figure out the best > conventions for diagramming workflow that map cleanly onto a > hypermedia / REST service design. Your activity diagram is quite > different from the type of state transition diagrams used in [2]. Those are state diagrams, showing the client's current application state. That is another thing you could model that I forgot to mention. Beware, though, that this is also never a design artifact but a runtime example (since the actual state machine is not known at design time - it is discovered at run time). > Maybe I shouldn't be trying to compare [1] and [2] because they're > trying to model different aspects of the system? Sorry, thinking out > loud... See previous sentence. Jan > > Thanks, > > Alistair. 
> [1] http://docs.oasis-open.org/ubl/cs-UBL-2.0/art/UBL-2.0-OrderingProcess.jpg
> [2] http://www.infoq.com/articles/webber-rest-workflow
>
> --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote:
>>
>> On Dec 21, 2009, at 11:56 AM, swschilke wrote:
>>
>>> Dear Everybody,
>>>
>>> you are such a valuable resource for knowledge about REST that I apologize for bothering you with my questions:
>>
>> No need to apologize - that is the purpose of this list.
>>
>>> What would you recommend to visualize REST (WADL) architectures? I've read some papers proposing extensions, e.g. to UML, but I am open to recommendations (worst case I use PowerPoint).
>>
>> What do you intend to visualize? Design artifacts? Runtime examples? Or server-side implementation aspects?
>>
>> As for design artifacts: since with REST all design is done by specifying hypermedia, there is not really much you can visualize regarding design time.
>>
>> For runtime examples I use UML activity diagrams with swim lanes, placing the messages as object nodes on the lanes (see the UBL docs [1] as an example).
>>
>> For server-side implementation, class diagrams are a good choice, making each resource a class.
>>
>> HTH,
>>
>> Jan
>>
>> [1] http://docs.oasis-open.org/ubl/cs-UBL-2.0/art/UBL-2.0-OrderingProcess.jpg
>>
>>> Kind regards and thank you very much
>>>
>>> sws

------------------------------------

Yahoo! Groups Links

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
Noah,

(see below),

On Dec 21, 2009, at 8:19 PM, Noah Campbell wrote:

> "AtomPub for example enables the client *implementor* to assume that a GET on a collection will return an Atom feed document."
>
> To your prior point, something is broken, but what? Is it the architectural style (by asking it on this mailing list it may be that you think it is)? Is it the transport, HTTP? Is it the specification? Is it the implementor of the server or the client? Is it something else?
>
> You focus on the assumption being negative and rightly so, but let's be formal about what an assumption is. You've alluded to an assumption not met as negative.

It is usually not that negative on the open Web because the overall expectations are not that strict; people always plan for any kind of change to happen, and REST's advantage here is that the uniform interface enables the communication (the talking to each other) to succeed even if there is an error. Instead of everything falling apart, the client user or developer can pick up the clue (e.g. the 406 body) and follow her nose to fix things.

But this is a model that is very hard to sell inside the enterprise, because the business-level contracts require a certain degree of certainty (e.g. SLAs). Saying "hey, if business transactions suddenly stop working, look at the lock file and see what the service owner suggested as a fix. Nah, this will not happen very often, just be prepared for it in any case".

OTOH, it might be the price to pay for the evolvability extreme of not needing any kind of out-of-band communication between client and server developer at all. Possibly, if you compare the investment in time and travel resources etc. involved in discussing interfaces of the SOAP style with the cost of some missing transactions, it might even make a compelling case. (Airlines would rather pay customers some money for overbooked flights than make sure that every passenger definitely gets a seat. The latter just costs less.)
This would lead to: "If you are going to adopt REST with all the benefits, do it all the way through and believe that the business-level harm occasionally done by evolution costs far less than running a SOAP architecture in the long run."

> If I had to translate this into code it would look like this:
>
> fread (buffer, 1, lSize, pFile)
>
> There is an assumption here, given all the variables are initialized correctly. Do you see it?
>
> The return value is not checked. The read may not have read all the data in the file in this particular call. Who is the guilty party? Is it the architecture, POSIX? Is it the specification, http://www.cplusplus.com/reference/clibrary/cstdio/fread/? Is it the implementation, GNU? Is it the implementor? I'd argue it's the implementor. C has a long established history of using return values to indicate success (as well as failure... but errno provides a (kludgy?) workaround).
>
> I would urge an implementer to understand the architectural style, the specification, and the implementation, and focus very hard on making sure assumptions like the above are not scattered throughout the code. Since REST is about two remote systems communicating, I'd argue that any client must validate any assumption before proceeding, including checking the error code. If not, the client will be brittle, prone to error, and cost more in ongoing maintenance. Good, robust applications assume nothing.
>
> Let's assume for a moment the AtomPub spec represents the typical spec for a service. It assumes a RESTful architectural style using the HTTP transport. To your point, the service must behave as specified for any goal to be obtained. Aspects of the HTTP transport "leak" into the interaction even though they have not been specified. The spec doesn't call out all the different response codes and how to handle them; it relies on those familiar with the HTTP transport to deal with those gracefully.
> Case in point, if you do:
>
> GET /
> Accept: application/atomsvc+xml
>
> and get a
>
> 307 Temporary Redirect
> Location: /svc.atom
>
> or
>
> 305 Use Proxy
> Location: /proxy/svc.atom
>
> or
>
> 401 Unauthorized
> WWW-Authenticate: Basic
>
> Is this an error?
>
> Roy's thesis doesn't explicitly say yes or no. However, the argument for a uniform interface is that the intermediary can participate without affecting the remote call. I'll extrapolate a little in that a uniform interface provides a common behavior that permeates all levels of an architecture, including the implementation. Testers should not be surprised to see the three responses outlined above and should be able to accommodate them appropriately.

Agreed, and I see your point. But (sorry :-) I'd expect an HTTP client connector to be able to follow these redirects or authenticate on its own without even propagating it to the next level. Most client connectors do so (depending on config, of course). So I'd limit what we are talking about to steady states and leave out the transient ones.

However, I understand you to say that an AtomPub client implementation that uses an HTTP client connector must of course implement all of HTTP. And yes, I agree that the 406 must be handled correctly. But then what? There is no possible recovery from the broken expectation to receive an Atom feed.

> Hopefully this response helps move the discussion forward :)

Thanks for keeping up with this. I am just sorry that I seem to be so unable to get this across.

Jan

> -Noah
>
> On Mon, Dec 21, 2009 at 4:35 AM, Jan Algermissen <algermissen1971@...> wrote:
>
> On Dec 21, 2009, at 1:25 PM, Jorn Wildt wrote:
>
> > Oh, let's backtrack a bit. You said earlier on:
> >
> >> In the enterprise people want to develop clients and services in parallel, which rules out client design by inspecting the runtime behavior of a service.
> >
> > Then I said: you need not inspect at runtime, you can have a mock.
> > To this you answered: no, you build clients on specs.
> >
> > What I was trying to say was: if you build clients on specs and RFC 5023 (application/atomsvc+xml) is a spec, then what is keeping you from building any kind of REST client on similar specs for other media types? If both server and client agree on the media type spec then both can be built individually and simultaneously.
>
> No, that is all fine and I agree. I am questioning the RESTfulness of specs that allow the clients to make assumptions about the hypermedia they will receive at some point in the interaction. AtomPub for example enables the client *implementor* to assume that a GET on a collection will return an Atom feed document. This is equivalent to making an assumption about the application state to be in after the GET to the collection.
>
> And I am trying to say that M2M clients (besides passive, server-driven crawlers) can only be built when such contracts are in place.
>
> Jan
>
> > /Jørn

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
On Dec 21, 2009, at 10:18 PM, Jan Algermissen wrote: > (Like airline rather pay customers some money > for overbooked flights than to make sure that every passenger > definitely gets a seat. The latter just costs less). s/latter/former/ Jan
"This would lead to 'If you are going to adopt REST with all the benefits do it all the way through and believe that the business level harm occasionally done by evolution costs far less than running a SOAP architecture in the long run.'"

Basically, no. If your notion is that SOAP somehow solved the problems you've identified as an issue in REST, then I'm curious how you overcame SOAP's shortcomings. If anything, SOAP is more rigid, and this leads to increased cost in the face of change. This has nothing to do with SOAP the architecture, but more SOAP the implementation. WSDL has done more harm than good, IMO. I've seen POX work really well, but again it's a different architecture than REST.

I'm curious how SLA enforcement is achieved with a SOAP architecture?

-Noah

On Mon, Dec 21, 2009 at 1:18 PM, Jan Algermissen <algermissen1971@...> wrote:

> [...]
Noah,

umm, I tried to say the opposite of what you understood. What I meant is that if you really adopt REST you should do it in the same fashion as it works on the open Web (including an occasional 406, for example). The cost of the system not running until you fix the clients to catch up with the (rare but possible) unexpected server evolution should be less than the overall additional cost of adopting a WS-*-like technology.

IOW, REST is economically cheaper than SOAP even if you embrace the occasional problems[1] resulting from truly uncoupled components.

[1] 'Problem' meaning: a communication problem based on the client being a machine, not a human.

Jan

On Dec 21, 2009, at 11:47 PM, Noah Campbell wrote:

> [...]
On Dec 21, 2009, at 11:47 PM, Noah Campbell wrote:

> I'm curious how SLA enforcement is achieved with a SOAP architecture?

That is simple with SOAP, because the artifact to enforce is the API. The SLA would be around the API lifecycle (e.g. once an API is out, it has to persist for three years). I am not saying that the API is a sufficient means to guarantee stability, but it expresses a fixed contract. WS-* simply excludes evolution without explicit versioning (I hope that is really correct; not an expert there). The SLA would only be violated if an existing API went away. The evolution issue is done away with by tightly coupling the components. (AFAIK, anyway.)

Jan
> Is it something else? > > You focus on the assumption being negative and rightly so, but lets > be formal about what an assumption is. You've alluded to an > assumption not met as negative. > > I is usually not that negative on the open Web because the overall > expectations are not that strict; people allways plan for any kinds > of changes to happen and REST advantage here is that the uniform > interface enables the communication (the talking to each other) to > succeed even if there is an error. Instead of everything falling > apart the client user or developer can pick up the clue (e.g. the > 406 body) and follow her nose to fix things. > > But this is a model that is very hard to sell inside the enterprise > because the business level contracts require a certain degree of > certainty (e.g. SLAs). Saying "hey, if business transactions > suddenly stop working, look at the lock file and see what the > service owner suggested as a fix. Nah, this will not happen evry > often, just be prepared for it in any case". > > OTH, it might be the price to pay for the evolvability extreme of > not needing any kind of out of band communication between client and > server developer at all. Possibly, if you compare investment in time > and travel resources etc. involved in discussing interfaces of the > SOAP style with the cost of some missing transactions it might even > make a compelling case. (Like airline rather pay customers some > money for overbooked flights than to make sure that every passenger > definitely gets a seat. The latter just costs less). > > This would lead to "If you are going to adopt REST with all the > benefits do it all the way through and believe that the business > level harm occasionally done by evolution costs far less than > running a SOAP architecture in the long run. 
> > > If I had to translate this into code it would look like this: > > fread (buffer, 1, lSize, pFile) > > There is an assumption here given all the variables are initialized > correctly. Do you see it? > > The return value is not checked. The read may not have read all the > data in the file in this particular call. Who is the guilty party? > Is it the architecture, POSIX? Is it the specification,http://www.cplusplus.com/reference/clibrary/cstdio/fread/? > Is it the implementation, GNU? Is it the implementor? I'd argue > it's the implementor. C has a long established history of using > return values to indicate success (as well as return values...but > errno provides a (kludgy?) workaround). > > I would urge an implementer to understand the architecture style, > the specification, the implementation and focus very hard on making > sure assumptions like the above are not scattered through out the > code. Since REST is about two remote systems communicating, I'd > argue that any client must validate any assumption before > proceeding, including checking the error code. If not, the client > will be be brittle, prone to error, and cost more in ongoing > maintenance. Good, robust applications assume nothing. > > Let's assume for a moment the AtomPub spec represents the typical > spec for a service. It assumes RESTful architectural style using > the HTTP transport. To your point, the service must behave has > specified for any goal to be obtained. Aspects of the http > transport "leak" into the interaction even those it has not been > specified. The spec doesn't call out all the different response > codes and how to handle them, it relies on those familiar with the > HTTP transport to deal with those gracefully. 
Case in point, if you > do: > > GET / > Accept: application/atomsvc+xml > > and get a > > 307 Temporary Redirect > Location: /svc.atom > > or > > 305 Use Proxy > Location: /proxy/svc.atom > > or > > 401 Unauthorized > WWW-Authenticate: Basic > > Is this an error? > > Roy's thesis doesn't explicitly say yes or no. However, the argument > for a uniform interface is that the intermediary can participate > without affecting the remote call. I'll extrapolate a little in > that a uniform interface provides a common behavior that permeates > all levels of an architecture, including the implementation. > Testers should not be surprised to see the three responses outlined > above and should be able to accommodate them appropriately. > > > Agreed and I see your point. But (sorry :-) I'd expect an HTTP > client connector to be able to follow these redirects or > authenticate on its own without even propagating it to the next > level. Most client connectors do so (depending on config of course). > So, I'd limit what we are talking about to steady states and leave > out the transient ones. > > However, I understand you to say that an AtomPub client > implementation that uses an HTTP client connector must of course > implement all of HTTP. And yes, I agree that the 406 must be handled > correctly. But then? There is no possible recovery from the broken > expectation to receive an Atom feed. > > > > Hopefully this response helps move the discussion forward :) > > Thanks for keeping up with this. I am just sorry that I seem to be > so unable to get this across. > > Jan > > > > > -Noah > > On Mon, Dec 21, 2009 at 4:35 AM, Jan Algermissen <algermissen1971@... > > wrote: > > On Dec 21, 2009, at 1:25 PM, Jorn Wildt wrote: > > > Oh, let's backtrack a bit. You said earlier on: > > > >> In the enterprise people want to develop clients and services in > >> parallel, which rules out client design by inspecting the runtime > >> behavior of a service. 
> > > > Then I said: you need not expect at runtime, you can have a mock. To > > this you answered: no, you build clients on specs. > > > > What I was trying to say was: if you build clients on specs and RFC > > 5023 (application/atomsvc+xml) is a spec, then what is keeping you > > from building any kind of REST client on similar specs for other > > media types? If both server and client agree on the media type spec > > then both can be built individually and simultaneously. > > > No, that is all fine and I agree. I am questioning the RESTfulness of > specs that allow the clients to make assumptions about the hypermedia > they will receive at some point in the interaction. AtomPub for example > enables the client *implementor* to assume that a GET on a collection > will return an Atom feed document. This is equivalent to making an > assumption about the application state the client will be in after the GET to the > collection. > > And I am trying to say that M2M clients (besides passive, server- > driven crawlers) can only be built when such contracts are in place. > > Jan > > > > > > /Jørn > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
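Noah's "assume nothing" point above can be sketched as a small piece of client-side dispatch: before treating a response body as an Atom feed, classify the status code and media type and decide what to do next. The helper name and return values below are illustrative, not from any AtomPub library.

```python
# Hedged sketch: decide a client's next step instead of assuming the
# body of every response is an Atom feed. Illustrative helper only.

ATOM_FEED = "application/atom+xml"

def classify_response(status, content_type=None, location=None):
    """Classify a response before acting on it."""
    if status in (301, 302, 307):        # transient: follow the redirect
        return ("follow", location)
    if status == 305:                    # transient: retry via the proxy
        return ("use-proxy", location)
    if status == 401:                    # transient: authenticate, retry
        return ("authenticate", None)
    if status == 200 and content_type and content_type.startswith(ATOM_FEED):
        return ("parse", None)           # steady state: safe to parse
    return ("give-up", None)             # e.g. 406: surface the body to a
                                         # human; no automatic recovery
```

A connector that follows this logic handles the three example responses without surprising the testers, while Jan's 406 case falls through to the no-recovery branch.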
On Dec 22, 2009, at 12:11 AM, Roger Gonzalez wrote: > mike amundsen wrote: >> you build clients based on the media type. > Any given resource may have multiple representations that have > exactly the same media type; for example, an image resource may have > an image/png representing the full content as well as an image/png > representing a thumbnail. Content negotiation based only on media > type isn't sufficient. > These representations should be available at different resources because they are different things. I'd use: /foo/images/6676 /foo/images/6676?view=thumbnail Then conneg works fine on both. Jan > -rg > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
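Jan's suggestion, giving the full image and the thumbnail distinct URIs so that content negotiation works per resource, amounts to a tiny mapping from request URI to resource identity. The paths and the `view` query parameter below are just the ones from his example, not a real API:

```python
# Sketch: distinct resources for the full image and the thumbnail, so
# conneg (e.g. Accept: image/png vs. image/jpeg) applies to each one.

from urllib.parse import parse_qs, urlsplit

def resolve_image_resource(uri):
    """Map a request URI to a (path, view) resource identity."""
    parts = urlsplit(uri)
    view = parse_qs(parts.query).get("view", ["full"])[0]
    return (parts.path, view)
```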
"Eric J. Bowman" wrote: > > So my client extends Atom Protocol by self-describing the unspecified > behavior of DELETE on a collection, in two different user-selectable > ways, using hypertext to drive application state and avoiding Atom > Protocol's REST mismatch on DELETE for both collections and member > resources. Client and server are now decoupled, and may evolve > independently. > My system also extends Atom Protocol through the use of PATCH. The system is a basic weblog, with multiple authors, plus registered and unregistered users. Role-based security is implemented (using HTTP- Digest) per HTTP method: Authors may POST new articles and PUT edits to their own articles. Registered users may POST new comments and, for a limited time, PUT edits to their own comments. Unregistered users may POST comments. Only Administrator-authors may DELETE anything. What I want, is for authors and registered users to be able to change the tags associated with an article. If I follow Atom Protocol and do this with PUT, then I'm breaking my security model by allowing any author or registered user to potentially edit the article, unless I add a whole lot of code to the server. This would also require much more bandwidth than necessary, since the client only wants to update the <category/> tags. So I define application/atomcat+xml as one possible delta format for application/atom+xml representations, and implement PATCH. Now, my security model remains intact, as I have a new method to secure, which only allows the <category/> tags to be changed. This will flesh out to where the server returns 202 Accepted with a message indicating that the server keeps track of all submitted PATCH entities associated with the article, and calculates the Top 5 tags, which then become the actual tags for an article. So a PATCH request may or may not honor the user's intention, and is not limited to only suggesting five tags. 
Without all that, a PATCH adds new tags by virtue of their presence, while removing old tags by virtue of their absence, in the submitted application/atomcat+xml document. Sure, this implementation is architecturally sound, but I have to put up my asterisk stating that this portion of my API is not standardized, and is therefore not REST. Currently, by virtue of XForms 1.1 allowing any HTTP method to be used, PATCH is only defined for application/xhtml+xml. Its use is not defined by the media types I'm using, as application/atomcat+xml doesn't define itself as a possible delta format (even though it can be), and application/atom+xml only defines GET, PUT, POST and DELETE (and HEAD, as a given any time GET is allowed). So how do I change my Implementation to make it REST? Well, I don't. The problem lies with my Model. However, the (re)standardization of PATCH recently makes it inevitable that Atom Protocol will eventually be revised -- PATCH didn't make the cut only because it wasn't "in" HTTP (even though it was). Now that that's been cleared up, there's no reason to avoid PATCH in Atom Protocol (not suggesting this will happen any time soon, though). When the time comes to add PATCH to Atom Protocol, it will be possible to base that standardization on existing implementations. If my use makes the cut, I can change my Model to follow the new standard, instead of going off the reservation. My Implementation wouldn't need to be changed, but it would become RESTful. Until then, it's proprietary, even if it's open-source, from the REST perspective. -Eric
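The tag-replacement semantics described above (tags added by virtue of presence, removed by virtue of absence in the submitted application/atomcat+xml document) reduce to extracting the term set from an Atom Category Document. A rough sketch, leaving out the server's Top 5 calculation; the Category Document format itself is defined by RFC 5023:

```python
# Sketch: the new tag set an application/atomcat+xml PATCH body asks
# for is just the set of category terms the document contains.

import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def tags_from_category_patch(patch_xml):
    """Return the set of category terms in a Category Document."""
    root = ET.fromstring(patch_xml)
    return {cat.get("term") for cat in root.iter(ATOM_NS + "category")}
```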
"Eric J. Bowman" wrote: > > "swschilke" wrote: > > > > I wonder which was the "first" paper on CRUD and the first paper > > which brought CRUD and REST togehter? The dissertation of Dr. > > Fielding maybe? > > > > The thesis makes no mention of CRUD, and Roy is on record stating that > REST has nothing to do with CRUD. > But this doesn't mean you can't RESTfully map CRUD to HTTP. Atom Protocol is basically a CRUD implementation. My extension to it, here: http://tech.groups.yahoo.com/group/rest-discuss/message/14316 Basically implements CRUUD, where the first 'U' means whole update via replacement, while the second 'U' means partial update via diff. -Eric
On Dec 21, 2009, at 4:07 PM, Eric J. Bowman wrote: > Sure, this implementation is architecturally sound, but I have to put > up my asterisk stating that this portion of my API is not standardized, > and is therefore not REST. Currently, by virtue of XForms 1.1 allowing I must say that this is an extreme interpretation. You are implying that any hint of non-standardness makes an app unRESTful. Not even the underlying standards of the web require such strict adherence. Besides being questionable, such an interpretation is not very useful. What is the end goal here? Is it striving to ensure that an app meets this interpretation, or to deliver something of value to the stakeholders? If providing value to the stakeholders requires use of *everything* standard, then that is what should guide an implementation. Subbu
See below... On Mon, Dec 21, 2009 at 3:16 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Dec 21, 2009, at 11:47 PM, Noah Campbell wrote: > > >> >> "This would lead to "If you are going to adopt REST with all the benefits >> do it all the way through and believe that the business level harm >> occasionally done by evolution costs far less than running a SOAP >> architecture in the long run." >> >> Basically, no. If your notion is that SOAP somehow solved the problems that >> you've identified as an issue in REST, then I'm curious how you overcame >> SOAP's shortcomings. If anything, SOAP is more rigid and this leads to >> increased cost in the face of change. This has nothing to do with SOAP the >> architecture, but more SOAP the implementation. WSDL has done more harm >> than good, IMO. I've seen POX work really well, but again it's a different >> architecture than REST. >> >> I'm curious how SLA enforcement is achieved with a SOAP architecture? >> > > > That is simple with SOAP because the artifact to enforce is the API. WSDL being the API artifact here? > The SLA would be around the API lifecycle (e.g. once an API is out, it has > to persist for three years). Not saying that the API is a sufficient means > to guarantee stability, but it expresses a fixed contract. WS-* simply > excludes evolution without explicit versioning (d'oh, hope that is really > correct; not an expert there). The SLA would only be violated if an existing > API would go away. The evolution issue is done away with by tightly coupling > the components. > The evolution is done away with to the point that it's very difficult to change anything in practice. It's my opinion that tight coupling is actually a risk/liability, but I digress. Versioning is one means of controlling evolution, and RESTful architecture supports it. There are numerous ways to achieve a transition from one version to the next. 
SOAP has options as well, but it can't take advantage of intermediaries to aid in the transition. For example, a RESTful architecture based on HTTP can use an HTTP load balancer to direct traffic to another version (via 301/307 or through a pass-through proxy) because it can take advantage of the URLs for uniquely identifying a resource. SOAP isn't so lucky, since it tunnels through one URL, i.e. /context/service, and the proxy would have to inspect the payload to know where to route it (cue the ESB vendor sales pitch here). SOAP may present the appearance of an artifact to establish an SLA, but it may be a false sense of stability. I've also seen RESTful systems that include configuration as the first transition in the client state. The first response to a URL is a document (XHTML, Atom, XML) that has relationships a client becomes tightly coupled to. A rel tag with "apiv2" and a link to the v2 version of the service. The server cannot retire the old version until all clients evolve to the new one. The client can start to evolve when a new version is made available (they can be done in parallel, but this is an optimization) and the client code is rolled out. v2 and v3 of the site can be running side by side if necessary. > (AFAIK anyway) > > Jan
On Mon, 21 Dec 2009 11:53:59 -0500
Tim Williams wrote:
>
> My original contention was that 'calling DELETE' on some resource
> (URI) provided by the server, isn't 'going rogue' or violating the
> uniform interface even if it's not in the representation. It may be
> met with a 405, but since "DELETE" is a part of the uniform interface
> between components in the system, I don't see how using it might be
> considered a violation of it.
>
Careful -- DELETE is a protocol-independent generic-interface method,
the HTTP implementation of which doesn't automatically result in a
uniform REST interface, as with most methods. My XForms Atom Protocol
client, discussed here:
http://tech.groups.yahoo.com/group/rest-discuss/message/14260
describes a uniform REST interface, and is in fact the only Atom
Protocol implementation I've seen that doesn't break the hypertext
constraint. However, the addition of PATCH:
http://tech.groups.yahoo.com/group/rest-discuss/message/14316
degrades my API to being a generic HTTP interface, as much as I may
wish to call it REST. Oh, it's architecturally sound and all, but the
REST style requires that standard media types be used for applying
method semantics, so this is clearly not the REST style, even though
PATCH is now officially part of the generic-interface-method club.
Let's take a closer look at PUT. HTTP (and FTP) assigns two different
semantics to PUT -- creation and replacement. But, a REST API must
maintain consistent method semantics across all resources. Practically
speaking, the uniform interface constraint means your REST API must
limit PUT to one use or the other based on media type.
(Media types don't define method semantics {except in the case of a
media type which introduces a new method}, they describe the applied
semantics of existing methods. This is why a custom media type cannot
apply partial-update semantics to PUT -- that would be redefining
method semantics rather than applying them.)
For example, Atom Protocol constrains PUT to replacement semantics,
while constraining POST to creation semantics. Without changing the
semantics of the PUT or POST method, the application/atom+xml media
type describes the applied semantics of those HTTP methods, with the
goal of creating a uniform REST interface. Any intermediary looking at
PUT or POST requests with the Atom media type knows the specific
semantics of the request, which cannot be known just by looking at the
protocol's generic method semantics.
Let's say you've implemented a standard Atom Protocol system, but now
you want to allow PUT to be used with a user-supplied URI to add items
into a collection without creating Atom media entries for them.
First of all, as with all use of PUT for file-upload applications, and
as Roy has pointed out, the hypertext constraint is broken. Second, by
assigning a second semantic (create) to PUT for all resources _but_
application/atom+xml (replace), you have degraded your interface to the
status of generic HTTP. A uniform REST interface requires that method
semantics be identical across all resources in the system, they MUST
NOT vary by media type or resource "type".
REST requires self-descriptive messaging. This means it's the
combination of URI (includes protocol), request method, and *media
type* that determines the interface semantics. Without including a
media type in a PUT request, how is an HTTP intermediary to determine
whether the semantics are creation or replacement? That isn't very
self-descriptive. Whereas, if the media type is application/atom+xml,
it's clear to the entire world that the interaction semantics are
replacement, not creation, as specified by the media type.
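An intermediary applying the reasoning above might look up PUT semantics by media type; when the lookup fails, the message is not self-descriptive enough to tell creation from replacement. The registry below holds only the Atom case from the text; everything else is hypothetical:

```python
# Sketch: method semantics applied per media type. Atom Protocol pins
# PUT to replacement; an unknown media type tells an intermediary
# nothing about which semantic applies.

PUT_SEMANTICS = {
    "application/atom+xml": "replace",  # Atom Protocol: PUT replaces a member
}

def put_semantics(content_type):
    """Return 'replace', 'create', or None if indeterminate."""
    return PUT_SEMANTICS.get(content_type)
```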
This brings us back to DELETE, which, believe it or not, also has two
different semantics in HTTP. WebDAV extends DELETE to include a
'Depth' header, in the absence of which all members of a collection are
deleted along with the collection. Otherwise, HTTP DELETE is meant to
remove a single resource. There are two paradigms at cross-purposes
here: one is Web as Filesystem, the other is Web As It Exists.
Since a Web collection may be a transitory thing, say the top-5 most-
commented posts on a weblog, the elimination of all members if it's
deleted to make way for a top-10 list would be undesirable. Or, the
resource may literally map to a filesystem, in which case all members
must be deleted before a collection may be removed. Either way is
supported in a generic interface. Only one way or the other is allowed
(per protocol) in a uniform interface.
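The two collection-DELETE semantics can be contrasted in a toy model, where a store maps collection names to member sets. The flag name is illustrative; WebDAV's actual mechanism is the 'Depth' header mentioned above:

```python
# Sketch: WebDAV-style DELETE removes the members along with the
# collection; plain single-resource DELETE removes only the collection
# (e.g. a transitory top-5 feed), leaving its members intact.

def delete_collection(store, name, webdav_semantics=False):
    """Delete a collection; return the set of members also deleted."""
    members = store.pop(name, set())
    if webdav_semantics:
        return members    # members go with the collection
    return set()          # members survive as resources; only the
                          # collection feed itself is gone
```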
Atom Protocol skipped the debate entirely, by deciding to allow DELETE
on collections without defining the semantics. Feature, or bug? You
decide... The Atom media type could have opted to define WebDAV
semantics to a collection DELETE, in which case transitory Atom
Protocol collections wouldn't be practical, or gone the other way and
defined it as only removing the collection feed, in which case the
folks wanting to do bulk-delete with one request would be unhappy.
So yes, a REST API must rely on media types to determine the semantics
of protocol methods. Using HTTP DELETE on a resource represented only
as text/html isn't RESTful now, but it may become so once HTML 5 has
added (hopefully at least) PUT and DELETE into the text/html realm, at
which point they can be hypertext-driven. The other solution is to use
FTP DELETE, since that protocol doesn't care about media type and won't
allow collection-delete. But this only works if you're following the
filesystem paradigm and don't care about the hypertext constraint.
You can't use FTP DELETE to remove a negotiated resource, since it
isn't really there on the filesystem, so you'd have to use HTTP DELETE
-- but which semantic? Remove just the negotiated resource, or remove
all variants first? Or use MDELETE from WebDAV or FTP? While there
are multiple methods and semantics from multiple protocols to achieve
deletion, the REST style isn't free-form, your design choices must be
encompassed within standard media types.
The XForms solution allows any choice to be hypertext-driven, so
creating a system using the standard application/xhtml+xml media type
is my first choice, transformed from some other variant, and presented
at a negotiated URL. The media type allows any DELETE functionality to
be defined in-band for the entire resource, or targeted at some other
resource, using the hypertext constraint while preserving the uniform
REST interface (if implemented properly, of course).
If an Atom entry is one variant in a negotiated resource, and the other
is HTML, the resource may be deleted outright because the Atom media
type specifically allows DELETE to remove the member resource (even
without hypertext), making it only logical to also DELETE the derived
HTML variant. Sorry to complicate things by pointing out that you can't
just DELETE willy-nilly in REST, but...
-Eric
Subbu Allamaraju wrote: > > On Dec 21, 2009, at 4:07 PM, Eric J. Bowman wrote: > > > Sure, this implementation is architecturally sound, but I have to > > put up my asterisk stating that this portion of my API is not > > standardized, and is therefore not REST. Currently, by virtue of > > Xforms 1.1 allowing > I also just wrote this: " Degrades my API to being a generic HTTP interface, as much as I may wish to call it REST. Oh, it's architecturally sound and all, but the REST style requires that standard media types be used for applying method semantics, so this is clearly not the REST style, even though PATCH is now officially part of the generic-interface-method club. " > > I must say that this is an extreme interpretation. You are implying > that any hint of non-standardness makes an app unRESTful. Not even > the underlying standards of the web require such strict adherence. > We aren't talking about underlying standards, we're talking about an architectural style that is based upon the use of an evolving set of standard methods, media types and link relations. I'm drawing a very clear line of distinction between my 100% REST Atom Protocol system, and its 0% REST tagging feature. Standard Atom Protocol clients won't see anything amiss, and will interoperate with the system to the best of their abilities, but cannot see the system as a whole. To participate in the tagging activity requires the use of a nonstandard client. I am harsher with my own work than I am with the work of others; I don't see my overall API as RESTful because of the tagging feature, but if someone else were to have implemented it, I wouldn't bother bringing it up. The key thing in REST is to optimize the hell out of GET. If some non-REST feature isn't having any effect on GET performance, then it doesn't really matter that much to the style (although it may be critical to the goals of the system). 
PATCH traffic is insignificant compared to GET traffic, so a nonstandard PATCH-based feature can safely be suboptimal. > > Besides being questionable, such an interpretation is not very > useful. What is the end goal here? Striving to ensure that an app > meets this interpretation, or is it to deliver something of value to > the stakeholders? If providing value to the stakeholders requires use > of *everything* standard, then that is what should guide an > implementation. > (Hypothetically speaking:) The end goal here is what differentiates the system in a crowded field, the tagging feature. The Domain Owner wants a top-flight weblog and believes that this feature will help attract Authors (by socializing the chore of tagging), Members (folks will only sign up if there's something in it for them, like a new toy to play with), and Nonmember Visitors (due to the quality of the content provided by a community of regulars commenting on well-written articles, drawn in by differentiating features like social tagging). The Internal Developers want something that any moron who can read a spec can maintain, that provides a fundamentally sound platform on which differentiating features may be created and modified in response to user feedback. They also understand that wide adoption by External Developers of the protocol underlying the tagging feature is crucial to the success of that feature. If a proprietary client is required to use it, then it won't get very far. The Internal Developers approach the Domain Owner and sell him on the notion of a REST architecture based around Atom Protocol, due to REST's scalability, efficiency, maintainability and serendipitous re-use, which is brought about primarily through the decoupling of client from server provided by standard methods, media types and link relations... 
Aren't the Internal Developers morally and ethically obligated at this point to inform the Domain Owner that the tagging feature is based on a proprietary PATCH protocol, initially only available via an XForms interface to users who download and install the necessary browser + extension, unless External Developers create custom clients for it? That it meets none of the goals of REST and therefore is a mismatch with the style? Isn't the success of this feature critical to the project? In order for the overall goals of the system to be met, the PATCH extension to Atom Protocol must be standardized. Only once it's available as part of the standard libraries will this system meet its goals, which happen to overlap the goals of a REST architecture. Until then, any client of the tagging feature is coupled to a single server implementation. In order to succeed, the PATCH protocol must gain acceptance by being implemented in other systems (even if the feature is different, say auto-tagging instead of social tagging). Only then will it succeed in attracting the developers of existing Atom Protocol clients and libraries. Only then will it be a candidate standard. Only then will client be decoupled from server, through the shared understanding of the evolution of an existing standard media type to encompass a new method. It is absolutely essential that the non-RESTful nature of the PATCH protocol be recognized before the implementation is even considered. The Domain Owner is footing the bill, and must be able to make informed decisions. In this case, a decision to team with the Internal and External Developers to create an open proposed standard to include this protocol operation under the application/atom+xml umbrella would be required for the project to succeed. Otherwise, the nonstandard nature of the key differentiating feature will be the project's Achilles' Heel. 
Another possible decision would be to modify existing open-source Atom Protocol client code for the new protocol extension, offering pre-compiled clients for different OSs on the website (contributing the code back, of course). The worst decision would be to ignore the REST mismatch and move on with a wholly proprietary API that breaks REST's uniform interface. Like the early attempts at manned flight -- it might take off, but it will never fly. There are ways to overcome the limitations of nonstandard implementations. But let's not pretend that not using standard media types is somehow stylistically compatible with REST, when the benefits of that style come from the decoupling of client and server that's only possible _with_ standard media types. Media types that exist for the purpose of interacting with a single implementation are fundamentally at odds with the REST style. My rule of thumb for creating media types remains: Don't! -Eric
Any standardization effort would, in addition to defining the behavior of application/atomcat+xml as a diff format, need to define a general application/atomdiff+xml media type for general patching. Consider updating the atom:copyright of every Atom document in a system. The existing protocol interaction is GET-PUT, whereas this change could be made using a much lighter-weight HEAD-PATCH interaction. The new media type would need to be extensible in the same way as Atom, while limiting its scope by treating the contents of many elements as CDATA. The point being, it's hard enough work to extend an existing media type, let alone create a new one. You may wind up having to develop something else in order to make a proposal, like I would have to do by coming up with application/atomdiff+xml before I could get my desired media type of application/atomcat+xml standardized as a diff format to PATCH Atom content. Forging ahead without a standardization effort is so much easier that it constitutes an architectural cop-out in REST. The Internet and the Web are wide-open due to standardization winning out over proprietariness. The REST style is not only derived from the standardization process, it was also used to guide that process, and it is meant to foster the ongoing wide-open nature of the Internet and the Web by promoting the use of standard methods, media types, and link relations over proprietary this-system-only designs -- since that's what allowed the Web to flourish in the first place. The crucible of the Web made the REST style what it is, because that's what worked. -Eric
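[Editorial aside: the GET-PUT versus HEAD-PATCH trade-off described above can be made concrete with a toy sketch. The "documents" and the wire are simulated with Python dicts; no real HTTP, media types, or atom:copyright handling is involved, so all names here are illustrative only.]

```python
# Simulated store of Atom documents: a large body plus a small copyright field.
docs = {f"/feed/{i}.atom": {"copyright": "2008", "body": "x" * 1000}
        for i in range(3)}

def get_put_update(path, new_copyright):
    """Classic GET-PUT: fetch the whole representation, modify, send it all back."""
    doc = dict(docs[path])                       # GET: full representation comes down
    bytes_down = len(doc["body"]) + len(doc["copyright"])
    doc["copyright"] = new_copyright
    docs[path] = doc                             # PUT: full representation goes back up
    bytes_up = len(doc["body"]) + len(doc["copyright"])
    return bytes_down + bytes_up

def head_patch_update(path, new_copyright):
    """HEAD-PATCH: confirm the resource exists, then send only the diff."""
    assert path in docs                          # HEAD: metadata only, no body
    diff = {"copyright": new_copyright}          # PATCH: just the changed field
    docs[path].update(diff)
    return len(new_copyright)

full = sum(get_put_update(p, "2009") for p in docs)
light = sum(head_patch_update(p, "2010") for p in docs)
print(full, light)  # the PATCH interaction moves far fewer bytes per document
```

The point of the sketch is only the byte-count asymmetry: a standardized diff media type would let the small second path be used interoperably.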
On Dec 22, 2009, at 1:38 AM, Subbu Allamaraju wrote: > > On Dec 21, 2009, at 4:07 PM, Eric J. Bowman wrote: > > > Sure, this implementation is architecturally sound, but I have to put > > up my asterisk stating that this portion of my API is not standardized, > > and is therefore not REST. Currently, by virtue of Xforms 1.1 allowing > > I must say that this is an extreme interpretation. You are implying that any hint of non-standardness makes an app unRESTful. Not even the underlying standards of the web require such strict adherence. I second that. Quoting from one of Roy's posts [1]: "I should also note that the above is not yet fully RESTful, at least how I use the term. All I have done is described the service interfaces, which is no more than any RPC. In order to make it RESTful, I would need to add hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful ways. I could even go further and define these relationships as a standard, much like Atom has standardized a normal set of HTTP relationships with expected semantics, but I have bigger fish to fry right now." Seems to me that even Roy believes standardization is a desired, but not mandatory, property of RESTful systems. Stefan [1] http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons > > Besides being questionable, such an interpretation is not very useful. What is the end goal here? Striving to ensure that an app meet this interpretation, or is it to deliver something of value to the stakeholders? If providing value to the stakeholders requires use of *everything* standard, then that is what should guide an implementation. > > Subbu >
On Dec 22, 2009, at 3:15 AM, Noah Campbell wrote: > See below... > > On Mon, Dec 21, 2009 at 3:16 PM, Jan Algermissen <algermissen1971@... > > wrote: > > On Dec 21, 2009, at 11:47 PM, Noah Campbell wrote: > > > > "This would lead to "If you are going to adopt REST with all the > benefits do it all the way through and believe that the business > level harm occasionally done by evolution costs far less than > running a SOAP architecture in the long run." > > Basically, no. If your notion is that SOAP somehow solved the problems > that you've identified as issues in REST, then I'm curious how > you overcame SOAP's shortcomings. If anything, SOAP is more rigid > and this leads to increased cost in the face of change. This has > nothing to do with SOAP the architecture, but more SOAP the > implementation. WSDL has done more harm than good, IMO. I've seen > POX work really well, but again it's a different architecture than > REST. > > I'm curious how SLA enforcement is achieved with a SOAP architecture? > > > That is simple with SOAP because the artifact to enforce is the API. > > WSDL being the API artifact here? Yes, (at least that is what I think). > > The SLA would be around the API lifecycle (e.g. once an API is out, > it has to persist for three years). Not saying that the API is a > sufficient means to guarantee stability but it expresses a fixed > contract. WS-* simply excludes evolution without explicit versioning > (hope that is really correct; not an expert there). The SLA > would only be violated if an existing API would go away. The > evolution issue is done away with by tightly coupling the components. > > The evolution is done away with to the point that it's very > difficult to change anything in practice. It's my opinion that > tight coupling is actually a risk/liability, Yes, of course. > but I digress. > > Versioning is one means of controlling evolution and RESTful > architecture supports it. 
There are numerous ways to achieve a > transition from one version to the next. SOAP has options as well, > but it can't take advantage of intermediaries to aid in the > transition. > > For example, a RESTful architecture based on HTTP can use an HTTP > load balancer to direct traffic to another version (via 301/307 or > through a pass-through proxy) because it can take advantage of the > URLs for uniquely identifying a resource. SOAP isn't so lucky since > it tunnels through one URL, i.e. /context/service, and the proxy > would have to inspect the payload to know where to route it (cue the > ESB vendor sales pitch here). SOAP may present the appearance of an > artifact to establish an SLA, but it may be a false sense of stability. Yes. > > > > I've also seen RESTful systems that include configuration as the > first transition in the client state. The first response to a url > is a document (xhtml, atom, xml) that has relationships a client > becomes tightly coupled to. A rel tag with "apiv2" and a link to > the v2 version of the service. The server cannot retire until all > clients evolve to a new version. The client can start to evolve > when a new version is made available (they can be done in parallel, > but this is an optimization) and the client code is rolled out. v2 > and v3 of the site can be running side by side if necessary. Yes. Jan > > > (AFAIK anyway) > > Jan > > > > -Noah > > On Mon, Dec 21, 2009 at 1:18 PM, Jan Algermissen <algermissen1971@... > > wrote: > Noah, > > (see below), > > > On Dec 21, 2009, at 8:19 PM, Noah Campbell wrote: > > "AtomPub for example enables the client *implementor* to assume that > a GET on a collection will return an Atom feed document." > > To your prior point, something is broken, but what? Is it the > architectural style (by asking it on this mailing list it may be > that you think it is)? Is it the transport HTTP? Is it the > specification? Is it the implementor of the server or the client? > Is it something else? 
> > You focus on the assumption being negative and rightly so, but let's > be formal about what an assumption is. You've alluded to an > assumption not met as negative. > > It is usually not that negative on the open Web because the overall > expectations are not that strict; people always plan for any kinds > of changes to happen and REST's advantage here is that the uniform > interface enables the communication (the talking to each other) to > succeed even if there is an error. Instead of everything falling > apart the client user or developer can pick up the clue (e.g. the > 406 body) and follow her nose to fix things. > > But this is a model that is very hard to sell inside the enterprise > because the business level contracts require a certain degree of > certainty (e.g. SLAs). Saying "hey, if business transactions > suddenly stop working, look at the lock file and see what the > service owner suggested as a fix. Nah, this will not happen very > often, just be prepared for it in any case". > > OTOH, it might be the price to pay for the evolvability extreme of > not needing any kind of out-of-band communication between client and > server developer at all. Possibly, if you compare investment in time > and travel resources etc. involved in discussing interfaces of the > SOAP style with the cost of some missing transactions it might even > make a compelling case. (Like airlines that would rather pay customers some > money for overbooked flights than make sure that every passenger > definitely gets a seat. The former just costs less.) > > This would lead to "If you are going to adopt REST with all the > benefits do it all the way through and believe that the business > level harm occasionally done by evolution costs far less than > running a SOAP architecture in the long run." > > > If I had to translate this into code it would look like this: > > fread (buffer, 1, lSize, pFile) > > There is an assumption here given all the variables are initialized > correctly. Do you see it? 
> > The return value is not checked. The read may not have read all the > data in the file in this particular call. Who is the guilty party? > Is it the architecture, POSIX? Is it the specification, http://www.cplusplus.com/reference/clibrary/cstdio/fread/? > Is it the implementation, GNU? Is it the implementor? I'd argue > it's the implementor. C has a long-established history of using > return values to indicate success (as well as failure... and > errno provides a (kludgy?) workaround). > > I would urge an implementer to understand the architectural style, > the specification, the implementation and focus very hard on making > sure assumptions like the above are not scattered throughout the > code. Since REST is about two remote systems communicating, I'd > argue that any client must validate any assumption before > proceeding, including checking the error code. If not, the client > will be brittle, prone to error, and cost more in ongoing > maintenance. Good, robust applications assume nothing. > > Let's assume for a moment the AtomPub spec represents the typical > spec for a service. It assumes a RESTful architectural style using > the HTTP transport. To your point, the service must behave as > specified for any goal to be obtained. Aspects of the HTTP > transport "leak" into the interaction even though they have not been > specified. The spec doesn't call out all the different response > codes and how to handle them; it relies on those familiar with the > HTTP transport to deal with those gracefully. Case in point, if you > do: > > GET / > Accept: application/atomsvc+xml > > and get a > > 307: Temporary Redirect > Location: /svc.atom > > or > > 305: Use Proxy > Location: /proxy/svc.atom > > or > > 401: Unauthorized > www-authenticate: basic > > Is this an error? > > Roy's thesis doesn't explicitly say yes or no. However, the argument > for a uniform interface is that the intermediary can participate > without affecting the remote call. 
I'll extrapolate a little in > that a uniform interface provides a common behavior that permeates > all levels of an architecture, including the implementation. The > testers should not be surprised to see the 3 responses outlined > above and should be able to accommodate them appropriately. > > > Agreed and I see your point. But (sorry :-) I'd expect an HTTP > client connector to be able to follow these redirects or > authenticate on its own without even propagating it to the next > level. Most client connectors do so (depending on config of course). > So, I'd limit what we are talking about to steady states and leave > out the transient ones. > > However, I understand you to say that an AtomPub client > implementation that uses an HTTP client connector must of course > implement all of HTTP. And yes, I agree that the 406 must be handled > correctly. But then? There is no possible recovery from the broken > expectation to receive an Atom feed. > > > > Hopefully this response helps move the discussion forward :) > > Thanks for keeping up with this. I am just sorry that I seem to be > so unable to get this across. > > Jan > > > > > -Noah > > On Mon, Dec 21, 2009 at 4:35 AM, Jan Algermissen <algermissen1971@... > > wrote: > > On Dec 21, 2009, at 1:25 PM, Jorn Wildt wrote: > > > Oh, let's backtrack a bit. You said earlier on: > > > >> In the enterprise people want to develop clients and services in > >> parallel, which rules out client design by inspecting the runtime > >> behavior of a service. > > > > Then I said: you need not expect at runtime, you can have a mock. To > > this you answered: no, you build clients on specs. > > > > What I was trying to say was: if you build clients on specs and RFC > > 5023 (application/atomsvc+xml) is a spec, then what is keeping you > > from building any kind of REST client on similar specs for other > > media types? If both server and client agree on the media type spec > > then both can be built individually and simultaneously. 
> > > No, that is all fine and I agree. I am questioning the RESTfulness of > specs that allow the clients to make assumptions about the hypermedia > they will receive at some point in the interaction. AtomPub for example > enables the client *implementor* to assume that a GET on a collection > will return an Atom feed document. This is equivalent to making an > assumption about the application state the client will be in after the GET to the > collection. > > And I am trying to say that M2M clients (besides passive, server- > driven crawlers) can only be built when such contracts are in place. > > Jan > > > > > > /Jørn -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
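[Editorial aside: the unchecked fread() call discussed in the exchange above has the same shape in any language that exposes short reads. A minimal Python sketch of the missing check, with a trickling reader standing in for a pipe or socket; all names here are hypothetical:]

```python
def read_exact(f, n):
    """Read exactly n bytes, looping until done -- the check the quoted
    fread() call omits. A short read is normal; EOF before n bytes is the error."""
    buf = bytearray()
    while len(buf) < n:
        chunk = f.read(n - len(buf))
        if not chunk:                 # EOF: fewer bytes available than asked for
            raise EOFError(f"wanted {n} bytes, got {len(buf)}")
        buf.extend(chunk)
    return bytes(buf)

class TrickleReader:
    """Simulates a stream that returns short reads, like fread() on a pipe."""
    def __init__(self, data):
        self.data, self.pos = data, 0
    def read(self, n):
        chunk = self.data[self.pos:self.pos + min(n, 3)]  # at most 3 bytes per call
        self.pos += len(chunk)
        return chunk

data = read_exact(TrickleReader(b"hello, world"), 12)
```

The parallel to the thread: a robust client validates the assumption (did I get everything I asked for?) instead of proceeding as if it always holds.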
On Dec 22, 2009, at 8:22 AM, Stefan Tilkov wrote: > On Dec 22, 2009, at 1:38 AM, Subbu Allamaraju wrote: > >> >> On Dec 21, 2009, at 4:07 PM, Eric J. Bowman wrote: >> >>> Sure, this implementation is architecturally sound, but I have to >>> put >>> up my asterisk stating that this portion of my API is not >>> standardized, >>> and is therefore not REST. Currently, by virtue of Xforms 1.1 >>> allowing >> >> I must say that this is an extreme interpretation. You are implying >> that any hint of non-standardness makes an app unRESTful. Not even >> the underlying standards of the web require such strict adherence. > > I second that. +1 Besides - what is a standard anyway? IETF? OASIS? W3C? Google? What matters is that the hypermedia semantics used are properly specified and made available on the Web so clients can "follow their nose". Jan > Quoting from one of Roy's posts [1]: > > "I should also note that the above is not yet fully RESTful, at > least how I use the term. All I have done is described the service > interfaces, which is no more than any RPC. In order to make it > RESTful, I would need to add hypertext to introduce and define the > service, describe how to perform the mapping using forms and/or link > templates, and provide code to combine the visualizations in useful > ways. I could even go further and define these relationships as a > standard, much like Atom has standardized a normal set of HTTP > relationships with expected semantics, but I have bigger fish to fry > right now." > > Seems to me that even Roy believes standardization is a desired, but > not mandatory, property of RESTful systems. > > Stefan > > > [1] http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons > >> >> Besides being questionable, such an interpretation is not very >> useful. What is the end goal here? Striving to ensure that an app >> meet this interpretation, or is it to deliver something of value to >> the stakeholders? 
If providing value to the stakeholders requires >> use of *everything* standard, then that is what should guide an >> implementation. >> >> Subbu >> > > > ------------------------------------ > > Yahoo! Groups Links > > > -------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
Indeed, I wonder how many technologies, or techniques, go into standardization because they are so widely used, and not the opposite. I'm thinking about Dependency Injection in Java: everybody uses it, and I never saw anyone saying that by using it the application would become non-J2EE compatible... And only now is it being standardized by the JCP... 2009/12/22 Jan Algermissen <algermissen1971@...> > > > > On Dec 22, 2009, at 8:22 AM, Stefan Tilkov wrote: > > > On Dec 22, 2009, at 1:38 AM, Subbu Allamaraju wrote: > > > >> > >> On Dec 21, 2009, at 4:07 PM, Eric J. Bowman wrote: > >> > >>> Sure, this implementation is architecturally sound, but I have to > >>> put > >>> up my asterisk stating that this portion of my API is not > >>> standardized, > >>> and is therefore not REST. Currently, by virtue of Xforms 1.1 > >>> allowing > >> > >> I must say that this is an extreme interpretation. You are implying > >> that any hint of non-standardness makes an app unRESTful. Not even > >> the underlying standards of the web require such strict adherence. > > > > I second that. > > +1 > > Besides - what is a standard anyway? IETF? OASIS? W3C? Google? What > matters is that the hypermedia semantics used are properly specified > and made available on the Web so clients can "follow their nose". > > Jan > > > > Quoting from one of Roy's posts [1]: > > > > "I should also note that the above is not yet fully RESTful, at > > least how I use the term. All I have done is described the service > > interfaces, which is no more than any RPC. In order to make it > > RESTful, I would need to add hypertext to introduce and define the > > service, describe how to perform the mapping using forms and/or link > > templates, and provide code to combine the visualizations in useful > > ways. I could even go further and define these relationships as a > > standard, much like Atom has standardized a normal set of HTTP > > relationships with expected semantics, but I have bigger fish to fry > > right now." 
> > > > Seems to me that even Roy believes standardization is a desired, but > > not mandatory, property of RESTful systems. > > > > Stefan > > > > > > [1] http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons > > > >> > >> Besides being questionable, such an interpretation is not very > >> useful. What is the end goal here? Striving to ensure that an app > >> meet this interpretation, or is it to deliver something of value to > >> the stakeholders? If providing value to the stakeholders requires > >> use of *everything* standard, then that is what should guide an > >> implementation. > >> > >> Subbu > >> > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > -------------------------------------- > Jan Algermissen > > Mail: algermissen@... <algermissen%40acm.org> > Blog: http://algermissen.blogspot.com/ > Home: http://www.jalgermissen.com > -------------------------------------- > > >
On 21 Dec 2009, at 04:26, Eric J. Bowman wrote: > My suggestion is to dredge up and dust off ye olde shopping-cart > example. OK. > In brief: Define resources in terms of standard media types and link > relations, saving URI allocation and method selection for the > implementation phase. Nearly right, but I would de-emphasise media types until the last moment. Here is how to go about it. 1. Take a problem that is not client-server specific. I.e., try something that spans domains, that requires distributed cooperation among agents. E.g. Social Networks. Think big, and build simple. 2. Define your models using RDF. Take FOAF as an example: http://xmlns.com/foaf/0.1/ (and publish those models as linked data, so we have recursion) 3. Create Linked Data using those models. Build linked data examples that span across domains. I.e., one resource is defined on my site, the other on yours, and link between them. FOAF is a good example of this. See for example how the data in this file points to data others have on their web sites: curl http://bblfish.net/people/henry/card You can choose one or more media types to do this, with content negotiation. I.e., the same URL can return any number of representations: html, rdf/xml, n3, ... 4. Create browsers of linked data, e.g. the foaf address book https://sommer.dev.java.net/AddressBook.html or web versions of the same http://foaf-visualizer.org/ 5. Add security restfully, e.g. foaf+ssl http://esw.w3.org/topic/foaf+ssl 6. Now you can do shopping, in a RESTful manner, using the GoodRelations ontology for example http://purl.org/goodrelations/ You probably just need to define certain types of resources as being ShoppingCarts, and the actions that one needs to do on those, to make it possible for people to create buying agents.
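[Editorial aside: the cross-domain "follow your nose" traversal in step 3 can be sketched in a few lines. The web of linked documents is faked as a dict of URI -> (data, outbound links); the URIs and names are invented, and a back-link is included to show that a crawler must tolerate cycles:]

```python
# Toy linked-data web spanning two hosts; b.example links back to a.example.
WEB = {
    "http://a.example/card": ({"name": "Alice"}, ["http://b.example/card"]),
    "http://b.example/card": ({"name": "Bob"},   ["http://a.example/card"]),
}

def crawl(start):
    """Follow links from a single starting URI, visiting each resource once."""
    seen, todo, names = set(), [start], []
    while todo:
        uri = todo.pop()
        if uri in seen:
            continue            # cycle protection: already dereferenced
        seen.add(uri)
        data, links = WEB[uri]  # stands in for an HTTP GET + parse
        names.append(data["name"])
        todo.extend(links)
    return names
```

This is the whole "start with one URI, only follow links from there" discipline in miniature: the client hard-codes nothing but the entry point.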
Did I get all options presented so far? > For example, a RESTful architecture based on HTTP can use an HTTP > > load balancer to direct traffic to another version (via 301/307 or > > through pass through proxy) because it can take advantage of the > > URLs for uniquely identifying a resource. > URI-based evolution > The first response to a url > is a document (xhtml, atom, xml) that has relationships a client > becomes tightly coupled to. A rel tag with "apiv2" and a link to > the v2 version of the service. Evolution based on an entry point with versioning configuration. Media-type-change based evolution can be replaced with one (or both) of the previous ones. Any other solutions to keep the old process and a new one running at the same time? Regards guilherme
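[Editorial aside: the two evolution options summarized above can be sketched side by side. Both functions are toys; the paths, rel names, and versions are hypothetical and no real HTTP is involved:]

```python
# (1) URI-based evolution: a front-end permanently redirects retired
#     v1 URIs to their v2 equivalents, so old clients are steered over.
def route(path):
    if path.startswith("/v1/"):
        return 301, {"Location": "/v2/" + path[len("/v1/"):]}
    return 200, {}  # current-version traffic passes straight through

# (2) Entry-point evolution: the client starts at one URI, reads the link
#     relations in the entry document, and follows the newest API it knows.
def pick_api(entry_links):
    links = {l["rel"]: l["href"] for l in entry_links}
    return links.get("apiv2") or links.get("api")

entry = [{"rel": "api",   "href": "/v1/"},
         {"rel": "apiv2", "href": "/v2/"}]
```

A client that only understands rel="api" keeps working against /v1/ until the redirect retires it; an upgraded client jumps to /v2/ immediately.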
Eric, First, a disclaimer... I didn't read your entire message. As with your other threads, your verbosity has overwhelmed the little time I have to participate in this discussion... I apologize, it's interesting stuff otherwise... On Mon, Dec 21, 2009 at 9:36 PM, Eric J. Bowman <eric@bisonsystems.net> wrote: > On Mon, 21 Dec 2009 11:53:59 -0500 > Tim Williams wrote: >> >> My original contention was that 'calling DELETE' on some resource >> (URI) provided by the server, isn't 'going rogue' or violating the >> uniform interface even if it's not in the representation. It may be >> met with a 405, but since "DELETE" is a part of the uniform interface >> between components in the system, I don't see how using it might be >> considered a violation of it. >> > > Careful -- DELETE is a protocol-independent generic-interface method, > the HTTP implementation of which doesn't automatically result in a > uniform REST interface. As with most methods. My Xforms Atom Protocol > client, discussed here: This is strange, I'm personally not after a holy grail of cross-communications protocol interface uniformity. I'm specifically talking about an "HTTP-based implementation of the REST style." In this case, the HTTP spec gets to define the methods and their semantics. Any protocol riding on top of HTTP should only be "filling-out or fixing the details of underspecified bits of standard protocols"[1]. My point was that this interface is defined as the interface between system components - independent of a representation (e.g. that HTML leaves off DELETE doesn't mean it's not a part of the interface). Your problem(s) that I've seen seem to be related to the fact that you've constrained yourself to a protocol that doesn't give you your desired level of resource granularity, such that it results in undefined behavior. --tim
On Mon, Dec 21, 2009 at 2:36 AM, Eric J. Bowman <eric@...> wrote: > "Eric J. Bowman" wrote: >> >> Sorry, not PUT, I was thinking about something else. But there is a >> minor REST mismatch in AtomPub regarding DELETE not being hypertext-driven, an obvious coupling of client to server. But, as a small >> portion of an overall REST system, not enough to claim failure to >> apply the hypertext constraint -- just a nitpick. While Atom Protocol >> doesn't specify the behavior of DELETE on a collection, this >> disclaimer still scopes DELETE to any resource with an Atom >> representation. >> > > Going a bit OT: > > I keep forgetting that I wrote a minimally-featured Atom Protocol > client using Xforms, to address this REST mismatch. An Xforms REST > application follows the MVC architectural style on the client. An > XHTML interface is provided, which takes an Atom collection feed and > displays it as one big Xform allowing individual entries to be added, > edited or removed by directly manipulating the Atom resources, > depending on user role as provided by HTTP-Digest. A form button may > be added to any individual entry, which will call its DELETE method, > meeting the hypertext constraint that eludes other Atom Protocol > implementations. > > Part of the Xform allows the collection to be deleted in one of three > ways: DELETE all members, DELETE the collection but not its members, or > DELETE all members and then DELETE the collection. While having a > collection-targeted DELETE silently remove all member resources of the > collection, then remove the collection resource, has the "Roy stamp of > approval" I do not wish to go that route here. My way is visible, > because batch deletion occurs as separate DELETE requests to each > member resource. It seems to me this isn't a "REST mismatch" so much as a mismatch between your desires and what APP gives you. 
I don't know APP well, but it seems that the real problem is that APP doesn't expose the collection at the level of granularity that you desire. When I read your paragraph above, I see three "resources" (collection, contents, and collection+contents). If you want them to be able to be DELETEd independently, you'll need to craft their exposure individually. I don't know, other than the fact that you're fighting with a protocol on top of HTTP, it doesn't seem a lot different than any other REST-resource problem that might be fixed by changing the resources exposed, even if that means moving away from APP. --tim
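[Editorial aside: the three collection-delete behaviors Eric describes, and Tim's point that they are really three distinct resources/operations, can be modeled in a toy sketch. The store is a pair of dicts; the paths are invented and no HTTP is performed:]

```python
# Toy resource store: one collection listing two member resources.
collections = {"/blog": ["/blog/1", "/blog/2"]}
members = {"/blog/1": "post one", "/blog/2": "post two"}

def delete_members(coll):
    """Option 1: DELETE each member, keep the (now empty) collection.
    Each pop stands in for one visible DELETE request per member resource."""
    for m in collections[coll]:
        members.pop(m)
    collections[coll] = []

def delete_collection(coll):
    """Option 2: DELETE the collection resource only; members survive."""
    collections.pop(coll)

def delete_both(coll):
    """Option 3: members first, then the collection itself."""
    delete_members(coll)
    delete_collection(coll)

delete_both("/blog")
```

Exposing the three behaviors as separate, explicit operations is exactly the "change the granularity of resources exposed" move Tim suggests, instead of overloading one ambiguous DELETE on the collection URI.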
On Mon, Dec 21, 2009 at 7:07 PM, Eric J. Bowman <eric@...> wrote: > "Eric J. Bowman" wrote: >> >> So my client extends Atom Protocol by self-describing the unspecified >> behavior of DELETE on a collection, in two different user-selectable >> ways, using hypertext to drive application state and avoiding Atom >> Protocol's REST mismatch on DELETE for both collections and member >> resources. Client and server are now decoupled, and may evolve >> independently. >> > > My system also extends Atom Protocol through the use of PATCH. The > system is a basic weblog, with multiple authors, plus registered and > unregistered users. Role-based security is implemented (using HTTP-Digest) per HTTP method: Authors may POST new articles and PUT > edits to their own articles. Registered users may POST new comments > and, for a limited time, PUT edits to their own comments. Unregistered > users may POST comments. Only Administrator-authors may DELETE > anything. > > What I want is for authors and registered users to be able to change > the tags associated with an article. If I follow Atom Protocol and do > this with PUT, then I'm breaking my security model by allowing any > author or registered user to potentially edit the article As I said in the other mail, it seems that your problem is created by following APP even when it doesn't give you the desired level of resource granularity. You *could* leave it the way it is and let only authors modify that resource, then expose a completely new resource [tags] which you allow more liberal access to. --tim
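[Editorial aside: Tim's suggestion, splitting tags out as a separate resource so the per-method security model stays intact, amounts to a per-resource, per-method authorization table. A toy Python sketch; the paths, roles, and rules are hypothetical, loosely modeled on the weblog described above:]

```python
# Authorization table keyed by resource path, then HTTP method, mapping to
# the set of roles allowed. Splitting /tags into its own resource lets
# registered users PUT tags without gaining PUT on the article itself.
ACL = {
    "/articles/42":      {"PUT": {"author"}, "DELETE": {"admin"}},
    "/articles/42/tags": {"PUT": {"author", "registered"}},
}

def allowed(role, method, path):
    """True if this role may invoke this method on this resource."""
    return role in ACL.get(path, {}).get(method, set())
```

The security decision stays purely method-plus-resource, so no nonstandard PATCH semantics are needed to grant tag editing more liberally than article editing.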
> So yes, a REST API must rely on media types to determine the semantics > of protocol methods. Using HTTP DELETE on a resource represented only > as text/html isn't RESTful now, but it may become so once HTML 5 has > added (hopefully at least) PUT and DELETE into the text/html realm, at > which point they can be hypertext-driven. I ended up taking time to read your entire message and I think the above paragraph represents our differences. I *think* this is wrong. That 'text/html' doesn't support DELETE is a flaw in a single representation - the "resource" is still DELETEable as I have a *link* and a DELETE method that's a part of the interface. If I have a URI to a resource, the HTTP interface allows me to DELETE it. I'll say again, I think the intent is that the methods of the uniform interface are the interface between *system components* and to be applied to resources independent of a representation. One way to effect behavior changes would be to change the granularity of resources being exposed - as opposed to specifying semantics for a specific representation or resource. I *think* this is what Roy was addressing when he wrote this: "Identifiers, methods, and media types are orthogonal concerns — methods are not given meaning by the media type. Instead, the media type tells the client either what method to use (e.g., anchor implies GET) or how to determine the method to use (e.g., form element says to look in method attribute). The client should already know what the methods mean (they are universal) and how to dereference a URI." --tim [1] - http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-730
This thread just exploded and it's taken until now to catch up. Jan, I don't see any conflict with having an SLA backing up a REST interface. I think that you can make a brittle REST architecture that hits all of the REST bullet points, but inevitably fails to evolve properly. Take for example the "apiv2" rel link: the service authors CHOSE to add it. They did not HAVE to. They COULD have simply changed the media type, and 406'd the old clients. Obviously, "suddenly", all of the old clients fail miserably, and are cut off from the service until they upgrade. No backward compatibility here. As for "evolutionary" software, it's pretty clear that it doesn't evolve. Rather, you have backward compatibility that gives an illusion of evolution. The existing clients aren't changing; the service is simply being friendly by keeping them in mind and not locking them out. I don't see any way that REST differs from SOAP, or any other system, in this regard. As you've observed, compliance and compatibility are hard-coded into the clients and server. If the protocol changes, the clients and servers need to be changed to remain compatible. Versioning and backward compatibility are the key to a robust, evolving infrastructure. I think REST is better for such a system than something like SOAP simply because I think it is easier for a more advanced client to leverage the latest services and APIs, as well as for a server to better maintain compatibility with older clients. Both of these are done through extensible types and conneg. As you get more and more servers and clients on different upgrade cycles, this capability becomes more important. It's easy to see how you might get consumers using services that you, as the provider, particularly in an "open" enterprise, didn't even really "know" were being serviced. 
In the end, through things like typed rels and online documentation, ideally when something goes wrong, payload inspection will direct the people maintaining the consumers toward what they need to change to become compliant again and able to use the new service.

Regards,

Will Hartung (willh@...)
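Will's contrast between 406'ing old clients and keeping older media types alive can be made concrete. A rough server-side sketch of content negotiation with versioned media types; the vendor type names are hypothetical and the Accept parsing is deliberately naive:

```python
# Sketch of content negotiation with versioned media types. If the provider
# drops the old type, old clients get 406; continuing to serve both types
# (or linking to "apiv2") is what preserves backward compatibility.

SERVED = {
    "application/vnd.example.thing+xml":    "v1 representation",
    "application/vnd.example.thing.v2+xml": "v2 representation",
}

def negotiate(accept_header, served=SERVED):
    """Return (status, body) for a very naive exact-match Accept check."""
    for media_type in (t.strip() for t in accept_header.split(",")):
        if media_type in served:
            return 200, served[media_type]
    return 406, None   # no representation acceptable to this client

# An old client keeps working while both types are served:
print(negotiate("application/vnd.example.thing+xml"))

# Now the provider "upgrades" by removing the old type:
v2_only = {"application/vnd.example.thing.v2+xml": "v2 representation"}
print(negotiate("application/vnd.example.thing+xml", v2_only))
```

Real Accept handling also involves q-values and wildcards; the point here is only the compatibility trade-off.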
Why not teach REST from a systems engineering perspective? The properties that define a RESTful architecture are leveraged by tools like HAProxy, Nginx, Squid, Varnish and various other intermediaries. Once you have a good working grasp of how caching, ETags, HTTP methods and response codes impact the entire system, then you can focus on building an app. Being able to produce a service that fits into the ecosystem becomes much more relevant than trying to drag someone to the conclusion without a tangible example.

Just a thought.

-Noah

On Sun, Dec 20, 2009 at 8:26 PM, Eric J. Bowman <eric@...> wrote:

> The problem that's been preoccupying my thoughts during the time I spend experimenting with REST, is how to teach it. I don't think anyone disputes the fact that REST is hard to learn. But why is that? I've convinced myself it's not because the students are morons, but that we, collectively as a community, have failed to teach it properly. The best evidence of that, is the recent thread asking for examples of good REST systems: It's infinitely easier to find REST implementations that aren't, than it is to find good examples (I've seen REST implemented effectively on Intranets where the client is a known quantity) that we can point to.
>
> We don't teach it properly, because we didn't learn it properly ourselves. Besides Roy, who here at any level of REST ability has a background in software architecture? Personally, I think it took me so many years to become comfortable with REST because it was my first experience with software development guided by a defined architectural style. I basically had to teach myself software architecture, but not until well after I started fancying myself a REST developer.
>
> What I'm saying, is that REST must be taught in terms of applied architecture, instead of by example, before there will ever be enough good examples to point to. You can't learn XSLT by reading O'Reilly's "XSLT Cookbook" of examples, yet we try teaching REST by hauling out the good ol' shopping cart every time. This has obviously failed.
>
> I don't think it's necessary for a REST student to understand anything about software architecture (except maybe a few terms), only to follow an approach grounded in software architecture. The wonderful new textbook, "Software Architecture: Foundations, Theory, and Practice" is something that should be read by the community, but not for the purpose of using that textbook to teach REST. The textbook uses REST to illustrate the principles of software architecture, it doesn't teach REST. But it can be used to inform us on how to better teach REST.
>
> The textbook has chapters on Modeling, Visualization, Analysis, Implementation, and Deployment and Mobility. This is the disciplined approach that I keep harping on about, of late.
>
> The Modeling chapter discusses modeling both architectures and architectural styles. It says nothing about modeling specific to REST. Roy's thesis uses modeling to illustrate the REST architectural style. So the first challenge in teaching REST is to teach how to model the components, connectors, resources and interfaces for a proposed system. REST constrains the interaction between connectors, and these constraints must be part of the model.
>
> The Visualization chapter explains the separation of modeling and visualization, but says nothing about visualization within the context of REST. The second challenge in teaching REST using a software-architecture-centric approach, is to use the model as a basis for visualizing a proposed system in terms of the Process, Connector and Data views for REST as described in Roy's thesis.
>
> The Analysis chapter also has nothing REST-specific. It's fairly self-explanatory, though. Modeling, Visualization and Analysis are not a serial approach, but an iterative process. This is the stage where, if the Model calls for the Atom media type, despite the lack of URIs at this point, the documents may be written and validated to flesh out the data model for analysis. How many hardware resources does the model require? Does the model need to be adjusted up/down? The third challenge in teaching REST is, does the model fit the system's goals?
>
> Finally, we get to Implementation, another chapter with nary a peep about REST. (I say finally, because the Deployment chapter covers topics that, frankly, anyone pursuing REST probably has hands-on experience with, so I don't see it as a teaching challenge.) Yes, this is where a URI allocation scheme is finally devised for the modeled, visualized and analyzed resources, and methods implemented so we can pass data over the wire. It is iterative with the previous methods -- selecting off-the-shelf parts may require architectural adjustment due to different design assumptions being made in a standard library.
>
> The textbook defines Implementation as the problem of maintaining a mapping between the developed system and its architectural model, and focuses on frameworks as the solution. It also says, "To imbue [desired properties] in the target system, the implementation _must_ be derived from its architecture." This is the fourth, and most important, challenge in teaching REST. Is the reason so many systems claim to be RESTful, but aren't, because 99% of developers simply don't *know* how to derive an implementation from an architectural style, because they've never been taught? I don't think they need to be taught, only given the tools to understand how a RESTful implementation is derived -- that these tools are derived from the tenets of software architecture may remain hidden behind a generic interface (so to speak).
>
> My suggestion is to dredge up and dust off ye olde shopping-cart example. Why do we insist on presenting it by defining it as what methods to apply to what resources of interest to obtain what response code and data, beginning by defining a URI allocation scheme, when we know that URI allocation schemes have (almost) nothing to do with REST, and Roy has told us that we should be discussing our resources in terms of media types and link relations instead? At some point, it should be presented in terms of Modeling, Visualizing, Analyzing, and Implementing in a REST-specific fashion. I think this may address some of the criticism of REST lacking some sort of formal guidelines.
>
> In brief: Define resources in terms of standard media types and link relations, saving URI allocation and method selection for the implementation phase.
>
> -Eric
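The systems-engineering grounding Noah suggests (caching, ETags, response codes) lends itself to small concrete exercises. For instance, a minimal sketch of the ETag / If-None-Match handshake that intermediaries like Squid and Varnish exploit; header handling is heavily simplified and the hashing scheme is just one illustrative choice:

```python
# Sketch: conditional GET with a strong validator. Hash the representation
# to produce an ETag, and answer 304 Not Modified when the client's cached
# validator still matches, saving the body transfer entirely.

import hashlib

def etag_of(body):
    return '"%s"' % hashlib.sha1(body).hexdigest()

def conditional_get(body, if_none_match):
    """Return (status, body_or_None, etag) for a GET with an optional validator."""
    tag = etag_of(body)
    if if_none_match == tag:
        return 304, None, tag    # client's copy is still fresh; no body sent
    return 200, body, tag        # full response plus validator for next time

# First request: no validator, full 200 response.
status, body, tag = conditional_get(b"<p>hello</p>", None)
# Revalidation with the stored tag: 304, empty body.
status2, body2, _ = conditional_get(b"<p>hello</p>", tag)
print(status, status2)  # 200 304
```

Seeing the 304 turn a full transfer into a handful of header bytes is exactly the kind of tangible example that motivates the architectural constraints.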
Well said. It would be a great disservice to teach REST with limited or no emphasis on how it is all supposed to work on the real-world plumbing. It would be an even greater disservice to teach REST as an all-or-nothing dogma. No more slaps on the wrist, please!

Subbu

On Dec 22, 2009, at 12:44 PM, Noah Campbell wrote:

> Why not teach REST from a systems engineering perspective. The properties that define a RESTful architecture are leveraged by tools like HAProxy, Nginx, Squid, Varnish and various other intermediaries. Once you have a good working grasp on how caching, etags, HTTP methods and response codes impact the entire system, then you can focus on building an app. Being able to produce a service that fits into ecosystem becomes much more relevant then trying to drag someone to the conclusion without a tangible example.
>
> Just a thought.
>
> -Noah
On 22 Dec 2009, at 20:44, Noah Campbell wrote:

> Why not teach REST from a systems engineering perspective. The properties that define a RESTful architecture are leveraged by tools like HAProxy, Nginx, Squid, Varnish and various other intermediaries. Once you have a good working grasp on how caching, etags, HTTP methods and response codes impact the entire system, then you can focus on building an app. Being able to produce a service that fits into ecosystem becomes much more relevant then trying to drag someone to the conclusion without a tangible example.
>
> Just a thought.
>
> -Noah

That's the technology, which of course has to be taught. But unless you go all the way up to the semantic level, you won't understand why the architectural decisions were taken.

Henry
> Besides - what is a standard anyway? IETF? OASIS? W3C? Google? What matters is that the hypermedia semantics used are properly specified and made available on the Web so clients can "follow their nose".

The point I've been trying to make is that the starting point in REST development is not defining application-specific media types intended for a single system. A disciplined approach exhausts the possibilities of re-using existing standards before resorting to creation.

"The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs."

Every REST constraint is defined in terms of benefits and consequences. If the consequences outweigh the benefits, then don't apply the constraint. If the benefits of the constraint are irrelevant to the system, then feel free not to apply the constraint.

-Eric
Stefan Tilkov wrote:
>
> I second that. Quoting from one of Roy's posts [1]:
>
> "I should also note that the above is not yet fully RESTful, at least how I use the term. All I have done is described the service interfaces, which is no more than any RPC. In order to make it RESTful, I would need to add hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful ways. I could even go further and define these relationships as a standard, much like Atom has standardized a normal set of HTTP relationships with expected semantics, but I have bigger fish to fry right now."
>
> Seems to me that even Roy believes standardization is a desired, but not mandatory, property of RESTful systems.

My interpretation of that post is quite different from yours. Roy describes how to implement a sparse-bit array, as a representation of a standard media type like image/gif or image/png. The problem is, there is no hypertext, and the media types don't support methods other than GET. The solution is to wrap these images within Atom, but Roy is hardly going to spend the time working on someone else's problem by fleshing his example out to be RESTful.

I see no support for your statement that Roy doesn't see standardization as mandatory, since the gist of his entire solution is to use standard media types and link relations which encompass the expected semantics of HTTP methods.

-Eric
On Tue, Dec 22, 2009 at 3:11 PM, Will Hartung <willh@...> wrote:

> As for "evolutionary" software, it's pretty clear that it doesn't evolve. Rather you have backward compatibility that gives an illusion of evolution. The existing clients aren't changing, the service is simply being friendly by keeping them in mind and not locking them out.

You're right. Protocols evolve. Software doesn't. Thanks for reminding me of this.

-- Nick
Noah Campbell wrote:
>
> Why not teach REST from a systems engineering perspective. The properties that define a RESTful architecture are leveraged by tools like HAProxy, Nginx, Squid, Varnish and various other intermediaries. Once you have a good working grasp on how caching, etags, HTTP methods and response codes impact the entire system, then you can focus on building an app. Being able to produce a service that fits into ecosystem becomes much more relevant then trying to drag someone to the conclusion without a tangible example.
>
> Just a thought.

This may well be a solution for describing to someone what REST *is*. My hypothesis is that "REST" APIs fail to be RESTful due to a failure in mapping between architectural model and implementation. I'm concerned with those who already think they know they want REST, but need help developing a system. My proposed solution is to teach them how to develop an architectural model, and help them map that model to their implementation. I do not believe that REST, being an architectural style, can be taught by describing implementations.

-Eric
> My interpretation of that post is quite different from yours. Roy That is why it is more interesting and helpful to settle such debates by using real-world pros and cons. Subbu
I agree, it definitely focuses on the technology. Consider it a "practical" introduction to REST, like a hands-on physics lab to get you interested in what physics has to offer.

On Tue, Dec 22, 2009 at 12:58 PM, Story Henry <henry.story@...> wrote:

> On 22 Dec 2009, at 20:44, Noah Campbell wrote:
>
> > Why not teach REST from a systems engineering perspective. The properties that define a RESTful architecture are leveraged by tools like HAProxy, Nginx, Squid, Varnish and various other intermediaries. Once you have a good working grasp on how caching, etags, HTTP methods and response codes impact the entire system, then you can focus on building an app. Being able to produce a service that fits into ecosystem becomes much more relevant then trying to drag someone to the conclusion without a tangible example.
> >
> > Just a thought.
>
> That's the technology, which of course has to be taught. But unless you go all the way up to the semantic level, you won't understand why the architectural decisions were taken.
>
> Henry
>
> > -Noah
Hi Henry,

I'm sorry we didn't get a chance to meet at the recent get-together you posted about here, I had to cancel a planned trip to the Bay Area at the last minute. I have lots of trouble following anything you write, because you think of application development in terms of RDF and I do not. I was going to re-boot this thread to get it back on track; you beat me to the GoodRelations ontology, though. But your approach does help me flesh out my thoughts, and you're right, media type should be left out of the Model, and addressed perhaps in the Analysis phase, where I propose that raw data is modeled as hypertext.

-Eric

Story Henry wrote:
>
> On 21 Dec 2009, at 04:26, Eric J. Bowman wrote:
>
> > My suggestion is to dredge up and dust off ye olde shopping-cart example.
>
> OK.
>
> > In brief: Define resources in terms of standard media types and link relations, saving URI allocation and method selection for the implementation phase.
>
> Nearly right, but I would de-emphasise media types, until the last moment. Here is how to go about it.
>
> 1. Take a problem that is not client-server specific. I.e. try something that spans domains, that requires distributed cooperation among agents, e.g. Social Networks. Think big, and build simple.
>
> 2. Define your models using RDF. Take an example from foaf: http://xmlns.com/foaf/0.1/ (and publish those models as linked data, so we have recursion).
>
> 3. Create Linked Data using those models. Build linked data examples that span across domains, i.e. one resource is defined on my site, the other on yours, and link between them. FOAF is a good example of this. See for example how the data in this file points to data others have on their web sites:
>
>    curl http://bblfish.net/people/henry/card
>
>    You can choose one or more media types to do this, with content negotiation, i.e. the same URL can return any number of representations: html, rdf/xml, n3, ...
>
> 4. Create browsers of linked data, e.g. the foaf address book https://sommer.dev.java.net/AddressBook.html or web versions of the same http://foaf-visualizer.org/
>
> 5. Add security restfully, e.g. foaf+ssl http://esw.w3.org/topic/foaf+ssl
>
> 6. Now you can do shopping, in a RESTful manner, using the GoodRelations ontology for example: http://purl.org/goodrelations/ You probably just need to define certain types of resources as being ShoppingCarts, and the actions one needs to perform on those to make it possible for people to create buying agents.
On Tue, 22 Dec 2009 09:43:12 -0500 Tim Williams wrote:
>
> As I said in the other mail, it seems that your problem is created by following APP even when it doesn't give you the desired level of resource granularity. You *could* leave it the way it is and let only authors modify that resource, then expose a completely new resource [tags] which you allow more liberal access to.

I already have a /tags resource, which all resources have a <link rel='glossary'/> pointing to. It returns a <dl> where each <dt> is a link to the wiki page for the tag under the /tags/ hierarchy. The <dd>s are gleaned from the wiki pages. Authors and Registered Members may create and edit tags, and I could just allow them to PUT new article links on those wiki pages.

Or do as you suggest, or otherwise bend over backwards for the sake of strict adherence to REST. But I've never suggested anyone do such a thing. The KISS solution to my problem is to use another method to implement partial update, REST be damned. But I do this knowing full well the implications, both positive and negative, of my solution -- because I understand it to be a REST mismatch.

-Eric
Hi David,

You may well be the 1,000th person to post exactly the same sentiments to this group, over the years. The basic problem is that any REST-based architectural Model has an infinite number of valid Implementations. Thus there can't be such a thing as a "reference implementation". The closest we come is systems built around Atom Protocol, and of course the GET-and-POST-based HTML Web sites, of which millions are RESTful yet none are sexy enough to really help anyone with the systems they're trying to implement.

My goal here is to figure out a way (not today, by any means) we can all agree on to Model REST architectures. If we can develop a REST architectural Model for a shopping cart, then any number of implementation ideas may be posted to the list and their mappings to the model evaluated in a common lingo. Nobody has to map the entire model, but the shortcomings of such an implementation can be agreed upon in terms of benefits and consequences.

The conversation around here can then change. We can point to shopping-cart implementations in the real world, and evaluate them against our Model. We can then discuss the consequences of a failed mapping, in terms of the goals of the system we're evaluating (e.g. Amazon).

I may be off on a wild goose chase with this, but I think (as your post so painfully reminds all of us) it's obvious that REST has failed on the Web for anything more complex than blogging. We need to stop discussing REST in terms of implementation, and start discussing it in terms of how well implementations map to a REST architectural Model, and the benefits and consequences of the success or failure of such a mapping.
-Eric

David Otaguro wrote:
>
> Maybe I've just not found it, but one of the biggest headaches I've seen in explaining REST is the lack of a generally agreed-on, well-explained reference example demonstrating the RESTful approach for a reasonably complex domain and how it differs/improves on a POXy RPC approach.
>
> The usual examples I've seen are either so trivial as to be effectively useless, or lack a consensus validating that the approach really does embody the core ideals of REST. When all the major published "REST-ish" APIs (e.g. Amazon) end up with the criticism that they're just POXy RPC, or confuse representation with resource, it becomes hard to point to an example and say, "If you emulate the thinking here, you won't be far from wrong."
>
> Again, I could be wrong, maybe there is something out there that people can point to to say, "Here's REST done right for a complicated problem domain". If so, and someone would do me the favor of pointing me to it, I'd appreciate it.
>
> Dave.
David Otaguro wrote:
>
> I think examples are an absolutely necessary part of explaining and teaching architectural styles.
>
> Students need both the abstract definition and concepts underlying the style AND some examples of use in order to see how those concepts manifest in reality. Having just one without the other is where too many professors fail... either they teach a concept and leave it as an exercise to the reader to apply it (usually disastrous), or they show examples without the underlying conceptual framework, and students merely ape the example blindly.
>
> Dave.

See my response here: http://tech.groups.yahoo.com/group/rest-discuss/message/14346

REST development is entirely an exercise in applied architecture. I'm not interested in teaching what REST is; I'm interested in teaching folks how to implement systems from architectural Models. This is where all the pragmatism of using REST lies: implementers don't need to understand the theory of software architecture if someone who already does helps them Model a system. That architect can explain REST in terms of the resulting Implementation, to those who need to build it.

-Eric
Tim Williams wrote:
>
> > Careful -- DELETE is a protocol-independent generic-interface method, the HTTP implementation of which doesn't automatically result in a uniform REST interface. As with most methods. My XForms Atom Protocol client, discussed here:
>
> This is strange, I'm personally not after a holy grail of cross-communications-protocol interface uniformity. I'm specifically talking about an "HTTP-based implementation of the REST style." In this case, the HTTP spec gets to define the methods and their semantics. Any protocol riding on top of HTTP should only be "filling-out or fixing the details of underspecified bits of standard protocols"[1]. My point was that this interface is defined as the interface between system components - independent of a representation (e.g. that HTML leaves off DELETE doesn't mean it's not a part of the interface). Your problem(s) that I've seen seem to be related to the fact that you've constrained yourself to a protocol that doesn't give you your desired level of resource granularity, such that it results in undefined behavior.

I get much more pushback than Roy, even though he's more extreme about it than I am, as in this post:

http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons

Roy implies that defining PUT, POST, PATCH, OPTIONS and DELETE on a resource defined by a representation that's an image/gif amounts to an RPC interface. I would call it a generic HTTP interface. But it definitely isn't a uniform REST interface, unless it's wrapped in some kind of hypertext media type that encompasses the desired methods. It has nothing to do with my given example. PATCH is an underspecified bit of protocol. DELETE's omission from text/html (or image/gif) is not an oversight you can just ignore. While it's always there in the generic interface, in order for the interface to be uniform, its use must be defined by the media type.

-Eric
Tim Williams wrote:
>
> If I have a URI to a resource, the HTTP interface allows me to DELETE it.

HTTP != REST. The generic interface allows your resource to be DELETEd by a variety of protocols, including HTTP.

> I *think* this is what Roy was addressing when he wrote this:
>
> "Identifiers, methods, and media types are orthogonal concerns — methods are not given meaning by the media type. Instead, the media type tells the client either what method to use (e.g., anchor implies GET) or how to determine the method to use (e.g., form element says to look in method attribute). The client should already know what the methods mean (they are universal) and how to dereference a URI."

Atom Protocol doesn't give any meaning to the PUT or POST method. The client only needs to know how to make PUT and POST requests -- it does not need to know that Atom Protocol constrains POST to 'create' and PUT to 'update'; i.e. the client couldn't care less that we're using Atom Protocol. The media type instructs clients to use POST to 'create' and PUT to 'update'. REST requires that hypertext be used to make these instructions to the client explicit, so Atom Protocol has a REST mismatch.

-Eric
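Eric's point -- that the client needs to know how to issue PUT and POST, but not that APP binds them to 'update' and 'create' -- can be sketched as a generic client executing whatever control the representation hands it. The data structures below are hypothetical stand-ins, not real Atom Protocol parsing:

```python
# A generic hypermedia client: it knows how to perform HTTP methods, but the
# *representation* decides which method goes with which transition. Swap in a
# different media type's controls and this client code does not change.

def execute(control, transport):
    """Perform the transition a hypermedia control describes."""
    return transport(control["method"], control["href"], control.get("body"))

def fake_transport(method, href, body):
    # Stand-in for a real HTTP library; just echoes the request line.
    return "%s %s" % (method, href)

# Controls as an APP-like media type might present them to the client:
create = {"method": "POST", "href": "http://example.org/collection", "body": "<entry/>"}
update = {"method": "PUT", "href": "http://example.org/collection/1", "body": "<entry/>"}

print(execute(create, fake_transport))  # POST http://example.org/collection
print(execute(update, fake_transport))  # PUT http://example.org/collection/1
```

Nothing in `execute` encodes "POST means create"; that pairing lives entirely in the controls the representation supplies.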
On Tue, Dec 22, 2009 at 5:52 PM, Eric J. Bowman <eric@...> wrote: > Tim Williams wrote: >> >> If I have a URI >> to a resource, the HTTP interface allows me to DELETE it. >> > > HTTP != REST. The generic interface allows your resource to be DELETEd > by a variety of protocols, including HTTP. Ok, this is back to strange for me, I said in an earlier message I'm specifically talking about an "HTTP-based implementation of the REST style" - that disclaimer is what allows me to define *my* uniform interface in my examples as the HTTP methods. I have personally never felt the need to map my uniform methods to another communications protocol - is anyone really doing that? I gathered that was true for you as well since your examples continue to be APP which is itself HTTP-based. >> >> I *think* this is what Roy was addressing when he wrote this: >> >> "Identifiers, methods, and media types are orthogonal concerns — >> methods are not given meaning by the media type. Instead, the media >> type tells the client either what method to use (e.g., anchor implies >> GET) or how to determine the method to use (e.g., form element says to >> look in method attribute). The client should already know what the >> methods mean (they are universal) and how to dereference a URI." >> > > ... REST requires that hypertext be used to make these > instructions to the client explicit, so Atom Protocol has a REST > mismatch. I've never seen such a requirement and it's not clear how that resolves with Roy's comment below? "HTTP operations are generic: they are allowed or not, per resource, but they are always valid. Hypertext doesn’t usually tell you all the operations allowed on any given resource; it tells you which operation to use for each potential transition." --tim
Assume a university lesson planning system. At the simplest level we have class rooms and courses - both have unique, public, well-known identifiers like room "P160" and course "43S09". We also have lessons describing a combination of a room, a course and a time interval.

Now I want to make "this" week's lessons available as a resource - for instance:

/lessons/thisweek?room=P160&course=43S09

My question is: when both rooms and courses are resources themselves, should we then pass the actual resource URIs to the search? Like this (with proper URL escaping, of course):

/lessons/thisweek?room=http://my.edu/rooms/P160&course=http://my.edu/courses/43S09

It doesn't look good, it fails if resources are moved to new URLs, I don't like it myself, and I haven't seen anybody do it. But somehow it seems cleaner, or more generic, in the sense that *everything* is a resource - even search arguments.

Comments?

Thanks, Jørn
Jørn,

On Dec 23, 2009, at 6:43 AM, Jørn Wildt wrote:

> Assume a university lesson planning system. At the simplest level we
> have class rooms and courses - both have unique, public, well-known
> identifiers like room "P160" and course "43S09". We also have lessons
> describing a combination of a room, a course and a time interval.
>
> Now I want to make "this" week's lessons available as a resource -
> for instance:
>
> /lessons/thisweek?room=P160&course=43S09
>
> My question is: when both rooms and courses are resources themselves,
> should we then pass the actual resource URIs to the search? Like this
> (with proper URL escaping, of course):
>
> /lessons/thisweek?room=http://my.edu/rooms/P160&course=http://my.edu/courses/43S09
>
> It doesn't look good,

I think it is the right thing to do, because the URIs are the identifiers known by the client. The client should, of course, discover the identifiers from server-provided lists and not make them up based on out-of-band URI construction knowledge.

> it fails if resources are moved to new URLs,

The server should not move resources to new URIs (Cool URIs don't change), or it should at least maintain a mapping from old to new - a mapping it would also use for redirects from old to new resources.

Viewed from another angle: if you can even vaguely expect that your URIs will change, they are badly chosen; they should not include a 'key' for the mapped business object that is subject to change. If room numbers or course numbers might be reused in your domain, you should probably choose an artificial key. The URI is opaque to the client anyhow.

> I don't like it myself, and I haven't seen anybody do it.

I have done that, still do it, and it's fine.

> But somehow it seems cleaner, or more generic, in the sense that
> *everything* is a resource - even search arguments.

Yes.

Jan

> Comments?
>
> Thanks, Jørn

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
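Jan's advice - pass full, properly escaped URIs as query values - can be sketched with Python's standard library. The host and paths below are just the ones from Jørn's example:

```python
from urllib.parse import urlencode, parse_qs, urlsplit

# Embed full resource URIs as query-string values; urlencode percent-encodes
# the ':' and '/' characters so the embedded URIs survive intact.
params = {
    "room": "http://my.edu/rooms/P160",
    "course": "http://my.edu/courses/43S09",
}
uri = "http://my.edu/lessons/thisweek?" + urlencode(params)
# http://my.edu/lessons/thisweek?room=http%3A%2F%2Fmy.edu%2Frooms%2FP160&course=http%3A%2F%2Fmy.edu%2Fcourses%2F43S09

# The server can recover the original identifiers unambiguously.
recovered = parse_qs(urlsplit(uri).query)
```

Because the room and course identifiers round-trip as opaque URIs, the lessons resource makes no assumption about how (or where) those URIs are structured.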
Will, excellent analysis.

On Dec 22, 2009, at 9:13 PM, Will Hartung wrote:

> This thread just exploded and it's taken until now to catch up.
>
> Jan, I don't see any conflict with having a SLA backing up a REST
> interface.

Me neither. But it needs to be clear what the SLA'ed contract really is. Take the AtomPub example: RFC 5023 *is* saying that a GET on a collection will return a feed. Is that normative? Or just a hint? If it is just a hint, why is it in the spec at all, and what is its value from the client developer's POV?

If you are the service owner, would you put into the SLA a penalty payment of some serious money if your service stops providing an Atom feed for a GET to a collection? If not, the whole information is meaningless from a contract POV.

> I think that you can make a brittle REST architecture that hits all of
> the REST bullet points, but inevitably fails to evolve properly.
>
> Take for example here, the "apiv2" rel link.
>
> The fact that the service authors CHOSE to add an "apiv2" link. They
> did not HAVE to. They COULD have simply changed the media type, and
> 406'd the old clients.

My issue: in a RESTful system, the service authors would *never* have to make any promise, right?

> Obviously, "suddenly", all of the old clients fail miserably, and are
> cut off from the service until they upgrade. No backward compatibility
> here.

Yes. And if that happens, a legal department demands a basis for sorting out who violated which obligation. They have a hard time accepting legal contracts built on top of "REST style flexibility". OTOH, as I mentioned before, if the potential failure of the clients were officially accepted - because the occasional SLA violation costs less than running a tightly coupled system - then it might make sense to CxOs.
With this approach, RFC 5023 should normatively state that clients can expect Atom feeds to be returned for GETs on collections, and the service owners would just accept that there is a price to pay should the service return a 406 instead.

> As for "evolutionary" software, it's pretty clear that it doesn't
> evolve. Rather, you have backward compatibility that gives an illusion
> of evolution. The existing clients aren't changing; the service is
> simply being friendly by keeping them in mind and not locking them
> out.
>
> I don't see any way that REST differs from SOAP, or any other system,
> in this regard. As you've observed, compliance and compatibility are
> hard-coded into the clients and server. If the protocol changes, the
> clients and servers need to be changed to remain compatible.

My point is that REST differs from SOAP in that this coupling is not being made explicit. In SOAP it is explicit, because there is a WSDL that defines an interface that couples tightly. It is just known that you cannot remove a method from an OO-style API without breaking your clients. For REST, we usually argue that services can freely evolve without breaking clients. Which is wrong.

> Versioning and backward compatibility are the key to a robust, evolving
> infrastructure. I think REST is better for such a system than
> something like SOAP, simply because I think it is easier for a more
> advanced client to leverage the latest services and APIs, as well as
> for a server to better maintain compatibility with older clients.

Yes, definitely.

> Both of these are done through extensible types and conneg. As you
> get more and more servers and clients on different upgrade cycles,
> this capability becomes more important. It's easy to see how you might
> get consumers using services that you, as the provider, particularly
> in an "open" enterprise, didn't even really "know" were being
> serviced.

Yes.
> In the end, though, with things like typed rels and online
> documentation, ideally when something goes wrong payload inspection
> will direct the people maintaining the consumers towards what they
> need to change to become compliant again and able to use the new
> service.

Yes. I really only tried to say that the clients can in fact break, and that it should be understood where and how the contract that causes them to fail is established. IMHO, current specifications that are not only targeted at purely human-driven consumption (e.g. AtomPub or OpenSearch) are not doing a good job in this regard.

(OpenSearch, for example, states nowhere that Atom or RSS are the formats a client should be able to handle. Yet this seems to be some sort of common sense. The OSD FAQ page says something like "OpenSearch is a collection of simple formats for the sharing of search results"[1]. Sure, yes, that is all I need to know for building useful stuff. But would you invest a couple of million dollars into building clients for a service description such as this one? Tomorrow the service could stop sending both Atom and RSS, just use something new, and would not be liable for it in any way.)

Jan

[1] http://www.opensearch.org/Documentation/Frequently_asked_questions

> Regards,
>
> Will Hartung
> (willh@...)

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
2009/12/23 Tim Williams <williamstw@...>: > I have personally never > felt the need to map my uniform methods to another communications > protocol - is anyone really doing that? Off-topic, and irrelevant to this discussion thread, but yes, someone is really doing that...
On Dec 22, 2009, at 10:22 PM, Eric J. Bowman wrote: > Stefan Tilkov wrote: >> >> I second that. Quoting from one of Roy's posts [1]: >> >> "I should also note that the above is not yet fully RESTful, at least >> how I use the term. All I have done is described the service >> interfaces, which is no more than any RPC. In order to make it >> RESTful, I would need to add hypertext to introduce and define the >> service, describe how to perform the mapping using forms and/or link >> templates, and provide code to combine the visualizations in useful >> ways. I could even go further and define these relationships as a >> standard, much like Atom has standardized a normal set of HTTP >> relationships with expected semantics, but I have bigger fish to fry >> right now." >> >> Seems to me that even Roy believes standardization is a desired, but >> not mandatory, property of RESTful systems. >> > > My interpretation of that post is quite different from yours. Roy > describes how to implement a sparse-bit array, as a representation of a > standard media type like image/gif or image/png. The problem is, there > is no hypertext, and the media types don't support methods other than > GET. The solution is to wrap these images within Atom, but Roy is > hardly going to spend the time working on someone else's problem by > fleshing his example out to be RESTful. > > I see no support for your statement that Roy doesn't see > standardization as mandatory, since the gist of his entire solution is > to use standard media types and link relations which encompass the > expected semantics of HTTP methods. I interpreted what he wrote to mean that his solution would be RESTful if he'd added "hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful". Standardizing this would mean "even going further". 
I understand your viewpoint to be that anything not publicly standardized (i.e. a custom link relation, or media type, or verb) is by definition not RESTful. I don't think so, but of course I may be wrong - in my view, you can standardize e.g. within your company or some other domain. Probably we need Roy to provide an authoritative answer. Stefan > > -Eric
On Dec 22, 2009, at 10:46 PM, Subbu Allamaraju wrote: > That is why it is more interesting and helpful to settle such debates by using real-world pros and cons. > > Subbu I hear you, don't care much for word games, and agree that many great things happen to not conform to the REST style. I still think it has merit to discuss whether or not a specific property means something becomes not RESTful, if only to be clear on terminology. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
> I figured the language 'generic' and 'more specific' were meant to
> match up with the conneg language of HTTP?

I interpret this as meaning that application/vnd.acme.type+xml is more specific, as a whole, than application/xml, for the purpose of differentiating which media type we're talking about. I don't believe the +xml itself is the more specific part. Hence my reasoning that application/xml takes lower priority than other media types in my implementation. The other interpretation would mean that application/vnd.acme.type+xml is more specific than application/vnd.acme.type, which I don't believe brings any benefit to the conneg side of things.

I also rely on the fact that the specification explicitly states that this is a "convention", and on the fact that it doesn't redefine any of the RFCs that deal with media type ordering - which it would have had to, in order to change the meaning of "generic" and "specific".

I'd be interested to know whether implementations in the wild give weight to the +xml part or not.

Seb
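Seb's reading can be illustrated with a toy Accept-header parser (nowhere near a full RFC 2616 implementation - q-value and media-range handling are simplified). Note that it treats a +xml suffix as opaque, so vnd.acme.type+xml is simply another concrete subtype:

```python
def parse_accept(header):
    """Return (media_range, q) pairs, most preferred first.

    Ordering: highest q wins; ties broken by specificity, where a concrete
    type/subtype beats type/*, which beats */*.  The +xml suffix plays no
    role: application/vnd.acme.type+xml is just a concrete subtype.
    """
    results = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_range = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                q = float(value)
        if media_range == "*/*":
            specificity = 0
        elif media_range.endswith("/*"):
            specificity = 1
        else:
            specificity = 2
        results.append((media_range, q, specificity))
    results.sort(key=lambda t: (t[1], t[2]), reverse=True)
    return [(mr, q) for mr, q, _ in results]

ordered = parse_accept("application/xml;q=0.5, application/vnd.acme.type+xml")
```

Under this model `ordered` puts the vendor type first, matching Seb's choice of giving application/xml lower priority.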
> I think Jan's point about using Expect headers is a good one. I, for
> one, have never used Expect as a request header on POST or PUT [1] to
> check for server compliance. Anyone have a living example of this?

AFAIK, while it's doable from the client side of things on .NET, IIS takes over the 100 response, making server-side processing impossible. I've yet to find a way to override this behaviour without dropping down into unmanaged code.

Seb
I guess where I fall down is that I see the REST vs. RPC models as paradigmatically different than procedural vs. OO... conceptually, I like to think that I more or less grok what REST is, but I keep running into posts in this list that suggest that I'm still missing very crucial concepts. As such, it's not that I'm looking for a reference implementation per se, but rather just a reference example that I can test my understanding against, which is what I think you're trying to achieve. I agree completely with your approach... if we could get a RESTful shopping cart, then that gives a useful example that takes us from system ontology/domain (shopping cart) to RESTful model. While I think there might be differences about the semantic model for a shopping cart (ask 3 architects to model something and you get 9 answers), if we agree on a notional data model, it seems like the senior people on this list should easily be able to render that into a RESTful architectural style... Dave. On Dec 22, 2009, at 2:17 PM, Eric J. Bowman wrote: > Hi David, > > You may well be the 1,000th person to post exactly the same sentiments > to this group, over the years. The basic problem, is that any REST- > based architectural Model has an infinite number of valid > Implementations. Thus there can't be such a thing as a "reference > implementation". The closest we come is systems built around Atom > Protocol, and of course the GET-and-POST-based HTML Web sites, of which > millions are RESTful yet none are sexy enough to really help anyone > with the systems they're trying to implement. > > My goal here, is to figure out a way (not today, by any means) we can > all agree on to Model REST architectures. If we can develop a REST > architectural Model for a shopping cart, then any number of > implementation ideas may be posted to the list and their mappings to > the model evaluated in a common lingo. 
Nobody has to map the entire > model, but the shortcomings of such an implementation can be agreed > upon in terms of benefits and consequences. > > The conversation around here can then change. We can point to shopping > cart implementations in the real world, and evaluate them against our > Model. We can then discuss the consequences of a failed mapping, in > terms of the goals of the system we're evaluating (e.g. Amazon). I may > be off on a wild goose chase with this, but I think (as your post so > painfully reminds all of us) it's obvious that REST has failed on the > Web for anything more complex than blogging. > > We need to stop discussing REST in terms of implementation, and start > discussing it in terms of how well implementations map to a REST > architectural Model, and the benefits and consequences of the success > or failure of such mapping to be implemented. > > -Eric > > David Otaguro wrote: > > > > Maybe I've just not found it, but one of the biggest headaches I've > > seen in explaining REST is the lack of a generally agreed on, well > > explained reference example demonstrating the RESTful approach for a > > reasonably complex domain and how it differs/improves on a POXy RPC > > approach. > > > > The usual examples I've seen are either so trivial as to be > > effectively useless, or lack a consensus validating that the approach > > really does embody the core ideals of REST. When all the major > > published "REST-ish" APIs (e.g Amazon) end up with the criticism that > > they're just POXy RPC, or confuse representation with resource, it > > becomes hard to point to an example and say, "If you emulate the > > thinking here, you won't be far from wrong." > > > > Again, I could be wrong, maybe there is something out there that > > people can point to to say, "Here's REST done right for a complicated > > problem domain". If so, and someone would do me the favor of > > pointing me to it, I'd appreciate it. > > > > Dave. > > >
Maybe I've just not found it, but one of the biggest headaches I've seen in explaining REST is the lack of a generally agreed on, well explained reference example demonstrating the RESTful approach for a reasonably complex domain and how it differs/improves on a POXy RPC approach. The usual examples I've seen are either so trivial as to be effectively useless, or lack a consensus validating that the approach really does embody the core ideals of REST. When all the major published "REST-ish" APIs (e.g Amazon) end up with the criticism that they're just POXy RPC, or confuse representation with resource, it becomes hard to point to an example and say, "If you emulate the thinking here, you won't be far from wrong." Again, I could be wrong, maybe there is something out there that people can point to to say, "Here's REST done right for a complicated problem domain". If so, and someone would do me the favor of pointing me to it, I'd appreciate it. Dave. On Dec 22, 2009, at 1:22 PM, Eric J. Bowman wrote: > Stefan Tilkov wrote: > > > > I second that. Quoting from one of Roy's posts [1]: > > > > "I should also note that the above is not yet fully RESTful, at least > > how I use the term. All I have done is described the service > > interfaces, which is no more than any RPC. In order to make it > > RESTful, I would need to add hypertext to introduce and define the > > service, describe how to perform the mapping using forms and/or link > > templates, and provide code to combine the visualizations in useful > > ways. I could even go further and define these relationships as a > > standard, much like Atom has standardized a normal set of HTTP > > relationships with expected semantics, but I have bigger fish to fry > > right now." > > > > Seems to me that even Roy believes standardization is a desired, but > > not mandatory, property of RESTful systems. > > > > My interpretation of that post is quite different from yours. 
Roy > describes how to implement a sparse-bit array, as a representation of a > standard media type like image/gif or image/png. The problem is, there > is no hypertext, and the media types don't support methods other than > GET. The solution is to wrap these images within Atom, but Roy is > hardly going to spend the time working on someone else's problem by > fleshing his example out to be RESTful. > > I see no support for your statement that Roy doesn't see > standardization as mandatory, since the gist of his entire solution is > to use standard media types and link relations which encompass the > expected semantics of HTTP methods. > > -Eric >
Eric J. Bowman wrote: > So yes, a REST API must rely on media types to determine the semantics > of protocol methods. Using HTTP DELETE on a resource represented only > as text/html isn't RESTful now, but it may become so once HTML 5 has > added (hopefully at least) PUT and DELETE into the text/html realm, at > which point they can be hypertext-driven. The other solution is to use > FTP DELETE, since that protocol doesn't care about media type and won't > allow collection-delete. But this only works if you're following the > filesystem paradigm and don't care about the hypertext constraint. I think that even with the "hypertext constraint", there needs to be a notion of pure-data leaf nodes in the hypertext tree, and you need to be able to operate on those as well. The way we do that is to have the server provide implicit semantics - standardized methods for resources that don't provide their own semantics. It strikes me as inefficient and unnecessarily revisionist to force every "dumb" media type to be wrapped and manipulated through a hypertext proxy resource. -rg
I think examples are an absolutely necessary part of explaining and teaching architectural styles. Students need both the abstract definition and concepts underlying the style AND some examples of use in order to see how those concepts manifest in reality. Having just one without the other is where too many professors fail... either they teach a concept and leave it as an exercise to the reader to apply it (usually disastrous), or they show examples without the underlying conceptual framework, and students merely ape the example blindly. Dave. On Dec 22, 2009, at 1:41 PM, Eric J. Bowman wrote: > Noah Campbell wrote: > > > > Why not teach REST from a systems engineering perspective. The > > properties that define a RESTful architecture are leveraged by tools > > like HAProxy, Nginx, Squid, Varnish and various other > > intermediaries. Once you have a good working grasp on how caching, > > etags, HTTP methods and response codes impact the entire system, then > > you can focus on building an app. Being able to produce a service > > that fits into ecosystem becomes much more relevant then trying to > > drag someone to the conclusion without a tangible example. > > > > Just a thought. > > > > This may well be a solution for describing to someone what REST *is*. > My hypothesis is that "REST" APIs fail to be RESTful due to a failure > in mapping between architectural model, and implementation. I'm > concerned with those who already think they know they want REST, but > need help developing a system. My proposed solution is to teach them > how to develop an architectural model, and help them map that model to > their implementation. I do not believe that REST can be taught, as it > is an architectural style, by describing implementations. > > -Eric >
I'm building an HTTP document store for a public API, and
I'm trying to determine the most RESTful scheme for uploading a (possibly new) document and associated metadata in one atomic transaction.
It feels to me like I should be doing a PUT, but due to the strict definition in RFC 2616 section 9.6, it feels awkward.
My current implementation uses POST. Let's say I want to write "vacation.jpg", a few properties, and also store an associated blob of EXIF metadata. My current hacky implementation works like this (URIs truncated for brevity):
POST vacation.jpg?title=My%20Vacation
Content-type: multipart/form-data
---
Content-Disposition: form-data; name="document"
Content-type: image/jpg
...
---
Content-Disposition: form-data; name="exif"
Content-type: application/x-exif
...
---
...
201 Created
Content-location: vacation.jpg
Content-type: application/json
{
"document" : "vacation.jpg",
"*" : "vacation.jpg?content=*", // results in a multipart as posted
"basic" : "vacation.jpg?content=basic", // the urlencoded query
"exif" : "vacation.jpg?content=exif" // returns just the exif
}
Each of these metadata URIs supports standard PUT/GET/DELETE. (Side note: I don't love my returned entity, as the "content=" implementation actually supports requesting arbitrary combinations of multiple metadata entities.)
In any case, it all works fine, but my inclination is that I should really provide this via PUT. As it stands, though, "GET vacation.jpg" just returns the main image/jpg document (without metadata). This would appear to violate RFC 2616 section 9.6, which suggests that GET should return the same entity that was PUT, i.e. the multipart document.
Solving this is possible, but ugly. I can change the PUT URI to:
PUT vacation.jpg?content=*
which would have the property that the exact same URL can be used for a GET and it would indeed result in the original multipart entity, and I'd rely on the clause that says "a PUT request on a general URI might result in several other URIs being defined by the origin server", one of which would be the main resource, "vacation.jpg".
Unfortunately, discovering the "vacation.jpg" URI would be out of band with the protocol headers (I wish there were a way to return content headers for each of the multipart entities sent!) and would have to be in the entity returned, as above. So now GET looks user-friendly, but PUT is ugly.
It looks weird, and while I can see this working, I feel that - although I'm
fundamentally doing a PUT - I may be stretching things a bit with the multipart entities, and maybe I should stick with POST after all, given the current spec(s).
Thoughts?
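For reference, the multipart body in the POST example above can be assembled with nothing but the standard library. This is a sketch, not the poster's actual implementation; the boundary string and payload bytes are placeholders:

```python
# Hand-roll a multipart/form-data body with "document" and "exif" parts,
# mirroring the POST example in this thread.

def multipart_body(parts, boundary):
    """parts: iterable of (name, content_type, payload), all bytes."""
    lines = []
    for name, ctype, payload in parts:
        lines.append(b"--" + boundary)
        lines.append(b'Content-Disposition: form-data; name="' + name + b'"')
        lines.append(b"Content-Type: " + ctype)
        lines.append(b"")  # blank line separates part headers from payload
        lines.append(payload)
    lines.append(b"--" + boundary + b"--")
    return b"\r\n".join(lines) + b"\r\n"

boundary = b"f00b4r"  # placeholder; must not occur inside any part
body = multipart_body(
    [(b"document", b"image/jpg", b"<jpeg bytes>"),
     (b"exif", b"application/x-exif", b"<exif bytes>")],
    boundary,
)
headers = {
    "Content-Type": "multipart/form-data; boundary=f00b4r",
    "Content-Length": str(len(body)),
}
```

In practice an HTTP client library would generate the boundary and framing for you; the point here is only that the "one atomic transaction" is a single entity-body with two typed parts.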
On Mon, Dec 21, 2009 at 3:21 PM, Jan Algermissen
<algermissen1971@...> wrote:
>
> On Dec 22, 2009, at 12:11 AM, Roger Gonzalez wrote:
>
>> mike amundsen wrote:
>>>
>>> you build clients based on the media type.
>>
>> Any given resource may have multiple representations that have exactly
>> the same media type; for example, an image resource may have an image/png
>> representing the full content as well as an image/png representing a
>> thumbnail. Content negotiation based only on media type isn't
>> sufficient.
>>
>
> These representations should be available at different resources because
> they are different things. I'd use:
>
>
> /foo/images/6676
> /foo/images/6676?view=thumbnail
>
> Then conneg works fine on both.
>
> Jan
>
>
>> -rg
I refer to RFC2616 Section 12:
* For that reason, HTTP has provisions for several mechanisms for
"content negotiation" -- the process of selecting the best
representation for a given response when there are multiple
representations available.
Note: This is not called "format negotiation" because the
alternate representations may be of the same media type, but use
different capabilities of that type, be in different languages,
etc.*
In other words, I don't think it should be mandatory to dictate a
different URI for each potential representation unless you're using
agent-driven negotiation. For server negotiation,
*an origin server [...] MAY vary the response based on any aspect
of the request, including information outside the request-header
fields or within extension header fields not defined by this
specification.*
For example,
X-MyApp-View: thumbnail
Vary: X-MyApp-View
The issue to me is that a resource should be considered to have a
potentially infinite set of representations that map to a
finite set of media types. If you try to turn it around such that
you're selecting by media type, then you're forced to move the
"infinite" part of the equation to the URI space, which I don't think
is a win.
Except for the following very subtle point: I don't actually think
that your example points to a different resource. RFC2396 says that a
URI with different query string does not represent a different target
resource, it represents the same resource, with the query interpreted
*by* the resource. (I reconcile this in my head as equivalent to
sending an application/x-www-form-urlencoded control message to the
resource.)
So in fact, I do it exactly as you say, except that I interpret it as
a request for the resource at /foo/images/6676 to please return a
representation based on the parameters I'm sending, in this case,
view=thumbnail. So, I'm not actually referencing a different
resource, I'm changing the message I send to the resource, which is
fundamentally the same as my contrived "X-MyApp-View" header version.
You can layer media type negotiation on top of that, but you're not
selecting the thumbnail representation because of the content type,
you're selecting it because of the control message sent. So it's all
good.
Yeesh but that's subtle.
-rg
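Roger's reading - one resource, with the query string interpreted *by* that resource to pick among same-media-type representations - can be sketched as a toy handler. All names and payloads here are invented for illustration:

```python
from urllib.parse import parse_qs

# Hypothetical stored variants for /foo/images/6676; both are image/png.
# (With the header-based variant above, the response would also need
# "Vary: X-MyApp-View" so caches key on that header; with a query string,
# the full URI already differs per variant.)
REPRESENTATIONS = {
    None: b"<full-size png bytes>",
    "thumbnail": b"<thumbnail png bytes>",
}

def get_image(query_string):
    """Toy GET handler: the query is a control message to the resource."""
    view = parse_qs(query_string).get("view", [None])[0]
    body = REPRESENTATIONS[view]
    return 200, {"Content-Type": "image/png"}, body

status, headers, body = get_image("view=thumbnail")
```

Note that media-type conneg can still be layered on top: the thumbnail is selected by the control message, not by the Accept header.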
mike amundsen wrote: > you build clients based on the media type. Any given resource may have multiple representations that have exactly the same media type; for example, an image resource may have an image/png representing the full content as well as an image/png representing a thumbnail. Content negotiation based only on media type isn't sufficient. -rg
I recommend checking out the Atom Publishing Protocol's solution for
handling media and related data [1]. I think the use of the Slug
header [2] is also a great way to deal with the friction between using
POST and PUT to upload resources to the server.
You should also check out the Link Header draft [3] as a way to return
links to related resources. This is a handy solution for adding
hyperlinks to binary responses such as images.
mca
http://amundsen.com/blog/
[1] http://tools.ietf.org/html/rfc5023#section-9.6
[2] http://tools.ietf.org/html/rfc5023#section-9.7
[3] http://tools.ietf.org/html/draft-nottingham-http-link-header-06
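As a sketch of what consuming such a Link header might look like, here is a toy parser. The rel values and URIs are invented (loosely based on the vacation.jpg example in this thread), and a production parser should implement the draft's full grammar - quoted strings may legally contain ',' and ';', which this toy ignores:

```python
import re

def parse_link(header):
    """Toy Link-header parser: returns a {rel: target} dict.

    Assumes no commas or semicolons inside quoted strings, which real
    headers may contain.
    """
    links = {}
    for part in header.split(","):
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

header = '</vacation.jpg?content=exif>; rel="meta", </vacation.jpg>; rel="self"'
links = parse_link(header)
```

This is exactly the kind of machinery that lets a client follow hyperlinks attached to a binary response without any out-of-band URI knowledge.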
On Sat, Dec 19, 2009 at 15:09, spamspambakedbeansandspam
<roger.gonzalez@...> wrote:
> I'm building a HTTP document store for a public API, and
> I'm trying to determine the most RESTful scheme for a uploading a (possibly new) document and associated metadata in one atomic transaction.
>
> It feels to me like I should be doing a PUT, but due to the strict definition in rfc2616 section 9.6, it feels awkward.
>
> My current implementation uses POST. Let's say I want to write "vacation.jpg", a few properties, and also store an associated blob of EXIF metadata. My current hacky implementation works like this (URIs truncated for brevity):
>
> POST vacation.jpg?title=My%20Vacation
> Content-type: multipart/form-data
>
> ---
> Content-Disposition: form-data; name="document"
> Content-type: image/jpg
> ....
> ---
> Content-Disposition: form-data; name="exif"
> Content-type: application/x-exif
>
> ....
>
> ---
> ....
> 201 Created
> Content-location: vacation.jpg
> Content-type: application/json
> {
> "document" : "vacation.jpg",
> "*" : "vacation.jpg?content=*", // results in a multipart as posted
> "basic" : "vacation.jpg?content=basic", // the urlencoded query
> "exif" : "vacation.jpg?content=exif" // returns just the exif
> }
>
> Each of these metadata URIs supports standard PUT/GET/DELETE. (Side note; I don't love my returned entity, as the "content=" implementation actually supports arbitrary requesting combinations of multiple metadata entities.)
>
> In any case, it all works fine, but my inclination is that I should really provide this via PUT. As it stands, though "GET vacation.jpg" just returns the main image/jpg document (without metadata). This would appear to violate rfc2616 section 9.6, which suggests that GET should return the same entity that was PUT, i.e. the multipart document.
>
> Solving this is possible, but ugly. I can change the PUT URI to:
>
> PUT vacation.jpg?content=*
>
> which would have the property that the exact same URL can be used for a GET and it would indeed result in the original multipart entity, and I'd rely on the clause that says "a PUT request on a general URI might result in several other URIs being defined by the origin server", one of which would be the main resource, "vacation.jpg".
>
> Unfortunately, discovering the "vacation.jpg" URI would be out of band with the protocol headers (I wish there were a way to return content headers for each of the multipart entities sent!) and would have to be in the entity returned, as above. So now GET looks user-friendly, but PUT is ugly.
>
> It looks weird and I can see this working, but although I'm
> fundamentally doing a PUT, I feel that I may be stretching things a bit with the multipart entities, and maybe I should stick with POST after all given the current spec(s).
>
> Thoughts?
>
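The two-part upload described in the post above can be sketched with nothing but the Python standard library. The boundary, field names, and payload bytes here are illustrative stand-ins following the example, not the poster's actual implementation:

```python
import io

def build_multipart(parts, boundary="XBOUNDARY"):
    """Build a multipart/form-data body from (name, content_type, payload) tuples."""
    buf = io.BytesIO()
    for name, ctype, payload in parts:
        buf.write(b"--" + boundary.encode() + b"\r\n")
        buf.write(('Content-Disposition: form-data; name="%s"\r\n' % name).encode())
        buf.write(("Content-Type: %s\r\n\r\n" % ctype).encode())
        buf.write(payload + b"\r\n")
    buf.write(b"--" + boundary.encode() + b"--\r\n")
    return buf.getvalue(), "multipart/form-data; boundary=" + boundary

# One part for the document itself, one for the EXIF blob (placeholder bytes).
body, ctype = build_multipart([
    ("document", "image/jpeg", b"...jpeg bytes..."),
    ("exif", "application/x-exif", b"...exif blob..."),
])
```

POSTing `body` with the returned value as the Content-Type header reproduces the document-plus-metadata request shown above.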
On Tue, Dec 22, 2009 at 11:12 PM, Jan Algermissen <algermissen1971@...> wrote:
>> My question is: when both rooms and courses are resources themselves,
>> should we then pass the actual resources to the search? Like this (with
>> proper URL escaping, of course):
>>
>> /lessons/thisweek?room=http://my.edu/rooms/P160&http://my.edu/courses/43S09
>>
>> It doesn't look good,
>
> I think it is the right thing to do, because the URIs are the
> identifiers known by the client. The client should, of course,
> discover the identifiers from server-provided lists and not make them
> up based on out-of-band URI construction knowledge.

I agree. I have implemented several systems that function like this, and it works quite well.

One rather nice feature of this approach is that it reduces the assumptions about the disposition of the room and course resources. It does not assume that room resources are implemented in the same container as the lesson plans. This frees you to make decisions in the future that might be more difficult if you relied on more application-specific identifiers. For example, spinning off some of these resources into a separate system, to allow that functionality to be expanded on a separate release cycle, would mean that the lessons resource would need to interact with a search resource in the new system every time a lesson was requested. However, by using URIs you can avoid interacting with the remote resource at all when you only need the identification.

Peter
http://barelyenough.org
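The "pass the full URI, properly escaped" approach discussed above might look like this in Python; the `my.edu` URIs are the hypothetical ones from the thread:

```python
from urllib.parse import urlencode, parse_qs, urlsplit

# The client passes the full room/course URIs it discovered from the
# server, percent-encoded as ordinary query parameters.
params = {
    "room": "http://my.edu/rooms/P160",
    "course": "http://my.edu/courses/43S09",
}
lessons_uri = "http://my.edu/lessons/thisweek?" + urlencode(params)

# The lessons service recovers the identifiers unchanged and can treat
# them as opaque - no out-of-band URI-construction knowledge required.
decoded = parse_qs(urlsplit(lessons_uri).query)
```

Note that the identifiers survive the round trip intact, which is what lets the lessons service stay ignorant of where room and course resources actually live.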
On Dec 22, 2009, at 1:41 AM, Roger Gonzalez wrote:
> RFC2396 says that a
> URI with different query string does not represent a different target
> resource, it represents the same resource, with the query interpreted
> *by* the resource.

Actually, IMHO, no. Can you provide the quote for this statement?

Jan

--------------------------------------
Jan Algermissen

Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
You can use PUT if you pass the complete resource to the server; then your
method will remain idempotent. Otherwise, use POST.
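The PUT/POST distinction above can be illustrated with a toy in-memory store; this is a sketch only, and the URIs are made up:

```python
# Toy store showing why PUT (full replacement at a known URI) is
# idempotent while POST (server-side creation) is not.
store = {}
counter = 0

def put(uri, representation):
    store[uri] = representation           # repeating this changes nothing further

def post(collection, representation):
    global counter
    counter += 1                          # each call mints a new resource
    uri = "%s/%d" % (collection, counter)
    store[uri] = representation
    return uri

put("/docs/vacation.jpg", b"v1")
put("/docs/vacation.jpg", b"v1")          # idempotent: state unchanged
a = post("/docs", b"v1")
b = post("/docs", b"v1")                  # not idempotent: two resources created
```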
On Wed, Dec 23, 2009 at 4:17 PM, mike amundsen <mamund@...> wrote:
>
>
> I recommend checking out the Atom Publishing Protocol's solution for
> handling media and related data [1]. I think the use of the Slug
> header [2] is also a great way to deal with the friction between using
> POST and PUT to upload resources to the server.
>
> You should also check out the Link Header draft [3] as a way to return
> links related resources. This is a handy solution for adding
> hyperlinks to binary responses such as images.
>
> mca
> http://amundsen.com/blog/
>
> [1] http://tools.ietf.org/html/rfc5023#section-9.6
> [2] http://tools.ietf.org/html/rfc5023#section-9.7
> [3] http://tools.ietf.org/html/draft-nottingham-http-link-header-06
>
>
--
------------------------------------------
Felipe Gaúcho
10+ Java Programmer
CEJUG Senior Advisor
From a contract point of view, it would be more complex. I would argue that a service owner would want the flexibility and language a RESTful SLA contract would require, because it provides forward compatibility with requirements on the client (i.e. they must support all meaningful HTTP responses gracefully: 201, 301, 307, 401, and so on). From a client perspective it becomes more of a hassle because it requires more robust error handling.

Stepping back and looking at the sum of both parts, I think this is a good thing, and an enterprise would want this maturity between business groups. It leads to less cost in terms of maintenance and service interruption, but this is my speculation. Unfortunately I don't have concrete evidence that this would be the case.

-Noah

On Tue, Dec 22, 2009 at 10:46 PM, Jan Algermissen <algermissen1971@...> wrote:
> Will,
>
> excellent analysis.
>
> On Dec 22, 2009, at 9:13 PM, Will Hartung wrote:
>
>> This thread just exploded and it's taken until now to catch up.
>>
>> Jan, I don't see any conflict with having an SLA backing up a REST
>> interface.
>
> Me neither. But it needs to be clear what the SLA'ed contract really is.
> Take the AtomPub example: RFC 5023 *is* saying that a GET on a collection
> will return a feed. Is that normative? Or just a hint? If it is just a hint,
> why is it in the spec at all, and what is the value of it from the client
> developer's POV?
>
> If you are the service owner, would you put into the SLA a penalty payment
> of some serious money if your service stops providing an Atom feed for a GET
> to a collection? If not, the whole information is meaningless from a
> contract POV.
>
>> I think that you can make a brittle REST architecture that hits all of
>> the REST bullet points, but inevitably fails to evolve properly.
>>
>> Take for example here, the "apiv2" rel link.
>>
>> The fact is that the service authors CHOSE to add an "apiv2" link. They
>> did not HAVE to. They COULD have simply changed the media type and
>> 406'd the old clients.
>
> My issue: in a RESTful system, the service authors would *never* have to
> make any promise, right?
>
>> Obviously, "suddenly", all of the old clients fail miserably, and are
>> cut off from the service until they upgrade. No backward compatibility
>> here.
>
> Yes. And if that happens, a legal department demands a basis for sorting
> out who violated which obligation. They have a hard time accepting to build
> legal contracts on top of "REST style flexibility".
>
> OTOH, as I mentioned before, if the potential failure of the clients would
> be officially accepted, because the occasional SLA violation costs less than
> running a tightly coupled system, then it might make sense to CxOs.
>
> With this approach, RFC 5023 should normatively state that clients can
> expect Atom feeds to be returned for GETs on collections, and the service
> owners would just accept that there is a price to pay should the service
> return a 406 instead.
>
>> As for "evolutionary" software, it's pretty clear that it doesn't
>> evolve. Rather, you have backward compatibility that gives an illusion
>> of evolution. The existing clients aren't changing; the service is
>> simply being friendly by keeping them in mind and not locking them
>> out.
>>
>> I don't see any way that REST differs from SOAP, or any other system,
>> in this regard. As you've observed, compliance and compatibility are
>> hard-coded into the clients and server. If the protocol changes, the
>> clients and servers need to be changed to remain compatible.
>
> My point is that REST differs from SOAP in that this coupling is not being
> made explicit. In SOAP it is explicit, because there is a WSDL that defines
> an interface that couples tightly. It is just known that you cannot remove a
> method from an OO-style API without breaking your clients. For REST we
> usually argue that services can freely evolve without breaking clients.
> Which is wrong.
>
>> Versioning and backward compatibility are the key to a robust, evolving
>> infrastructure. I think REST is better for such a system than
>> something like SOAP, simply because I think it is easier for a more
>> advanced client to leverage the latest services and APIs, as well as
>> for a server to better maintain compatibility with older clients.
>
> Yes, definitely.
>
>> Both of these are done through extensible types and conneg. As you
>> get more and more servers and clients on different upgrade cycles,
>> this capability becomes more important. It's easy to see how you might
>> get consumers using services that you, as the provider, particularly
>> in an "open" enterprise, didn't even really "know" were being
>> serviced.
>
> Yes.
>
>> In the end, through things like typed rels and online documentation,
>> ideally when something goes wrong, payload inspection will direct the
>> people maintaining the consumers towards what they need to change to
>> become compliant again and able to use the new service.
>
> Yes.
>
> I really only tried to say that the clients can in fact break, and that it
> should be understood where and how the contract is established that causes
> them to fail. IMHO, current specifications that are not only targeted at
> pure human-driven consumption (e.g. AtomPub or OpenSearch) are not doing a
> good job in this regard.
>
> (OpenSearch, for example, states nowhere that Atom or RSS are the formats a
> client should be able to handle. Yet this seems to be some sort of common
> sense. The OSD FAQ page says something like "OpenSearch is a collection of
> simple formats for the sharing of search results" [1]. Sure, yes, that is
> all I need to know for building useful stuff. But would you invest a couple
> of million dollars into building clients for a service description such as
> this one? Tomorrow the service could stop sending both Atom and RSS and
> just use something new, and would not be liable for it in any way.)
>
> Jan
>
> [1] http://www.opensearch.org/Documentation/Frequently_asked_questions
>
>> Regards,
>>
>> Will Hartung
>> (willh@...)
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------
On Mon, Dec 21, 2009 at 3:11 PM, Roger Gonzalez <roger.gonzalez@...> wrote:
> Any given resource may have multiple representations that have exactly
> the same media type; for example, an image resource may have an
> image/png representing the full content as well as an image/png
> representing a thumbnail. Content negotiation based only on media type
> isn't sufficient.

What other mechanisms are there available for content negotiation? The
standard ones seem to all be on media type.

Regards,

Will Hartung
(willh@...)
On Dec 21, 2009, at 3:11 PM, Roger Gonzalez wrote:
> Any given resource may have multiple representations that have exactly
> the same media type; for example, an image resource may have an
> image/png representing the full content as well as an image/png
> representing a thumbnail. Content negotiation based only on media type
> isn't sufficient.

Exactly. No silver bullets here. Your example highlights why treating a
representation as a different resource is sometimes necessary.

Subbu
On Wed, Dec 23, 2009 at 7:45 AM, Peter Williams <pezra@...> wrote:
> One rather nice feature of this approach is that it reduces the
> assumptions about the disposition of the room and course resources.
> It does not assume that room resources are implemented in the same
> container as the lesson plans. This frees you to make decisions in
> the future that might be more difficult if you relied on more
> application-specific identifiers. For example, spinning off some of
> these resources into a separate system to allow that functionality to
> be expanded on a separate release cycle would mean that the lessons
> resource would need to interact with a search resource in the new
> system every time a lesson was requested. However, by using URIs you
> can avoid interacting with the remote resource at all when you only
> need the identification.

Another interesting aspect of URIs is that, even though it's suggested
that they do not change, they CAN change. When a client that happens to
record URIs references one, and the host server sends it a 301, the
client is then free to update its stored URI in place. There's no
mechanism for that when you just send a "43S09", because if you build up
http://my.edu/courses/43S09 and you get a 301, you don't "know" why it
moved. It could have moved to http://socal.my.edu/courses/43S09, or
http://my.edu/labs/43S09, or http://my.edu/course/54S08. There are all
sorts of reasons a URI can move or change, but if you record URIs there
is a facility to communicate that change lazily to clients. If you move
from http://my.edu/courses/43S09 to http://socal.my.edu/courses/43S09,
then after two months of no traffic at http://my.edu/courses you can
probably shut down that endpoint safely.

One more reason why I think that clients must do a lot of the heavy
lifting in a REST system in terms of protocol support.

Regards,

Will Hartung
(willh@...)
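The lazy URI-update behaviour described above might be sketched like this; the fetch function is faked so the example is self-contained, and the `my.edu` URIs are the hypothetical ones from the thread:

```python
# Sketch of a client that records URIs and updates them in place when
# the server answers 301 Moved Permanently. The transport is injected,
# so no real network access is needed.
def resolve(bookmarks, key, fetch):
    status, headers, body = fetch(bookmarks[key])
    if status == 301:
        bookmarks[key] = headers["Location"]   # update the stored URI lazily
        status, headers, body = fetch(bookmarks[key])
    return body

bookmarks = {"course": "http://my.edu/courses/43S09"}

def fake_fetch(uri):
    # Simulates a server that has permanently moved the course resource.
    if uri == "http://my.edu/courses/43S09":
        return 301, {"Location": "http://socal.my.edu/courses/43S09"}, b""
    return 200, {}, b"course representation"

body = resolve(bookmarks, "course", fake_fetch)
```

After one request the client's bookmark points at the new location, so subsequent traffic never hits the old endpoint.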
2009/12/23 Subbu Allamaraju <subbu@...>
> On Dec 21, 2009, at 3:11 PM, Roger Gonzalez wrote:
>> Any given resource may have multiple representations that have exactly
>> the same media type; for example, an image resource may have an
>> image/png representing the full content as well as an image/png
>> representing a thumbnail. Content negotiation based only on media type
>> isn't sufficient.
>
> Exactly. No silver bullets here. Your example highlights why treating a
> representation as a different resource is sometimes necessary.
>
> Subbu

Now I got completely confused (probably because of my not-so-good
English?). A representation is a representation *of* a resource, so how
can you treat a representation *as* a different resource? Do you mean to
have two resources giving different representations (with the same media
type) of what would otherwise be just one resource?

In the example above, are you saying that having two resources, both
returning "image/png",

/foo/images/6676
/foo/images/6676/thumbnail

is the equivalent of having

/foo/images/6676

return (if such things existed) "image-full/png" or "image-thumb/png"?

Finally, one more clarification: these two URLs

/foo/images/6676
/foo/images/6676?view=thumbnail

represent the same resource, or two different resources, or are both
RESTful and it depends on the implementation?
"each representation identified by its own URI" (2616#12.2) On Dec 23, 2009, at 11:02 AM, António Mota wrote: > 2009/12/23 Subbu Allamaraju <subbu@...> > > > > On Dec 21, 2009, at 3:11 PM, Roger Gonzalez wrote: > > > Any given resource may have multiple representations that have exactly > > the same media type; for example, an image resource may have an > > image/png representing the full content as well as an image/png > > representing a thumbnail. Content negotiation based only on media type > > isn't sufficient. > > Exactly. No silver bullets here. Your example highlights why treating a representation as a different resource is sometimes necessary. > > Subbu > > > Now I got completely confused (probably because my not-so-good english?). A representation is a representation *of* a resource, so how can you treat a representation *as* a different resource? You mean to have 2 resources giving different representations (with the same mime-type) of what would otherwise be just one resource? > > In the example above, you're saying that having 2 resources, both returning "image/png" > > /foo/images/6676 > /foo/images/6676/thumbnail > > is the equivalent of having > > /foo/images/6676 > > returning (if such things existed) "image-full/png" or "image-thumb/png" ? > > Finally, one more clarification, these two URL > > > /foo/images/6676 > > /foo/images/6676?view=thumbnail > > represent the same resource, two different resources, or both are Restfull and depends of the implementation? >
Tim Williams wrote:
>> HTTP != REST. The generic interface allows your resource to be
>> DELETEd by a variety of protocols, including HTTP.
>
> Ok, this is back to strange for me. I said in an earlier message I'm
> specifically talking about an "HTTP-based implementation of the REST
> style" - that disclaimer is what allows me to define *my* uniform
> interface in my examples as the HTTP methods. I have personally never
> felt the need to map my uniform methods to another communications
> protocol - is anyone really doing that? I gathered that was true for
> you as well, since your examples continue to be APP, which is itself
> HTTP-based.

Yeah, I'm giving an HTTP example. I almost posted, but erased, a
file-upload system which takes the generic interfaces of FTP and HTTP
and combines them into a uniform RESTish interface (minus the hypertext
constraint, of course) by assigning 'create' to FTP PUT and 'replace' to
HTTP PUT. Just because you're using HTTP's generic interface doesn't
mean you're building a uniform REST interface.

>> ... REST requires that hypertext be used to make these
>> instructions to the client explicit, so Atom Protocol has a REST
>> mismatch.
>
> I've never seen such a requirement, and it's not clear how that
> resolves with Roy's comment below?

The requirement is called the hypertext constraint.

> "HTTP operations are generic: they are allowed or not, per resource,
> but they are always valid. Hypertext doesn't usually tell you all the
> operations allowed on any given resource; it tells you which operation
> to use for each potential transition."

Putting on my Roy Decoder Ring, and using my Atom Protocol example:
you dereference a resource, which allows GET, PUT, POST, PATCH and
DELETE. But, due to your role, the representation you receive may only
tell you about the GET and POST operations you may use for each
potential transition. DELETE is always there in the HTTP generic
interface, but it only becomes part of a uniform REST interface if the
client is told of the potential DELETE transition using hypertext.

-Eric
António Mota wrote:
>> I have personally never
>> felt the need to map my uniform methods to another communications
>> protocol - is anyone really doing that?
>
> Off-topic, and irrelevant to this discussion thread, but yes, someone
> is really doing that...

Apparently it's highly relevant. Decoding Roy again:

"
In general, any protocol element that uses a URI for identification
must allow any URI scheme to be used for the sake of that
identification.
"

For example, I ought to be able to GET a representation, save it to
disk, and retrieve it using the "file://" URI scheme. Or, take my
shared hosting account. I use FTP to upload my work, even though it's
entirely meant as an HTTP system. So any resource on my hosting account
allows both HTTP and FTP "to be used for the sake of identification".

-Eric
Jan Algermissen wrote:
>
> On Dec 22, 2009, at 1:41 AM, Roger Gonzalez wrote:
>
>> RFC2396 says that a
>> URI with different query string does not represent a different target
>> resource, it represents the same resource, with the query interpreted
>> *by* the resource.
>
>
> Actually, IMHO, no. Can you provide the quote for this statement.
>
> Jan
>
Well, I've been quoting RFC2396, which says this in section 3.4:

  "The query component is a string of information to be interpreted by
  the resource."

But I hadn't realized it was updated by RFC3986, which says:

  "The query component contains non-hierarchical data that, along with
  data in the path component (Section 3.3), serves to identify a
  resource within the scope of the URI's scheme and naming authority
  (if any)."

I will note that RFC2616 3.2.2 says this about the http URL:

  http_URL = "http:" "//" host [ ":" port ] [ abs_path [ "?" query ]]

  "The semantics are that the identified resource is located at the
  server listening for TCP connections on that port of that host, and
  the Request-URI for the resource is abs_path (section 5.1.2)."
I believe you're free to resolve URI-to-resource any way that you like,
but I personally find it makes the most sense to follow the
long-established convention of treating the query as a message to the
base resource to do something special. (This is of course opaque to the
client, where a URI with a different query is of course different, and
the client can only use URI equivalence rules.)
-rg
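From the client side, as noted above, URI equivalence is purely syntactic; a quick sketch with hypothetical URIs:

```python
from urllib.parse import urlsplit

# A client comparing URIs can only use generic equivalence rules; it has
# no way to know whether the server maps both of these to one resource.
full = urlsplit("http://example.org/vacation.jpg")
exif = urlsplit("http://example.org/vacation.jpg?content=exif")
```

The two split results share a path but differ in the query component, so to the client they are simply different URIs, whatever the server does with them internally.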
Will Hartung wrote:
> What other mechanisms are there available for content negotiation? The
> standard ones seem to all be on media type.

The most common implementation of content negotiation is for
compression: the Accept-Encoding request header will typically contain
tokens, e.g. "Accept-Encoding: GZIP, DEFLATE".

-Eric
On Wed, Dec 23, 2009 at 11:52 AM, Eric J. Bowman <eric@...> wrote:
> Will Hartung wrote:
>> What other mechanisms are there available for content negotiation? The
>> standard ones seem to all be on media type.
>
> The most common implementation of content negotiation is for
> compression, the Accept-Encoding request header will typically contain
> tokens, i.e. "Accept-Encoding: GZIP, DEFLATE".
>
> -Eric

With HTTP, the "Accept-Charset" and "Accept-Language" headers can also
be used in a similar manner, for content negotiation on the acceptable
character set and language.

Craig
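A server negotiating on a header like Accept-Language has to honour the q-values; a minimal sketch of the parsing step, not tied to any particular framework:

```python
# Order the tags in an Accept-* header by their q-values (default 1.0),
# highest preference first - the core of server-driven negotiation.
def parse_accept(header):
    prefs = []
    for item in header.split(","):
        parts = item.strip().split(";")
        tag, q = parts[0].strip(), 1.0
        for p in parts[1:]:
            k, _, v = p.strip().partition("=")
            if k == "q":
                q = float(v)
        prefs.append((tag, q))
    return sorted(prefs, key=lambda t: -t[1])

prefs = parse_accept("en;q=0.8, pt-PT, de;q=0.5")
```

With this input the server would prefer pt-PT (implicit q=1.0) over en and de, illustrating that the mechanism selects among variants rather than identifying a distinct resource.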
Jan Algermissen wrote:
> For REST we usually argue that services can freely
> evolve without breaking clients. Which is wrong.

In a REST system, clients and servers may evolve independently. This
doesn't mean that the evolution of the Atom Protocol system I described
in another thread to include PATCH breaks clients. Existing AtomPub
clients won't grok the new feature, but will otherwise understand the
system. This is graceful degradation -- clients may evolve to understand
the PATCH feature, but the server developer doesn't have to wait for
that to happen before the feature is implemented.

-Eric
On Wed, Dec 23, 2009 at 11:57 AM, Craig McClanahan <craigmcc@...> wrote:
> On Wed, Dec 23, 2009 at 11:52 AM, Eric J. Bowman <eric@...> wrote:
>> The most common implementation of content negotiation is for
>> compression, the Accept-Encoding request header will typically contain
>> tokens, i.e. "Accept-Encoding: GZIP, DEFLATE".
>
> With HTTP, the "Accept-Charset" and "Accept-Language" headers can also
> be used in a similar manner, for content negotiation on the acceptable
> character set and language.

But none of those seem to be particularly relevant in the case
mentioned, about a full image vs. a thumbnail image - unless, I guess,
we can add "Accept-Encoding: thumbnail". But also, at least with
Accept-Encoding, there's no mandate that you MUST gzip the content, just
that it CAN be accepted, right?

This gets a little muddy, particularly because of the role gzip plays -
that is, mostly as a transport optimization rather than necessarily an
actual resource representation. Granted, it IS a different
representation, but historically it's a wrapper that is often stripped
(decoded) to get to the "real" representation.

Regards,

Will Hartung
(willh@...)
RFC 2396 is obsoleted by RFC 3986, which is what you should be referring to... -Eric
Roger Gonzalez wrote:
> It strikes me as
> inefficient and unnecessarily revisionist to force every "dumb" media
> type to be wrapped and manipulated through a hypertext proxy
> resource.

This isn't revisionist; it's part of the REST architectural style. Of
course it's less efficient - the decision is whether or not the benefits
of the style outweigh the consequences:

"
The trade-off, though, is that a uniform interface degrades efficiency,
since information is transferred in a standardized form rather than one
which is specific to an application's needs.
"

If this degradation in efficiency outweighs the benefits of the uniform
REST interface for your system, then don't apply this constraint.

-Eric
2009/12/23 Eric J. Bowman <eric@...>:
> António Mota wrote:
>>> I have personally never
>>> felt the need to map my uniform methods to another communications
>>> protocol - is anyone really doing that?
>>
>> Off-topic, and irrelevant to this discussion thread, but yes, someone
>> is really doing that...
>
> Apparently it's highly relevant. Decoding Roy again:
>
> "
> In general, any protocol element that uses a URI for identification
> must allow any URI scheme to be used for the sake of that
> identification.
> "

My intention was not to say it was irrelevant for REST - I have tried to
make that point (using the same interface applied to several protocols
other than HTTP) several times on this list. I was saying it was
irrelevant to (quoting Tim) "specifically talking about an 'HTTP-based
implementation of the REST style' - that disclaimer is what allows me to
define *my* uniform interface in my examples as the HTTP methods." So I
thought he wanted to discuss this in the realm of REST/HTTP only. If it
is in the more embracing realm of REST, then I agree it is relevant to
see how the uniform interface may constrain other protocols.
On Dec 23, 2009, at 6:55 PM, Noah Campbell wrote:
> From a contract point of view, it would be more complex.
>
> I would argue that a service owner would want the flexibility and
> language a RESTful SLA contract would require because it provides
> forward compatibility with requirements on the client (i.e. they
> must support all meaningful HTTP responses gracefully... i.e.
> 201, 301, 307, 401).

Yes, I agree that with REST, SLAs should explicitly put the burden on
the client - otherwise we'd just introduce the coupling that REST aims
to avoid. This would effectively mean that the client should expect
(though rarely, if ever) that its assumptions (which might be based on a
hint rather than a MUST) might fail, and that this is not a contract
violation by the server but the price to pay for getting all the other
loose-coupling goodness.

> From a client perspective it becomes more of a hassle because it
> requires more robust error handling.

Right. And also the acceptance of errors. Errors != broken contract.

> Stepping back and looking at the sum of both parts, I think this is
> a good thing and an enterprise would want this maturity between
> business groups. It leads to less cost in terms of maintenance and
> service interruption, but this is my speculation. Unfortunately I
> don't have concrete evidence this would be the case.

Agreed. And this is a position that I think can be articulated in a
meeting with enterprise people, because it makes explicit what is traded
for what (analogous to the overbooking example). It also provides a
framework for people to develop more server-constraining contracts
(e.g. a collection MUST return a feed) and understand what the cost of
that is.

Glad this led to something before the Christmas break :-)

Jan

> -Noah
>
> On Tue, Dec 22, 2009 at 10:46 PM, Jan Algermissen <algermissen1971@...> wrote:
> Will,
>
> excellent analysis.
> > > On Dec 22, 2009, at 9:13 PM, Will Hartung wrote: > > This thread just exploded and it's taken until now to catch up. > > Jan, I don't see any conflict with having a SLA backing up a REST > interface. > > Me neither. But it needs to be clear what the SLA'ed contract really > is. Take the AtomPub example: RFC 5023 *is* saying that a GET on a > collection will return a feed. Is that normative? Or just a hint? If > it is just a hint, why is it in the spec at all and what is the > value of it from the client developer's POV? > > If you are the service owner, would you put into the SLA a penalty > payment of some serious money if your service stops providing an > Atom feed for a GET to a collection? If not, the whole information > is meaningless from a contract POV. > > > > > I think that you can make a brittle REST architecture that hits all of > the REST bullet points, but inevitably fails to evolve properly. > > Take for example here, the "apiv2" rel link. > > The fact that the service authors CHOSE to add an "apiv2" link. They > did not HAVE to. They COULD have simply changed the media type, and > 406'd the old clients. > > My issue: In a RESTful system, the service authors woule *never* > have to make any promise, right? > > > > Obviously, "suddenly", all of the old client fail miserably, and are > cut off from the service until they upgrade. No backward compatibility > here. > > Yes, And if that happens, a legal department demands a basis for > sorting out who violoated which obligation. They have a hard time > accepting to build legal contracts on top of "REST style flexibility". > > OTH, as I mentioned before, if the potential failure of the clients > would be officially accepted because the occasional SLA violation > costs less than running a tightly coupled system then it might make > sense to CxOs. 
> With this approach, RFC 5023 should normatively state that clients
> can expect Atom feeds to be returned for GETs on collections, and the
> service owners would just accept that there is a price to pay should
> the service return a 406 instead.
>
> > As for "evolutionary" software, it's pretty clear that it doesn't
> > evolve. Rather, you have backward compatibility that gives an
> > illusion of evolution. The existing clients aren't changing; the
> > service is simply being friendly by keeping them in mind and not
> > locking them out.
> >
> > I don't see any way that REST differs from SOAP, or any other
> > system in this regard. As you've observed, compliance and
> > compatibility are hard coded into the clients and server. If the
> > protocol changes, the clients and servers need to be changed to
> > remain compatible.
>
> My point is that REST differs from SOAP because this coupling is not
> being made explicit. In SOAP it is explicit, because there is a WSDL
> that defines an interface that couples tightly. It is just known that
> you cannot remove a method from an OO-style API without breaking
> your clients. For REST we usually argue that services can freely
> evolve without breaking clients. Which is wrong.
>
> > Versioning and backward compatibility is the key to a robust,
> > evolving infrastructure. I think REST is better for such a system
> > than something like SOAP, simply because I think it is easier for a
> > more advanced client to leverage the latest services and APIs, as
> > well as for a server to better maintain compatibility with older
> > clients.
>
> Yes, definitely.
>
> > Both of these are done through extensible types and conneg. As you
> > get more and more servers and clients on different upgrade cycles,
> > this capability becomes more important. It's easy to see how you
> > might get consumers using services that you, as the provider,
> > particularly in an "open" enterprise, didn't even really "know"
> > were being serviced.
>
> Yes.
> > In the end, through things like typed rels and online
> > documentation, ideally when something goes wrong, payload
> > inspection will direct the people maintaining the consumers towards
> > what they need to change to become compliant again and able to use
> > the new service.
>
> Yes.
>
> I really only tried to say that the clients can in fact break, and
> that it should be understood where and how the contract is
> established that causes them to fail. IMHO, current specifications
> that are not only targeted at pure human-driven consumption (e.g.
> AtomPub or OpenSearch) are not doing a good job in this regard.
>
> (OpenSearch, for example, states nowhere that Atom or RSS are the
> formats a client should be able to handle. Yet, this seems to be some
> sort of common sense. The OSD FAQ page says something like
> "OpenSearch is a collection of simple formats for the sharing of
> search results" [1]. Sure, yes, that is all I need to know for
> building useful stuff. But would you invest a couple of million
> dollars into building clients for a service description such as this
> one? Tomorrow the service could stop sending both Atom and RSS, just
> use something new, and would not be liable for it in any way.)
>
> Jan
>
> [1] http://www.opensearch.org/Documentation/Frequently_asked_questions
>
> > Regards,
> >
> > Will Hartung
> > (willh@...)
>
> --------------------------------------
> Jan Algermissen
>
> Mail: algermissen@...
> Blog: http://algermissen.blogspot.com/
> Home: http://www.jalgermissen.com
> --------------------------------------

--------------------------------------
Jan Algermissen
Mail: algermissen@...
Blog: http://algermissen.blogspot.com/
Home: http://www.jalgermissen.com
--------------------------------------
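The "burden on the client" position Jan and Noah converge on above can be sketched in code: the client treats every meaningful HTTP status as an expected outcome rather than a contract violation. This is a minimal illustrative sketch; the function name and the returned action strings are my own, not from any real library.

```python
# Sketch of the tolerant client the SLA discussion calls for: every
# meaningful HTTP status is an anticipated outcome, so a server change
# means degraded service, not a broken contract. All names here are
# illustrative assumptions.

def handle_response(status, location=None):
    """Map an HTTP status code to a client action."""
    if status in (200, 201, 204):
        return "process-representation"
    if status in (301, 307):
        # Follow the redirect the server advertised.
        return f"retry-at:{location}"
    if status == 401:
        return "authenticate-and-retry"
    if status == 406:
        # Our preferred media type is gone; renegotiate instead of failing.
        return "renegotiate-media-type"
    if status == 410:
        return "forget-resource"
    # Anything unanticipated is still not a contract violation.
    return "report-degraded-service"
```

The point of the catch-all branch is exactly the thread's point: even a surprising response is handled gracefully, at the price of more robust error handling on the client side.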
2009/12/23 Subbu Allamaraju <subbu@...>:
> "each representation identified by its own URI" (2616#12.2)

That means that other resources are returning their own representation
in place of, or on behalf of, the original resource, which was not
capable of serving the original request. It is still the responsibility
of the client to choose and make that second request to the new
resource, and the original resource, after sending the 303 with other
alternative resources (and corresponding representations), is no longer
accountable for the representations sent to the client...

So that is more a case of redirection than of "representation *as* a
different resource", because it is actually a representation *of* a
different resource.

Nevertheless, since this happens at content negotiation time, it really
implies that media-type-only content negotiation may not be sufficient.
But that is more clearly stated by saying that "server-driven
negotiation" may not be sufficient, and in those cases agent-driven
negotiation, or a mix of the two, would be appropriate, because it
really doesn't depend on the media type "per se" but on the server
being capable of producing a representation in that media type for a
particular resource.

I hope I could explain myself correctly...
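The agent-driven negotiation António describes can be sketched as follows: the server offers a list of alternative representations (e.g. alongside a 300 or 303), and the client, not the server, picks the variant it can process. The data shapes and names below are assumptions for illustration only.

```python
# Sketch of agent-driven negotiation: the client chooses among the
# variants the server advertised, rather than the server guessing from
# the Accept header alone. Shapes and names are illustrative.

def choose_variant(alternatives, preferences):
    """Pick the highest-preference variant the server offers.

    alternatives: list of (media_type, uri) pairs from the response.
    preferences:  client's media types, most preferred first.
    """
    offered = dict(alternatives)
    for media_type in preferences:
        if media_type in offered:
            return media_type, offered[media_type]
    return None  # nothing acceptable: fall back or treat like a 406

ALTS = [("application/xml", "http://example.org/r.xml"),
        ("application/json", "http://example.org/r.json")]
```

The second request then goes to the chosen URI, which is exactly the "each representation identified by its own URI" reading from 2616#12.2.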
Stefan Tilkov wrote:

> I understand your viewpoint to be that anything not publicly
> standardized (i.e. a custom link relation, or media type, or verb) is
> by definition not RESTful. I don't think so, but of course I may be
> wrong - in my view, you can standardize e.g. within your company or
> some other domain. Probably we need Roy to provide an authoritative
> answer.

I would like to think that there are enough experts on this list,
yourself and myself included, to hash this sort of thing out without
requiring Roy to referee. Besides, I do believe Roy explained this
point until he was blue in the face, in the comments here:

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

I recommend reading the entire comment thread, to put my excerpts in
their proper context. In the context of using standardized media types,
Roy has plenty to say:

"
To some extent, people get REST wrong because I failed to include
enough detail on media type design within my dissertation...

You don't get to decide what POST means -- that is decided by the
resource. Its purpose is supposed to be described in the same context
in which you found the URI that you are posting to. Presumably, that
context (a hypertext representation in some media type understood by
your client) tells you or your agent what to expect from the POST using
some combination of standard elements/relations and human-readable
text. The HTTP response will tell you what happened as a result...

... Perhaps [this comment] will help clarify the role of standards in
RESTful systems: Of course the client has prior knowledge. Every
protocol, every media type definition, every URI scheme, and every link
relationship type constitutes prior knowledge that the client must know
(or learn) in order to make use of that knowledge. REST doesn't
eliminate the need for a clue. What REST does is concentrate that need
for prior knowledge into readily standardizable forms.
That is the essential distinction between data-oriented and control-oriented integration. It has value because it is far easier to standardize representation and relation types than it is to standardize objects and object-specific interfaces. In other words, there are fewer things to learn and they can be recombined in unanticipated ways while remaining understandable to the client. ... In terms of testing a specification, the hardest part is identifying when a RESTful protocol is actually dependent on out-of-band information... What I look for are requirements on processing behavior that are defined outside of the media type specification. One of the easiest ways to see that is when a protocol calls for the use of a generic media type (like application/xml or application/json) and then requires that it be processed in a way that is special to the protocol/API. ... The media type identifies a specification that defines how a representation is to be processed. That is out-of-band information (all communication is dependent on some prior knowledge). What you are missing is that each representation contains the specific instructions for interfacing with a given service, provided in-band. The media type is a generic processing model that every agent can learn if there aren't too many of them (hence the need for standards). " The only thing Roy describes as acceptably domain-specific, are vocabularies contained within standard media types: "Exposing that vocabulary in the representations makes it easy to learn and be adopted by others. Some of it will be standardized, some of it will be domain-specific, but ultimately the agents will have to be adaptable to new vocabulary. " I hope this is all the backing I need for my stance: this list is full of examples given in terms of URIs, HTTP methods and response codes, and hypothetical media types with a target audience of a single system. 
We need to stop doing this, or at least point out that to do so is
fundamentally at odds with the REST style, and stress using standard
(or at least standardizable) media types, because method use must be
encompassed within the definition of the media type.

Otherwise, the REST community is just as responsible as the API
designers for the sorry state of affairs where 99% of REST APIs don't
conform to the style. I believe this is a solid foundation for my
hypothesis that we're teaching REST wrong. "Rebooting REST" thread to
follow.

-Eric
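Roy's point that prior knowledge should be concentrated "into readily standardizable forms" can be illustrated with a client that knows only a standard media type's link vocabulary and nothing about any server's URI structure. The sketch below uses a toy Atom feed; the function name is my own.

```python
# Illustration of "prior knowledge in standardizable forms": this client
# knows only the Atom link model (the media type spec), never the
# server's URI structure. The feed below is a toy example.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <link rel="edit" href="http://example.org/entries/1"/>
    <link rel="alternate" href="http://example.org/posts/first"/>
  </entry>
</feed>"""

def links_by_relation(atom_xml, relation):
    """Collect link targets for a given registered link relation."""
    root = ET.fromstring(atom_xml)
    return [link.get("href")
            for link in root.iter(ATOM + "link")
            if link.get("rel") == relation]
```

Because the client keys off the standardized `rel` vocabulary, the server is free to restructure its URIs without breaking anyone, which is the loose coupling the whole thread is after.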
Dare understands: " This notion of building software that scales to Web-wide usage is critical to understanding Roy's points above. The first point above is that a RESTful API should primarily be concerned about data payloads and not defining how URI end points should handle various HTTP methods. For one, sticking to defining data payloads which are then made standard MIME types gives maximum reusability of the technology. The specifications for RSS 2.0 (application/xml+rss) and the Atom syndication format (application/atom+xml) primarily focus on defining the data format and how applications should process feeds independent of how they were retrieved. In addition, both formats are aimed at being standard formats that can be utilized by any Web site as opposed to being tied to a particular vendor or Web site which has aided their adoption. Unfortunately, few have learned from these lessons and we have people building RESTful APIs with proprietary data formats that aren't meant to be shared. My current favorite example of this is social graph/contacts APIs which seem to be getting reinvented every six months. Google has the Contacts Data API, Yahoo! has their Address Book API, Microsoft has the Windows Live Contacts API, Facebook has their friends REST APIs and so on. Each of these APIs claims to be RESTful in its own way yet they are helping to fragment the Web instead of helping to grow it. There have been some moves to address this with the OpenSocial influenced Portable Contacts API but it too shies away from standard MIME types and instead creates dependencies on URL structures to dictate how the data payloads should be retrieved/processed. " http://www.25hoursaday.com/weblog/2008/10/24/RESTAPIDesignInventMediaTypesNotProtocolsAndUnderstandTheImportanceOfHyperlinks.aspx The REST style is derived from what made the Web successful in the first place. That's why the emphasis is on standardized or standardizable media types -- this makes the web grow, not fragment. 
+1 to Dare. -Eric
On Dec 23, 2009, at 11:55 PM, Eric J. Bowman wrote:

> I hope this is all the backing I need for my stance: this list is full
> of examples given in terms of URIs, HTTP methods and response codes,
> and hypothetical media types with a target audience of a single
> system. We need to stop doing this, or at least point out that to do
> so is fundamentally at odds with the REST style, and stress using
> standard (or at least standardizable) media types, because method use
> must be encompassed within the definition of the media type.
>
> Otherwise, the REST community is just as responsible as the API
> designers for the sorry state of affairs where 99% of REST APIs don't
> conform to the style. I believe this is a solid foundation for my
> hypothesis that we're teaching REST wrong. "Rebooting REST" thread to
> follow.

I fully agree with you regarding the role of media types, and the lack
of importance they've been assigned in the past (with myself being
guilty of this, too). I also think the single-system approach is the
wrong one. I only disagree that people who, with the best of
intentions, create something new (within their bounded context) for
which there is no standard can't call their systems RESTful. But your
"(or at least standardizable)" quote above suggests that we actually
don't disagree that much after all.

Stefan

> -Eric
Eric J. Bowman wrote:
> Roger Gonzalez wrote:
>
>> It strikes me as
>> inefficient and unnecessarily revisionist to force every "dumb" media
>> type to be wrapped and manipulated through a hypertext proxy
>> resource.
>>
>>
>
> This isn't revisionist; it's part of the REST architectural style. Of
> course it's less efficient, the decision is whether or not the benefits
> of the style outweigh the consequences:
>
> "
> The trade-off, though, is that a uniform interface degrades efficiency,
> since information is transferred in a standardized form rather than one
> which is specific to an application's needs.
> "
>
> If this degradation in efficiency outweighs the benefits of the uniform
> REST interface for your system, then don't apply this constraint.
>
> -Eric
>
Your selective quotation of my message misses my point. I'm not talking
about the uniform interface between components (as Roy is, which I agree
is valuable), I'm talking about your assertion that the uniform
interface does not apply to a resource whose client-side representation
is not "hypertexty", and that DELETE on an image isn't legal. I believe
that it does apply, and that it is legal.
I feel you're overreading "hypertext is the engine" as "every
representation must be hypertext". I strongly believe that it's
perfectly legitimate to have initial hypertext point you to absolutely
any resource, even those with a dumb media type, and you should be
allowed to interact with that resource using the uniform interface.
It's a leaf node in the hypertext graph. There are layers of contract
involved; the per-resource contract (what control messages will this
particular resource accept; i.e. what content types) but then there are
higher-level contracts (what operations can be performed on a given
resource).
In an OO sense, given a Collection<User>, it wouldn't make sense for
User to have to implement DeletableFromCollections in order to be
deleted. That actually interferes with abstraction; the User shouldn't
even need to know whether it is ever in a collection, or sitting around
as a temp object, or whatever. Same goes for resources. The confusion
is that we think we're talking directly to the resource, but we really
aren't, we're talking through an agent.
PUT: hey agent, here's a resource representation, m
Roy sez: "REST connectors provide a generic interface for accessing and
manipulating the value set of a resource."
Therefore, there is nothing wrong with:
GET /bob/assets
application/assets+json
{
"assets": ["http://server/bob/vacation.jpg",
"http://server/bob/wife.gif"]
}
DELETE /bob/vacation.jpg
The resource "/bob/assets" acted as the hypertext engine, linking me to
a bunch of image resources, and I sent a control message to the http
connector requesting that one of them be deleted. The media type of the
resource I deleted was frankly irrelevant. The notion that I can't
delete an image/jpg off a server until someone creates a new "hypertext
image" format that advertises its own contract is what I object to.
(Besides, image/jpg is just a representation, the backing resource is
anonymous to the client, so I fail to see why the "manipulating the
value set of a resource" has to do with one representation rather than
the more important aspect: the uniform interface for the connector.)
-rg
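Roger's GET/DELETE exchange above can be sketched directly. Note that `application/assets+json` is his hypothetical media type, and the request tuples below stand in for a real HTTP connector; everything here is illustrative.

```python
# Sketch of Roger's exchange: the client learns the image URIs only
# from the application/assets+json representation (his hypothetical
# media type) and then hands generic DELETEs to the connector. The
# ("DELETE", uri) tuples stand in for real HTTP requests.
import json

ASSETS_REPRESENTATION = json.dumps({
    "assets": ["http://server/bob/vacation.jpg",
               "http://server/bob/wife.gif"]})

def delete_requests(representation, unwanted):
    """Build DELETE messages for linked assets the user selected."""
    assets = json.loads(representation)["assets"]
    # Only URIs actually advertised by the representation may be targeted.
    return [("DELETE", uri) for uri in assets if uri in unwanted]
```

On Roger's reading, this is legitimate because the uniform interface belongs to the connector: the media type of the targeted resource never enters into it.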
There is no "best" architecture, there is only the architecture that is
best for your system. All quotes are by Roy, taken from the comment
thread here:

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

The terms Modeling, Visualization, Analysis, Implementation and mapping
are taken from the "Software Architecture: Foundations, Theory, and
Practice" textbook. All jargon not covered in that book is defined in
Dr. Fielding's dissertation.

It is my belief that following a disciplined approach to REST
development, where the focus is on resource modeling, may result in a
system that falls short of REST. Yet, due to its nature as a
distributed hypermedia system, the result may be a beneficial and
proper architecture for that system.

"
The purpose of resource modeling is to figure out what resources you
have that are worth identifying, representing, and manipulating.
"

I would like to offer a formal definition for Resource Oriented
Architecture (ROA) as an umbrella style for a plethora of architectures
which do not apply the full set of REST constraints, and therefore
cannot be considered REST. (I considered Data Oriented Architecture,
but that idea was DOA... I'm open to suggestions. :-) The ROA styles
are derived from REST and may be deemed REST-derived, REST-inspired,
REST-oriented or RESTish, but not REST or RESTful.

"
That doesn't mean that I think everyone should design their own systems
according to the REST architectural style. REST is intended for
long-lived network-based applications that span multiple organizations.
If you don't see a need for the constraints, then don't use them.
That's fine with me as long as you don't call the result a REST API. I
have no problem with systems that are true to their own architectural
style.
"

The most important concept in REST is that systems meet the constraints
of the underlying architecture of the Web, allowing GET to be optimized
to the fullest extent possible.
Roy has identified the native Web architecture as the client-cache-stateless-server set of constraints. These constraints are inviolable to any ROA style, as would be the identification of resources constraint -- no query-driven resource matrices or RPC here (unless you're using Roy's very strict definition of RPC, vs. mine, which is any endpoint that's only intended as a POST handler). " Query is not a substitute for identification of resources. " ROA defines the following as optional constraints of the style: layered system, code-on-demand, manipulation of resources through representations, self-descriptive messages, and hypermedia as the engine of application state. This results in (I think) 36 allowable architectural styles, compared with 2 in REST (REST or REST+CoD). I've chosen the minimal set of constraints which allow GET to be optimized for scaling on the Web. " REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don't think they need to design past the current release. There are more than a few software methodologies that portray any long-term thinking as wrong-headed, ivory tower design (which it can be if it isn't motivated by real requirements). " If the system's real requirements don't lead to REST, so be it. However, any distributed hypermedia system ought to be able to be modeled as REST. Visualization and Analysis of a REST architectural Model doesn't have to lead to a full mapping of the Model in the Implementation. But, the absent mappings can at least be understood in terms of benefits and tradeoffs. If, over time, a benefit comes to outweigh its tradeoffs, the Implementation may be more fully mapped to the Model, rather than having to devise a new Model. 
So the goal of a disciplined approach to REST development is the creation of a REST architectural Model to guide system Implementation. There is no requirement that the resulting Implementation contain a full mapping to the Model. If it does, it's REST. If it does not, it is at least an ROA architecture derived from (or inspired by, if Roy prefers ;-) REST. The resulting system remains true to its architectural style, and has somewhat of a blueprint to guide further development in the right direction. At least the resulting system is capable of optimizing the hell out of GET. By rebooting REST, I mean that discussions on this list should be less implementation-oriented and more resource-modeling-oriented. Everyone's assignment over the Holidays is to devise a REST architectural Model for the good ol' shopping cart problem. I use bubble charts, but think I should give UML a try. If folks pitch in on this, the conversation achieves two results. First, a consensus approach to formally modeling REST resources (perhaps using UML, perhaps not). Second, a consensus architectural Model of a REST shopping cart. I tend to see the shopping-cart problem in terms of tabular data, so I'm likely going to want to Implement my shopping cart using XHTML. Henry Story tends to see the shopping-cart problem in terms of RDF tuples, so he'll likely choose another media type. Both approaches are valid, so a Model should be agnostic to different approaches taken at the Implementation level. I have my own Visualization and Analysis which leads me to Implement an XHTML shopping cart, whereas Henry has his own Visualization and Analysis which leads him to Implement an RDF shopping cart. " A distributed queue is an implementation choice. You can certainly implement some applications by having them interact with a queue-like resource in a RESTful manner. However, if your client relies on the resource being a queue, then it certainly isn't a RESTful API. Do you see the difference? 
Encoding knowledge within clients and servers of the other side's implementation mechanism is what we are trying to avoid. " Such a Model then becomes what people seem to want so badly from REST: a reference Implementation. Except what we actually give them is a reference Model. Anyone can post their take on an Implementation of that Model, and the results can be discussed in terms of how well the Implementation maps to the Model. Any Implementation on the Web may be linked to and discussed in the same terms. The URIs will likely all be very different, with less variation in media type selection, and even less variation in method selection. Instead of continuing down the same path of describing REST in terms of Implementation, which has obviously failed, the conversation is changed to one of how well an Implementation maps to a known Model in terms of benefits and tradeoffs. The goal is to teach how to Implement the REST architectural style guided by a Model. All efforts at REST are doomed to fail if resources are not Modeled properly before URIs are defined. All efforts at REST which _do_ begin with properly Modeling resources are doomed to succeed, so long as the Implementation stays true to the resulting ROA architectural Model. If anyone follows what I'm getting at. People misconstrue the declaration of "Not REST" to be a value judgment against the system. Personally, I only mean it as a value judgment on the selection of a buzzword where it does not apply. When I state that the Talis n2 API is "Not ROA" then, yeah, I'm passing a value judgment against the system, for its failure to allow GET optimization at all. The same goes for all systems that don't meet the client-cache- stateless-server and identification of resources constraints. They fail to leverage the native Web architecture for one, and fail to even approach REST by neglecting to properly identify resources. 
Deriving an approach to REST which degrades gracefully to result in an
ROA style, while emphasizing the use of standard media types, will
result in REST motivating the growth and interoperability of the Web.
Which is better than the current state of affairs, where failure to
understand REST is resulting in fragmentation of the Web.

Only if this effort, and subsequent efforts at changing the
conversation, fail will the meme "REST is hard to learn" be proven. All
we know for a fact is that the current conversation has not succeeded.

There is no "best" architecture, there is only the architecture that is
best for your system (and it better optimize the hell out of GET).

-Eric

(Merry Christmas and/or Happy Holidays, everyone!)
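As one hedged starting point for the Holiday assignment above: a Model can be written down as resources plus typed links, with a check that every resource is reachable from the single entry URI by following links alone, which is hypertext as the engine of application state expressed at the Model level. All resource names and link relations below are invented for illustration.

```python
# One possible starting point for the shopping-cart Model (all names
# and relations invented): resources plus typed links, with a check
# that everything is reachable from the entry point via links alone.

CART_MODEL = {
    "/": [("shop:catalog", "/products"), ("shop:cart", "/cart")],
    "/products": [("item", "/products/{id}")],
    "/products/{id}": [("shop:add-to-cart", "/cart")],
    "/cart": [("shop:checkout", "/orders")],
    "/orders": [],
}

def reachable(model, entry="/"):
    """Resources reachable from the entry URI by following links only."""
    seen, frontier = set(), [entry]
    while frontier:
        uri = frontier.pop()
        if uri in seen:
            continue
        seen.add(uri)
        frontier.extend(target for _, target in model.get(uri, []))
    return seen
```

A Model that passes this reachability check is agnostic to media type: one Implementation may render the links as XHTML forms, another as RDF, without changing the Model.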
I have no idea if your proposed use of DELETE is RESTful or not, as you
have not given me any notion of the specification of
application/assets+json. Does the media type definition encompass
DELETE? Even if it
does, you still aren't instructing the client how to DELETE an image
through a hypertext representation. You're relying on out-of-band
knowledge hard-coded to a client's DELETE facility, the media type
isn't defining any sort of selection mechanism or button to push to
drive application state.
It is trivial in XForms to create a standard listbox of image URIs,
allowing one or more to be selected, with a DELETE button for the user
to press when the selection of images to remove is complete. Each
image is removed with a separate HTTP DELETE and success/failure is
reported back to the user for each image selected for removal. It does
not matter that image/jpeg doesn't encompass DELETE, what matters is
that hypertext instructed the client how to carry out the user request
by using application/xhtml+xml, which encompasses DELETE. That's REST.
-Eric
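Eric's XForms scenario can be sketched as well: the hypertext supplies both the candidate URIs and the instruction that DELETE is the submission method, and the client issues one DELETE per selected image, reporting success or failure for each. The form dict and the injected `send` connector below are invented stand-ins, not any real XForms API.

```python
# Sketch of the hypertext-driven delete: the form (hypertext) names
# both the method and the candidate URIs; the client submits one
# DELETE per selected image and reports per-image outcomes. The form
# dict and fake connector are invented stand-ins.

def submit_deletions(form, selected, send):
    """Issue one DELETE per selected URI; report per-image outcomes."""
    assert form["method"] == "DELETE"  # the hypertext names the method
    results = {}
    for uri in selected:
        if uri not in form["choices"]:
            continue  # only URIs the form offered may be submitted
        status = send("DELETE", uri)
        results[uri] = "deleted" if status in (200, 202, 204) else "failed"
    return results

IMAGE_FORM = {"method": "DELETE",
              "choices": ["http://server/bob/vacation.jpg",
                          "http://server/bob/wife.gif"]}
```

The contrast with Roger's sketch is exactly the point of the exchange: here the method comes from an in-band hypertext control, not from out-of-band knowledge baked into the client.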
Roger Gonzalez wrote:
> [quoted text trimmed]
Started replying, but deleted most of it. What is it that your proposal is bringing through UML diagrams that's going to help implementers in the wild understand how to do all this stuff? Why is getting away from implementations and driving towards modelling helping? I'm sure there's a compelling core scenario that your proposal solve, I just don't find it in the needs my community is exhibiting. Seb -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Eric J. Bowman Sent: 24 December 2009 01:19 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Rebooting REST There is no "best" architecture, there is only the architecture that is best for your system. All quotes are by Roy, taken from the comment thread here: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven The terms Modeling, Visualization, Analysis, Implementation and mapping are taken from the "Software Architecture: Foundations, Theory, and Practice" textbook. All jargon not covered in that book is defined in Dr. Fielding's dissertation. It is my belief that following a disciplined approach to REST development, where the focus is on resource modeling, may result in a system that falls short of REST. Yet, due to its nature as a distributed hypermedia, the result may be a beneficial and proper architecture for that system. " The purpose of resource modeling is to figure out what resources you have that are worth identifying, representing, and manipulating. " I would like to offer a formal definition for Resource Oriented Architecture (ROA) as an umbrella style for a plethora of architectures which do not apply the full set of REST constraints, and therefore cannot be considered REST. (I considered Data Oriented Architecture, but that idea was DOA... I'm open to suggestions. :-) The ROA styles are derived from REST and may be deemed REST-derived, REST-inspired, REST-oriented or RESTish, but not REST or RESTful. 
" That doesn't mean that I think everyone should design their own systems according to the REST architectural style. REST is intended for long-lived network-based applications that span multiple organizations. If you don't see a need for the constraints, then don't use them. That's fine with me as long as you don't call the result a REST API. I have no problem with systems that are true to their own architectural style. " The most important concept in REST is that systems meet the constraints of the underlying architecture of the Web, allowing GET to be optimized to the fullest extent possible. Roy has identified the native Web architecture as the client-cache-stateless-server set of constraints. These constraints are inviolable to any ROA style, as would be the identification of resources constraint -- no query-driven resource matrices or RPC here (unless you're using Roy's very strict definition of RPC, vs. mine, which is any endpoint that's only intended as a POST handler). " Query is not a substitute for identification of resources. " ROA defines the following as optional constraints of the style: layered system, code-on-demand, manipulation of resources through representations, self-descriptive messages, and hypermedia as the engine of application state. This results in (I think) 36 allowable architectural styles, compared with 2 in REST (REST or REST+CoD). I've chosen the minimal set of constraints which allow GET to be optimized for scaling on the Web. " REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don't think they need to design past the current release. 
There are more than a few software methodologies that portray any long-term thinking as wrong-headed, ivory tower design (which it can be if it isn't motivated by real requirements).
"

If the system's real requirements don't lead to REST, so be it. However, any distributed hypermedia system ought to be able to be modeled as REST. Visualization and Analysis of a REST architectural Model doesn't have to lead to a full mapping of the Model in the Implementation. But, the absent mappings can at least be understood in terms of benefits and tradeoffs. If, over time, a benefit comes to outweigh its tradeoffs, the Implementation may be more fully mapped to the Model, rather than having to devise a new Model.

So the goal of a disciplined approach to REST development is the creation of a REST architectural Model to guide system Implementation. There is no requirement that the resulting Implementation contain a full mapping to the Model. If it does, it's REST. If it does not, it is at least an ROA architecture derived from (or inspired by, if Roy prefers ;-) REST. The resulting system remains true to its architectural style, and has somewhat of a blueprint to guide further development in the right direction. At least the resulting system is capable of optimizing the hell out of GET.

By rebooting REST, I mean that discussions on this list should be less implementation-oriented and more resource-modeling-oriented. Everyone's assignment over the Holidays is to devise a REST architectural Model for the good ol' shopping cart problem. I use bubble charts, but think I should give UML a try. If folks pitch in on this, the conversation achieves two results. First, a consensus approach to formally modeling REST resources (perhaps using UML, perhaps not). Second, a consensus architectural Model of a REST shopping cart.

I tend to see the shopping-cart problem in terms of tabular data, so I'm likely going to want to Implement my shopping cart using XHTML.
Henry Story tends to see the shopping-cart problem in terms of RDF tuples, so he'll likely choose another media type. Both approaches are valid, so a Model should be agnostic to different approaches taken at the Implementation level. I have my own Visualization and Analysis which leads me to Implement an XHTML shopping cart, whereas Henry has his own Visualization and Analysis which leads him to Implement an RDF shopping cart.

"
A distributed queue is an implementation choice. You can certainly implement some applications by having them interact with a queue-like resource in a RESTful manner. However, if your client relies on the resource being a queue, then it certainly isn't a RESTful API. Do you see the difference? Encoding knowledge within clients and servers of the other side's implementation mechanism is what we are trying to avoid.
"

Such a Model then becomes what people seem to want so badly from REST: a reference Implementation. Except what we actually give them is a reference Model. Anyone can post their take on an Implementation of that Model, and the results can be discussed in terms of how well the Implementation maps to the Model. Any Implementation on the Web may be linked to and discussed in the same terms. The URIs will likely all be very different, with less variation in media type selection, and even less variation in method selection.

Instead of continuing down the same path of describing REST in terms of Implementation, which has obviously failed, the conversation is changed to one of how well an Implementation maps to a known Model in terms of benefits and tradeoffs. The goal is to teach how to Implement the REST architectural style guided by a Model. All efforts at REST are doomed to fail if resources are not Modeled properly before URIs are defined. All efforts at REST which _do_ begin with properly Modeling resources are doomed to succeed, so long as the Implementation stays true to the resulting ROA architectural Model.
If anyone follows what I'm getting at.

People misconstrue the declaration of "Not REST" to be a value judgment against the system. Personally, I only mean it as a value judgment on the selection of a buzzword where it does not apply. When I state that the Talis n2 API is "Not ROA" then, yeah, I'm passing a value judgment against the system, for its failure to allow GET optimization at all. The same goes for all systems that don't meet the client-cache-stateless-server and identification of resources constraints. They fail to leverage the native Web architecture for one, and fail to even approach REST by neglecting to properly identify resources.

Deriving an approach to REST which degrades gracefully to result in an ROA style, while emphasizing the use of standard media types, will result in REST motivating the growth and interoperability of the Web. Which is better than the current state of affairs, where failure to understand REST is resulting in fragmentation of the Web. Only if this effort, and subsequent efforts at changing the conversation, fail, will the meme "REST is hard to learn" be proven. All we know for a fact is that the current conversation has not succeeded.

There is no "best" architecture, there is only the architecture that is best for your system (and it better optimize the hell out of GET).

-Eric

(Merry Christmas and/or Happy Holidays, everyone!)

------------------------------------ Yahoo! Groups Links
D'oh!!! How'd we manage to overlook negotiating based on the username provided by HTTP-Digest headers? This is the mechanism by which REST applications may be personalized, without the use of cookies or user-specific URIs, and it's easy to change cache-control from public to private in the response.

-Eric

Craig McClanahan wrote:
>
> On Wed, Dec 23, 2009 at 11:52 AM, Eric J. Bowman wrote:
> >
> > Will Hartung wrote:
> > >
> > > What other mechanisms are there available for content
> > > negotiation? The standard ones seem to all be on mime type.
> > >
> >
> > The most common implementation of content negotiation is for
> > compression; the Accept-Encoding request header will typically
> > contain tokens, i.e. "Accept-Encoding: GZIP, DEFLATE".
> >
>
> With HTTP, the "Accept-Charset" and "Accept-Language" headers can
> also be used in a similar manner, for content negotiation on the
> acceptable character set and language.
>
> > -Eric
>
> Craig
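The Accept-Encoding negotiation mentioned in the message above can be sketched in a few lines. This is a minimal, simplified server-side sketch (no handling of malformed q-values or `*` wildcards); the supported-encodings list is an assumption for illustration.

```python
# Sketch of server-side negotiation on Accept-Encoding with q-values.
# Simplified: no error handling for malformed headers, no "*" wildcard.

def parse_accept_encoding(header):
    """Parse 'gzip;q=0.8, deflate' into a list of (encoding, q) pairs."""
    prefs = []
    for part in header.split(","):
        token, _, params = part.strip().partition(";")
        q = 1.0
        if params.strip().startswith("q="):
            q = float(params.strip()[2:])
        prefs.append((token.lower(), q))
    return prefs

def choose_encoding(header, supported=("gzip", "deflate", "identity")):
    """Return the supported encoding with the highest client preference."""
    prefs = parse_accept_encoding(header)
    candidates = [(q, enc) for enc, q in prefs if enc in supported and q > 0]
    if not candidates:
        return "identity"  # no overlap: fall back to an unencoded response
    return max(candidates)[1]

print(choose_encoding("GZIP, DEFLATE"))  # token matching is case-insensitive
print(choose_encoding("deflate;q=0.5, gzip;q=0.9"))
```

A real server would also emit `Vary: Accept-Encoding` so caches key on the negotiated dimension.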
That's called authentication and not negotiation. Personalized representations are not variants. The best practice on the web is to use variants when the information content in variants is the same, but they differ in how that information is encoded. You can "Vary" by "Authorization" for cacheability's sake, but its practical benefits are almost nil due to the way Authorization headers are computed (except in the case of Basic auth).

Subbu

On Dec 23, 2009, at 6:34 PM, Eric J. Bowman wrote:

> D'oh!!! How'd we manage to overlook negotiating based on the username
> provided by HTTP-Digest headers? This is the mechanism by which REST
> applications may be personalized, without the use of cookies or user-
> specific URIs, and it's easy to change cache-control from public to
> private in the response.
>
> -Eric
On Wed, Dec 23, 2009 at 6:11 PM, Sebastien Lambla <seb@...> wrote:
> Started replying, but deleted most of it.
>
> What is it that your proposal is bringing through UML diagrams that's going
> to help implementers in the wild understand how to do all this stuff?
>
> Why is getting away from implementations and driving towards modelling
> helping?
>
> I'm sure there's a compelling core scenario that your proposal solves, I just
> don't find it in the needs my community is exhibiting.

Because REST is an architecture, and an implementation is a manifestation of an architecture. As one example he gave, two different designers come up with two separate implementations of the same model -- one via RDF, another through XHTML. But the underlying design, ideally, is the same. He's suggesting something like UML to pull folks out of posting HTTP transactions to "model" the system. HTTP, again, is an implementation detail (granted, popular, ubiquitous...).

People learn things in different ways. One way is to have someone tell them the theory, which they can then apply to validate their interpretations. Arguably, that's what many have been doing with Roy's thesis. Others want to see the theory manifested through examples, and from those examples learn the theory by seeing its application. There are some people you can tell all day long what the theory is, from a high level, and they just... don't... get it. At all. They don't "work" at that level. Show them a dozen examples, and their subtle variations, and these people can see the "theory" shake out from the common elements they encounter, and apply it.

I think it will be a great experiment to see REST applied from the Model point of view rather than discussing the merits or interpretation of RFC 1234 sub-paragraph 87(c). Then we can learn how others may well implement the model, and how they may vary. We may well learn that a Shopping Cart simply is not a good example of a REST-based system. Who knows.
I think it's a sound plan. Regards, Will Hartung (willh@...)
Eric - while the example you give about Xforms is valid, I would not stretch that to say that a server that allows clients to DELETE a resource based on some information not conveyed in a prior representation is not RESTful.
Subbu
On Dec 23, 2009, at 5:33 PM, Eric J. Bowman wrote:
> I have no idea if your proposed use of DELETE is RESTful or not, as you
> have not given me any notion of the specification of application/assets+
> json. Does the media type definition encompass DELETE? Even if it
> does, you still aren't instructing the client how to DELETE an image
> through a hypertext representation. You're relying on out-of-band
> knowledge hard-coded to a client's DELETE facility, the media type
> isn't defining any sort of selection mechanism or button to push to
> drive application state.
>
> It is trivial in Xforms to create a standard listbox of image URIs,
> allowing one or more to be selected, with a DELETE button for the user
> to press when the selection of images to remove is complete. Each
> image is removed with a separate HTTP DELETE and success/failure is
> reported back to the user for each image selected for removal. It does
> not matter that image/jpeg doesn't encompass DELETE, what matters is
> that hypertext instructed the client how to carry out the user request
> by using application/xhtml+xml, which encompasses DELETE. That's REST.
>
> -Eric
>
> Roger Gonzalez wrote:
>>
>> Eric J. Bowman wrote:
>>> Roger Gonzalez wrote:
>>>
>>>> It strikes me as
>>>> inefficient and unnecessarily revisionist to force every "dumb"
>>>> media type to be wrapped and manipulated through a hypertext proxy
>>>> resource.
>>>>
>>>>
>>>
>>> This isn't revisionist; it's part of the REST architectural style.
>>> Of course it's less efficient; the decision is whether or not the
>>> benefits of the style outweigh the consequences:
>>>
>>> "
>>> The trade-off, though, is that a uniform interface degrades
>>> efficiency, since information is transferred in a standardized form
>>> rather than one which is specific to an application's needs.
>>> "
>>>
>>> If this degradation in efficiency outweighs the benefits of the
>>> uniform REST interface for your system, then don't apply this
>>> constraint.
>>>
>>> -Eric
>>>
>> Your selective quotation of my message misses my point. I'm not
>> talking about the uniform interface between components (as Roy is,
>> which I agree is valuable), I'm talking about your assertion that the
>> uniform interface does not apply to a resource whose client-side
>> representation is not "hypertexty", and that DELETE on an image isn't
>> legal. I believe that it does apply, and that it is legal.
>>
>> I feel you're overreading "hypertext is the engine" as "every
>> representation must be hypertext". I strongly believe that it's
>> perfectly legitimate to have initial hypertext point you to
>> absolutely any resource, even those with a dumb media type, and you
>> should be allowed to interact with that resource using the uniform
>> interface. It's a leaf node in the hypertext graph. There are layers
>> of contract involved; the per-resource contract (what control
>> messages will this particular resource accept; i.e. what content
>> types) but then there are higher-level contracts (what operations can
>> be performed on a given resource).
>>
>> In an OO sense, given a Collection<User>, it wouldn't make sense for
>> User to have to implement DeletableFromCollections in order to be
>> deleted. That actually interferes with abstraction; the User
>> shouldn't even need to know whether it is ever in a collection, or
>> sitting around as a temp object, or whatever. Same goes for
>> resources. The confusion is that we think we're talking directly to
>> the resource, but we really aren't, we're talking through an agent.
>>
>> PUT: hey agent, here's a resource representation, m
>>
>> Roy sez: "REST connectors provide a generic interface for accessing
>> and manipulating the value set of a resource."
>>
>> Therefore, there is nothing wrong with:
>>
>> GET /bob/assets
>> application/assets+json
>> {
>> "assets": ["http://server/bob/vacation.jpg",
>> "http://server/bob/wife.gif"]
>> }
>> DELETE /bob/vacation.jpg
>>
>> The resource "/bob/assets" acted as the hypertext engine, linking me
>> to a bunch of image resources, and I sent a control message to the
>> http connector requesting that one of them be deleted. The media
>> type of the resource I deleted was frankly irrelevant. The notion
>> that I can't delete an image/jpg off a server until someone creates a
>> new "hypertext image" format that advertises its own contract is what
>> I object to. (Besides, image/jpg is just a representation, the
>> backing resource is anonymous to the client, so I fail to see why the
>> "manipulating the value set of a resource" has to do with one
>> representation rather than the more important aspect: the uniform
>> interface for the connector.)
>>
>> -rg
>>
>
>
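The application/assets+json exchange quoted above can be sketched as client logic. This is a hedged sketch: application/assets+json is the hypothetical media type from the thread, the URIs are the ones in the quoted example, and the transport is stubbed out. The point being illustrated is that the client hard-codes only the media type's rules ("assets" is a list of URIs), then issues DELETE against a URI discovered in-band.

```python
import json

# The representation quoted in the thread, served as application/assets+json
# (a hypothetical media type used there for illustration).
REPRESENTATION = """{
  "assets": ["http://server/bob/vacation.jpg",
             "http://server/bob/wife.gif"]
}"""

def links_from_assets(body):
    """Per the hypothetical media type spec: 'assets' is a list of URIs."""
    return json.loads(body)["assets"]

def delete_request(uri):
    """Build the request line for a DELETE on a URI discovered in-band."""
    host, _, path = uri.partition("//")[2].partition("/")
    return f"DELETE /{path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

# Follow the "hypertext engine": GET the collection, pick a link, DELETE it.
targets = links_from_assets(REPRESENTATION)
print(delete_request(targets[0]))
```

Whether this counts as hypertext-driven is exactly the dispute in the thread: the link comes from the representation, but the knowledge that DELETE applies does not.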
"Eric J. Bowman" wrote: > > D'oh!!! How'd we manage to overlook negotiating based on the username > provided by HTTP-Digest headers? This is the mechanism by which REST > applications may be personalized, without the use of cookies or user- > specific URIs, and it's easy to change cache-control from public to > private in the response. > In the Atom Protocol implementation I described in my thread, content negotiation is based on username, but responses are not personalized. They are role-based. User role determines the available state transitions in the representation, excluding the interface to methods not allowed for their role. All variants are the same media type, but cache-control remains set to public. Conneg only varies on the Authorization header, media type doesn't enter into the equation. -Eric
Subbu, please respond to my response that I sent before I read this reply. Thanks, Eric
Subbu Allamaraju wrote: > > Eric - while the example you give about Xforms is valid, but I would > not stretch that to say that a server that allows clients to DELETE a > resource based on some information not conveyed in a prior > representation is not RESTful. > What you're suggesting doesn't apply the hypertext constraint, it relies on out-of-band information to create a function in client chrome, a definite coupling compared to hypertext-driven DELETE. -Eric
Stefan Tilkov wrote:
>
> I fully agree with you regarding the role of media types, and the
> lack of importance they've been assigned in the past (with myself
> being guilty of this, too). I also think the single-system approach
> is the wrong one.
>

Yeah, it only took about a year for the ramifications of Roy's blog post to fully sink in, as it stood my understanding of REST on its head for the first time in a long time (and I hope the last time). The difficulty of coming to grips with how wrong I've been in the past impeded my ability to come to grips with what Roy was saying. I wasn't far off, but I was failing to recognize the importance of standard media types in both the derivation and application of REST.

I think the best thing Roy can do for REST at this point is to set Waka aside for a bit and publish the "missing chapter" on media type development. Surely it exists as notes, if not in draft form, Roy? Pretty, pretty please with sugar on top, dredge it up and dust it off, I'm dying to read it!

>
> I only disagree that despite the best of intentions, people who
> create something new (within their bounded context) for which there
> is no standard can't call their systems RESTful. But your "(or at
> least standardizable)" quote above suggests that we actually don't
> disagree that much after all.
>

This is the point I was trying to make earlier. My extension to Atom Protocol to allow for PATCH is absolutely worthless in the absence of a standardization effort. Which is why I have to make sure my proposal isn't restricted to implementing my feature, but defines something useful in a generic sense that's of value to others. That the REST style encourages this sort of thinking is a Good Thing. Until such time as a standard is accepted, I will refer to my system as REST* because the system only becomes RESTful at that point, as if by magic...
If there's no standardization effort involved, then the proprietary result fragments the Web and does not achieve the goals of REST, and as a consequence, cannot be called REST or even REST* because of the lack of intent to use standards, a key element of the style. -Eric
Please, folks. If you are supportive of this effort, contribution and constructive criticism is welcome in this thread. If you are dismissive of the effort and wish to offer any sort of criticism, please post that to the "REST isn't hard to learn, it's just taught wrong" thread, where it is on-topic. Thanks, Eric
"Sebastien Lambla" wrote: > > What is it that your proposal is bringing through UML diagrams that's > going to help implementers in the wild understand how to do all this > stuff? > Changing the conversation between Architects and Developers is required, since REST development is all about how well a system implements an architectural style. Resource modeling before URI modeling is, I think, of critical importance. > > Why is getting away from implementations and driving towards modelling > helping? > Who's to say? All I know is the current approach is an abysmal failure, so there's nowhere to go but up. Please post continuation or response to this debate in the other thread; I created this new thread precisely because the last one went off-topic into intense criticism. Separation of concerns, and all... Thanks, Eric
> What you're suggesting doesn't apply the hypertext constraint, it > relies on out-of-band information to create a function in client > chrome, a definite coupling compared to hypertext-driven DELETE. > > -Eric > > And why is it that the Xforms solution doesn't? How does my machine client know that a DELETE button has a certain meaning? Is it just implied because the word DELETE is used? I'm a little confused there.
"Eric J. Bowman" wrote: > > I tend to see the shopping-cart problem in terms of tabular data, so > I'm likely going to want to Implement my shopping cart using XHTML. > The reason I like to solve problems as tabular data, is it allows me to use standard media types which include HTML 4.01 tables. This markup is the state-of-the-art solution to the problem of human-readable and machine-readable tabular data: http://www.ferg.org/section508/accessible_tables.html The author is given a choice between different machine-readability (accessibility) algorithms. The chosen algorithm is machine- discoverable via introspection. Tabular data for a shopping cart may be additionally marked up using RDFa and the GoodRelations ontology, but none of this should be included in a Resource Model, I don't think. All I know going in, is that I want the most semantically-rich markup I can write to describe the tabular data inherent to a shopping cart application, using a standard media type like application/xhtml+xml or text/html. But these implementation details aren't the place to start, because I haven't modeled any resources to guide me. -Eric
Eb wrote:
>
> > What you're suggesting doesn't apply the hypertext constraint, it
> > relies on out-of-band information to create a function in client
> > chrome, a definite coupling compared to hypertext-driven DELETE.
> >
> And why is it that the Xforms solution doesn't? How does my machine
> client know that a DELETE button has a certain meaning? Is it just
> implied because the word DELETE is used? I'm a little confused there.
>

A machine client couldn't care less if the button is labeled "foo". It just has to introspect the Xforms "Model" markup, typically located in the document <head>, to understand exactly what happens when that button is pushed -- one or more resources in a defined collection has its DELETE method individually called in sequence. It's a self-documenting API that's easy for a machine client to interpret.

-Eric
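The introspection Eric describes can be sketched with a toy parser. Caveat: the fragment below is a simplified, hypothetical XForms-like model, not complete XForms 1.1 markup, and the client logic is a sketch; the part a real XForms processor honors is the submission's method and target, regardless of the button's label.

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical XForms-style model in the document <head>.
# The submission's method/resource are what a machine client acts on.
DOC = """
<head xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:model>
    <xf:submission id="remove" method="delete"
                   resource="http://server/bob/vacation.jpg"/>
  </xf:model>
</head>
"""

XF = "{http://www.w3.org/2002/xforms}"

def discover_submissions(doc):
    """Map submission ids to (HTTP method, target URI) found in the model."""
    root = ET.fromstring(doc)
    return {
        sub.get("id"): (sub.get("method").upper(), sub.get("resource"))
        for sub in root.iter(XF + "submission")
    }

print(discover_submissions(DOC))
```

The button in the body merely references the submission by id, so the label ("foo" or "DELETE") carries no machine semantics; the model does.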
Will Hartung wrote: > > We may well learn that a Shopping Cart simply is not a good example of > a REST based system. Who knows. > More likely, we discover that there isn't one uber-model of a REST shopping cart, but we settle on Model A and Model B or something. > > I think it's a sound plan. > Thanks, Will. -Eric
Now here's an interesting thread:

http://lists.w3.org/Archives/Public/public-lod/2009Nov/0124.html

Implementing content negotiation based on an experimental X-Accept-Datetime header could be done in a way that qualifies as REST* pending acceptance of some sort of standard that adds the Accept-Datetime header to HTTP, at which time the '*' may be removed.

-Eric
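Datetime-based negotiation of the kind that thread discusses can be sketched as: the server holds timestamped snapshots of one resource and serves the latest snapshot at or before the requested time. Everything here is an assumption for illustration: the snapshot URIs, the in-memory store, and the ISO 8601 header value format.

```python
from datetime import datetime

# Hypothetical archive: snapshot timestamp -> URI of that representation.
SNAPSHOTS = {
    datetime(2009, 1, 1): "http://archive/page/20090101",
    datetime(2009, 6, 1): "http://archive/page/20090601",
    datetime(2009, 11, 1): "http://archive/page/20091101",
}

def negotiate_datetime(accept_datetime):
    """Return the snapshot URI for the newest version not after the request,
    or None (a 406, say) if nothing that old exists."""
    wanted = datetime.fromisoformat(accept_datetime)
    eligible = [ts for ts in SNAPSHOTS if ts <= wanted]
    if not eligible:
        return None
    return SNAPSHOTS[max(eligible)]

print(negotiate_datetime("2009-07-15"))
```

A production version would also set a `Vary` header naming the experimental request header, so caches keep the time-specific variants apart.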
But if DELETE is part of the well-known REST uniform interface, why can it not be used on its own? It is out-of-band information, but that is OK as part of the definition of the interface. I remember reading, I think, an answer from Roy that started with "of course there is out-of-band information...". There will always have to be some kind of out-of-band information...

Also, what is wrong with client/server coupling if you are using a method of the uniform interface? It is not to be expected that the uniform interface verb that now is defined by DELETE will be changed to something else by the server, as the server can do, for example, for URI structures.

And also, what you say suggests that the same resource whose representation can be retrieved in, say, mediatype/a and mediatype/b can be deleted when retrieved with a and not with b, depending on those media types' definitions? I feel it should be a "property" of the resource to be or not to be deletable, and not a "property" of the media type. Did I understand correctly?

2009/12/24 Eric J. Bowman <eric@...>
>
> Subbu Allamaraju wrote:
> >
> > Eric - while the example you give about Xforms is valid, but I would
> > not stretch that to say that a server that allows clients to DELETE a
> > resource based on some information not conveyed in a prior
> > representation is not RESTful.
> >
>
> What you're suggesting doesn't apply the hypertext constraint, it
> relies on out-of-band information to create a function in client
> chrome, a definite coupling compared to hypertext-driven DELETE.
>
> -Eric
>> > ... REST requires that hypertext be used to make these
>> > instructions to the client explicit, so Atom Protocol has a REST
>> > mismatch.
>>
>> I've never seen such a requirement and it's not clear how that
>> resolves with Roy's comment below?
>>
>
> The requirement is called the hypertext constraint.

You're getting more mileage out of the hypertext constraint than I am. Suppose your interpretation is correct and the allowable methods must be explicitly declared in the hypertext - what architectural value (property evoked) does that provide over just providing a link to a resource with it implied that all methods are valid, but perhaps not allowed? And is that worth the cost of not being able to re-use many (any?) existing media types?

>> "HTTP operations are generic: they are allowed or not, per resource,
>> but they are always valid. Hypertext doesn't usually tell you all the
>> operations allowed on any given resource; it tells you which operation
>> to use for each potential transition."
>>
>
> Putting on my Roy Decoder Ring, and using my Atom Protocol example --
> you dereference a resource, which allows GET, PUT, POST, PATCH and
> DELETE. But, due to your role, the representation you receive may only
> tell you about GET and POST operations you may use for each potential
> transition. DELETE is always there in the HTTP generic interface, but
> it only becomes part of a uniform REST interface if the client is told
> of the potential DELETE transition using hypertext.

Thanks Eric, it's taken 3 tries but this last sentence is starting to resonate. I'll have to ponder this one a bit - I'm not sure why I've limited my view of hypertext-driven state transitions to be transitions between resources (e.g. GET) and haven't fully considered other states that might also be driven from hypertext.

Thanks,
--tim
2009/12/23 António Mota <amsmota@...>:
> 2009/12/23 Tim Williams <williamstw@...>:
>
>> I have personally never
>> felt the need to map my uniform methods to another communications
>> protocol - is anyone really doing that?
>
> Off-topic, and irrelevant to this discussion thread, but yes, someone
> is really doing that...

Since it's my topic (I changed the subject to ask my question), it would seem that I know best what bit of clarification I was originally seeking. And so, I declare it to be on-topic and completely relevant :) Therefore, feel free to point out some examples of "uniform interface methods" transcending a particular communications protocol.

--tim
On Dec 23, 2009, at 6:48 PM, Eric J. Bowman wrote: > In the Atom Protocol implementation I described in my thread, content > negotiation is based on username, but responses are not personalized. > They are role-based. User role determines the available state > transitions in the representation, excluding the interface to methods > not allowed for their role. All variants are the same media type, but > cache-control remains set to public. Conneg only varies on the > Authorization header, media type doesn't enter into the equation. As I replied before, this is not negotiation. This is authentication. In authentication schemes like digest and OAuth, Authorization headers are based on nonces, and hence varying by Authorization header (with CC: public) leads to poor cache hit ratio. So, I am curious to see how well this worked with shared caches. Subbu
On Dec 23, 2009, at 7:13 PM, Eric J. Bowman wrote:

> Until such time as a standard is accepted, I will refer to my system as
> REST* because the system only becomes RESTful at that point, as if by
> magic... If there's no standardization effort involved, then the
> proprietary result fragments the Web and does not achieve the goals of
> REST, and as a consequence, cannot be called REST or even REST* because
> of the lack of intent to use standards, a key element of the style.

Sorry, but I must say that this is a fallacious approach for building networked applications. The goal of building *completely* decoupled applications (with *no* out-of-band knowledge) is as unsound as the "local-is-remote" approach that RPC tried. The set of problems that demand that degree of decoupling is small, and extending that notion to every application (to be branded RESTful to satisfy a particular interpretation of REST) is prohibitively expensive. Even in the case of the web, where things seem to work in an autonomous fashion, we need a "human" user to guess the semantics and drive the hypermedia engine for every application.

Let's say we start with your assertion that (a) everything must be communicated in-band, and (b) the media type must be standard. Let's apply this to Flickr as an example. To completely isolate clients from using any out-of-band information, you will have to come up with a "specific" format and a media type for all representations (or some fragments thereof) in Flickr, and then standardize them. We will then repeat this process all over again for every application that wants to be RESTful. The net result is a zillion standards. The paradox of this approach is that a zillion standards are as good as no standards. It does not matter whether such standardization effort is focused on media types or formats or taxonomies.
A more reasonable thing to do is start off with some standard media types and formats, and mix them up with some conventions to make representations specific to each application domain. For example, using an atom:link in XML is a convention. Using RFC 3339 for dates and times is another convention. Mixing RDFa and microformats with HTML and XHTML is yet another convention. Such conventions don't aim to eliminate coupling, but they reduce the amount of specificity, leave room for evolution, and promote interoperability.

Subbu
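The atom:link convention mentioned above can be sketched concretely: application-specific XML reuses the Atom link element so a generic client can discover links without knowing the rest of the vocabulary. The surrounding `account` document, its rels, and its URIs are invented for illustration; only the Atom namespace is real.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A hypothetical domain-specific document that follows the atom:link
# convention for its links while keeping its own vocabulary elsewhere.
DOC = """
<account xmlns:atom="http://www.w3.org/2005/Atom">
  <number>010123101</number>
  <atom:link rel="owner" href="http://bank.example/person/101"/>
  <atom:link rel="self" href="http://bank.example/accounts/010123101"/>
</account>
"""

def links_by_rel(doc):
    """Collect rel -> href from any atom:link elements, wherever they appear.
    A client needs only the Atom link convention, not the account vocabulary."""
    root = ET.fromstring(doc)
    return {el.get("rel"): el.get("href") for el in root.iter(ATOM + "link")}

print(links_by_rel(DOC)["owner"])
```

This is the middle ground the message argues for: a standard fragment (atom:link) reduces coupling without requiring a new standardized media type per application.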
Subbu Allamaraju wrote:
>
> On Dec 23, 2009, at 6:48 PM, Eric J. Bowman wrote:
>
> > In the Atom Protocol implementation I described in my thread,
> > content negotiation is based on username, but responses are not
> > personalized. They are role-based. User role determines the
> > available state transitions in the representation, excluding the
> > interface to methods not allowed for their role. All variants are
> > the same media type, but cache-control remains set to public.
> > Conneg only varies on the Authorization header, media type doesn't
> > enter into the equation.
>
> As I replied before, this is not negotiation. This is authentication.
> In authentication schemes like digest and OAuth, Authorization
> headers are based on nonces, and hence varying by Authorization
> header (with CC: public) leads to poor cache hit ratio. So, I am
> curious to see how well this worked with shared caches.
>

It didn't work at all, at first. However, if the intermediary is instructed that it must-revalidate, it makes a HEAD request to the origin server, the response from which has a Content-Location header containing an URL. If that resource is cached, the intermediary serves it to the requesting client. This is negotiation based on authentication, which comes with the tradeoff of requiring must-revalidate.

-Eric
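The intermediary behavior Eric describes can be sketched as follows. This is a simulation under stated assumptions: the cache contents, the username-to-role mapping, and the variant URLs are all invented; the flow shown is only the must-revalidate HEAD followed by a cache lookup on the origin's Content-Location.

```python
# Hypothetical cache of variant URL -> stored representation.
CACHE = {"http://example/feed;role=editor": "<editor variant>"}

def origin_head(authorization):
    """Origin maps the authenticated user to a variant URL (invented logic:
    only user 'ed' is an editor; everyone else gets the reader variant)."""
    role = "editor" if authorization.startswith('Digest username="ed"') else "reader"
    return {"Content-Location": f"http://example/feed;role={role}"}

def intermediary_get(authorization):
    """must-revalidate: HEAD the origin, then serve the named variant from
    cache if held; otherwise a real cache would forward the full GET."""
    variant = origin_head(authorization)["Content-Location"]
    if variant in CACHE:
        return ("HIT", CACHE[variant])
    return ("MISS", None)

print(intermediary_get('Digest username="ed", nonce="abc"'))
```

The tradeoff Eric concedes is visible here: every request costs a HEAD round-trip to the origin, but the body itself can still be served from the shared cache.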
António Mota wrote: > > But if DELETE is part of the well-known REST uniform interface, why > cannot be used on it's own? It is out-of-band information, but that > is OK as part of the definition of the interface. I remembered > reading I think a answer from Roy that started by "of course there is > out-of-band information...". There will always have to be some kind > of out-of-band information... > I have extracted and posted lots of Roy's comments, in these two posts: http://tech.groups.yahoo.com/group/rest-discuss/message/14392 http://tech.groups.yahoo.com/group/rest-discuss/message/14388 You're after the second link, and the link to the original for all his comments at the top of the first link. Roy's quote about out-of-band information finishes as follows, and he clarifies further: " REST doesn't eliminate the need for a clue. What REST does is concentrate that need for prior knowledge into readily standardizable forms. That is the essential distinction between data-oriented and control-oriented integration. ... The media type identifies a specification that defines how a representation is to be processed. That is out-of-band information (all communication is dependent on some prior knowledge). What you are missing is that each representation contains the specific instructions for interfacing with a given service, provided in-band. The media type is a generic processing model that every agent can learn if there aren't too many of them (hence the need for standards). " The reason you can't DELETE willy-nilly despite DELETE being part of the generic (cross-protocol) interface (a key distinction from your description of it being automatically part of a uniform REST interface) is because a REST API must be hypertext-driven. The client MUST be instructed what to DELETE *in-band* within the hypertext representation. So your choice of media type for a hypertext-driven RESTful DELETE must encompass the use of the DELETE method. 
The media type of the target of deletion isn't relevant in a RESTful DELETE (but we know the target has a DELETE method to call because it's part of the generic interface of whatever URI scheme is used to identify the resource), only the media type of the hypertext representation driving the change in application state is relevant. The required prior knowledge for the client to perform a hypertext-driven RESTful DELETE comes from how fully the client implements various standard media types. Not all text/html clients understand HTML 5's use of DELETE, and not all application/xhtml+xml clients understand Xforms' use of DELETE. DELETE in application/xhtml+xml is standardized; Xforms 1.1 is a mature standard with Xforms 1.2 already under development, due to the success of Xforms 1.0 (granted, only behind the firewall, not on the Web). DELETE in text/html is readily standardizable; HTML 5 is nowhere near a mature standard, but its use in a REST system would at this time have to be called REST* by my definition because there's no way of knowing whether any specific use of HTML 5 will make the final draft. To obtain a uniform REST interface (which is, itself, a generic interface as Roy explains in his thesis), the constraint of hypermedia as the engine of application state must be applied. Meaning that the DELETE call itself must be in-band within the hypertext representation driving changes in application state. I can't stress that enough. In order for Firefox to fully implement the application/xhtml+xml media type requires the use of an Xforms plugin which allows the use of any method allowed by the protocol involved (different URI schemes may be used to identify resources, i.e. Firefox already knew how to do FTP GET, now it knows how to do FTP DELETE, same as with HTTP, while allowing HTTP PATCH to boot -- all nice and standardized by the plugin). 
Until the text/html media type is extended by HTML 5 to encompass DELETE, using text/html for anything other than GET or POST is nonstandard, yet clearly standardizable if the representation is HTML 5. I hope my verbose clarifications don't just cause more confusion. :-( You could just take Roy's authoritative comment at face value, substitute DELETE for POST, and read "posting to" as "deleting": " You don't get to decide what POST means -- that is decided by the resource. Its purpose is supposed to be described in the same context in which you found the URI that you are posting to. Presumably, that context (a hypertext representation in some media type understood by your client) tells you or your agent what to expect from the POST using some combination of standard elements/relations and human-readable text. The HTTP response will tell you what happened as a result... " All resources have a DELETE method, that's a given. What matters in REST is the context from which that method is called. In my example, the URIs I'm DELETEing are within context -- they appear in standard listbox <option> elements where the listbox has a delete button linked to a <model> defining its action as looping through the selected URIs, calling each target's DELETE method. This is the difference between a uniform REST interface, and a generic HTTP interface -- the hypertext constraint that Roy's up on his high horse about throughout the referenced weblog post and its comments: " In terms of testing a specification, the hardest part is identifying when a RESTful protocol is actually dependent on out-of-band information... What I look for are requirements on processing behavior that are defined outside of the media type specification. One of the easiest ways to see that is when a protocol calls for the use of a generic media type (like application/xml or application/json) and then requires that it be processed in a way that is special to the protocol/API. 
" Roy is using the term "processing behavior" in a way which I read to encompass protocol methods. The application/xml and application/json media types simply aren't capable of instructing a client how to DELETE anything, so if the client "just knows" how to DELETE those media types (or some target URL contained within), then the client is not being instructed by hypertext, and the API is coupling client to server through out-of-band information. The client MUST be instructed what target resource (of any media type) to DELETE by some in-band self-documenting hypertext API (such as that provided by using the Xforms vocabulary within the application/xhtml+xml media type, in my example) in order to achieve a uniform REST interface. > > Also, what is wrong with client/server coupling if you are using a > method of the uniform interface? It is not expectable that the > uniform interface verb that now is defined by DELETE will be changed > to something else by the server, like for example the server can do > for URI structures. > What's wrong is that you aren't making the distinction between the generic interface, and the specific subset of the generic interface defined by REST's uniform interface uber-constraint. The server may very well change over time how it handles a DELETE request on a collection, but in REST all representations are updated by the server, such that clients are instructed differently than before instead of breaking their hard-coded DELETE implementation mechanism. All the client needs to do is refresh its representation, instead of the client or developer needing to care that the server's DELETE implementation mechanism has changed regarding collections. " Do you see the difference? Encoding knowledge within clients and servers of the other side's implementation mechanism is what we are trying to avoid. 
" > > And also, what you say suggests that the same resource whose > representation can be retrieved in, say, mediatype/a and mediatype/b, > can be deleted when retrieved with a and not with b, depending on > those media types definition? I feel it should be a "property" of the > resource to be or not deletable, and not a "property" of the > media-type. > Being deletable _is_ a property of the resource, as determined by whether or not that resource implements the DELETE method, not by media type. It can even be expressed in an HTTP Allow: header. What matters is that a media type be used which is capable of instructing a client *how* and *what* to DELETE, by giving the client some hypertext representation that presents the target URL(s) and teaches it that deletion exists as an option for changing application state. The application is what the (human or machine) user is trying to accomplish, i.e. delete some resource(s). The application is transitioning from one steady-state to another. The first steady-state presents a collection of URLs that may be deleted. The next steady-state (assuming successful deletion) presents a shortened collection of URLs that may be deleted. (Or, the hypertext interface may define itself as the resource to be deleted, and this action may result in the removal of all variant representations. Or, the hypertext interface may define some specific variant of itself (by the variant's own URL) as the target for deletion -- regardless of that target's media type.) The number of transitional states is proportional to the number of resources selected for deletion. 
These transitional states and the next steady-state do not require their own URIs, the "page" is dynamically updated -- a success response to an individual DELETE causes a resource to be removed from the list the user is viewing, while failure causes the resource to remain in the list (a machine is more interested in having the HTTP response codes present it with transitional-state knowledge). However, if other users are capable of deleting resources, the "page" will need to be refreshed to obtain an accurate representation of resource state (application state is on the client; resource state is on the server -- they are not the same). Or, to liberally (and hopefully, correctly) re-phrase Roy: " You don't get to decide what DELETE means -- that is decided by the resource (which may be a collection whose DELETE implementation functions as a bulk-DELETE of all members, or just results in the deletion of only the collection resource itself). DELETE's purpose is supposed to be described in the same context in which you found the URI that you are deleting (like a <form> presenting a list of URLs to delete and specifying the DELETE action method for a corresponding button). Presumably, that <form> tells you or your agent what to expect from the DELETE using some combination of standard elements/relations and human-readable text. The HTTP response will tell you what happened as a result... " HTH anyone trying to come to grips with the hypertext constraint. -Eric
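The "delete button linked to a <model>" arrangement Eric describes might look something like the following. This is a heavily simplified sketch, not a complete or validated XForms document: the URIs and instance shape are invented, a real XForms 1.1 form for multi-select bulk deletion needs considerably more machinery, and the exact attributes should be checked against the XForms 1.1 specification. The point it illustrates is that the DELETE method and its target travel in-band, inside the representation, rather than being hard-coded into the client:

```xml
<!-- Simplified, hypothetical XForms 1.1 fragment: the DELETE call is
     part of the hypertext.  URIs and structure are illustrative only. -->
<xf:model xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:instance>
    <target xmlns="">http://example.org/entries/1</target>
  </xf:instance>
  <!-- The submission declares the DELETE method; its target resolves
       against the URI the user selected in the form below. -->
  <xf:submission id="del" method="delete" resource="{.}" replace="none"/>
</xf:model>

<xf:select1 ref="." xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:label>Entry to delete</xf:label>
  <xf:item>
    <xf:label>Entry 1</xf:label>
    <xf:value>http://example.org/entries/1</xf:value>
  </xf:item>
</xf:select1>
<xf:submit submission="del" xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:label>Delete selected</xf:label>
</xf:submit>
```

A client that fully implements the media type needs no out-of-band knowledge of what may be deleted; refreshing the representation after a successful submission yields the shortened list.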
While there is a latency tradeoff, it's limited to logged-in users. The origin server, in absence of an Authorization header, doesn't return a challenge-response -- it returns the representation for the Unregistered Visitor role, with Vary: Authorization and Cache-Control: public, but does _not_ set must-revalidate. Intermediaries cache a public representation for the case of no Authorization header. The system is expected to have far more random traffic than logged-in traffic, so this is where I care the most about cache hit ratio without the latency induced by must-revalidate. My fallback method would be to set a short-expire cookie indicating user role, and Vary: Cookie. The representations themselves don't have to be secure, but of course PUT, POST, PATCH and DELETE requests are restricted by Authorization header. -Eric "Eric J. Bowman" wrote: > > Subbu Allamaraju wrote: > > > > On Dec 23, 2009, at 6:48 PM, Eric J. Bowman wrote: > > > > > In the Atom Protocol implementation I described in my thread, > > > content negotiation is based on username, but responses are not > > > personalized. They are role-based. User role determines the > > > available state transitions in the representation, excluding the > > > interface to methods not allowed for their role. All variants are > > > the same media type, but cache-control remains set to public. > > > Conneg only varies on the Authorization header, media type doesn't > > > enter into the equation. > > > > As I replied before, this is not negotiation. This is > > authentication. In authentication schemes like digest and OAuth, > > Authorization headers are based on nonces, and hence varying by > > Authorization header (with CC: public) leads to poor cache hit > > ratio. So, I am curious to see how well this worked with shared > > caches. > > > > It didn't work at all, at first. 
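The header-selection policy in the paragraph above can be condensed into a small sketch. This is only an illustration of the described scheme, not production code; the function name is invented and the header values are pieced together from the description (anonymous traffic gets a freely cacheable public response, logged-in traffic pays the must-revalidate latency so role variants are never mixed):

```python
# Illustrative sketch of the role-based caching policy described above.
# Header names are real HTTP; the function and its shape are made up.

def response_cache_headers(request_headers):
    """Choose caching directives based on login state."""
    if "Authorization" in request_headers:
        # Logged-in: variants differ by role, so intermediaries must
        # revalidate (via HEAD + Content-Location) before serving a
        # shared cached copy.
        return {"Vary": "Authorization",
                "Cache-Control": "public, must-revalidate"}
    # Anonymous traffic gets the Unregistered Visitor representation,
    # cacheable without revalidation latency -- this is the bulk of
    # the expected traffic.
    return {"Vary": "Authorization", "Cache-Control": "public"}
```

The fallback Eric mentions would swap "Authorization" for "Cookie" in the Vary header, with a short-expire cookie carrying the role.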
However, if the intermediary is > instructed that it must-revalidate, it makes a HEAD request to the > origin server, the response from which has a Content-Location header > containing a URL. If that resource is cached, the intermediary > serves it to the requesting client. This is negotiation based on > authentication, which comes with the tradeoff of requiring > must-revalidate. > > -Eric >
Subbu Allamaraju wrote: > > On Dec 23, 2009, at 7:13 PM, Eric J. Bowman wrote: > > > Until such time as a standard is accepted, I will refer to my > > system as REST* because the system only becomes RESTful at that > > point, as if by magic... If there's no standardization effort > > involved, then the proprietary result fragments the Web and does > > not achieve the goals of REST, and as a consequence, cannot be > > called REST or even REST* because of the lack of intent to use > > standards, a key element of the style. > > Sorry, but I must say that this is a fallacious approach for building > networked applications. The goal of building *completely* decoupled > applications (with *no* out-of-band knowledge) is as unsound as the > "local-is-remote" approach that RPC tried. > Uh, look back at those quotes of Roy's I excerpted, as he makes the argument better than I can. The key to the REST style is that out-of-band information be encompassed within standard methods, media types, and link relations. This is what the self-descriptive messaging constraint is all about, isn't it? My shared understanding of well-known media types I see in HTTP headers tells me an awful lot about processing the payload without my knowing any specifics of the system; application/vnd.hypothetical tells me nothing unless I _do_ know the specifics of the system... " Do you see the difference? Encoding knowledge within clients and servers of the other side's implementation mechanism is what we are trying to avoid. " How does any non-standardizable media type that's tied to a specific implementation where each side has encoded knowledge of the other side's implementation mechanism qualify as "decoupled"? > > The set of problems that > demand that degree of decoupling is small, and extending that notion > to every application (to be branded RESTful to satisfy a particular > interpretation of REST) is prohibitively expensive. 
> The entire problem of the Web itself demands the decoupling illustrated by browsers evolving the capability of displaying inline images whose media types evolved one-at-a-time from image/gif through image/jpeg to image/png, despite years of constant radical change within the text/html media type's definition. Along came text/css and text/javascript (initially and enduringly as application/x-javascript) and syndication media types like RSS and Atom, then application/json. The whole fact that we can build cross-browser Rich Internet Applications today is due to a shared understanding of an evolving set of standard media types. Why wouldn't this phenomenon be key to the REST style? The REST style not only calls for standards-based evolution, it has been defined by what happened on the real-world Web -- the fact that the decoupling provided by standard media types is responsible for the spectacular growth and success of the Web. I'm going to have to throw another Roy quote at you: " REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency... Most don't think they need to design past the current release. " A shared understanding of well-known media types is what allows decoupling, decoupling is what allows independent evolution (as we've seen with browsers and what have come to be known as Rich Internet Applications), and every detail of REST (like standardization) is intended to promote this. Nowhere can anything I have said be construed as suggesting that REST be extended to every system. I am in fact saying the opposite. The purpose of starting a project by modeling resources and deriving a REST architectural model is to guide the development of the implementation. There may be no need for the initial release to contain a full mapping of the implementation to the model. 
Subsequent releases are guided by defining what additional mapping(s) to the model will be included in the revised implementation. The end result of achieving a REST system may take an amount of time best measured in years. Which is no big thing, because REST is meant to guide the development of systems whose lifespan may well encompass decades. But until all REST constraints in an architectural model are reflected in mappings to the implementation, the system cannot be considered REST, only a derivative of the style. Which doesn't necessarily matter. To say that an implementation lacks mappings to a REST architectural model is not to pass a value judgment against the system. It is meant to provide a measure by which to judge an implementation against the Platonic Ideal for distributed hypermedia systems. If the needs of the system at any point in time are being met by its implementation, then there is no need to map it to additional REST constraints, is there? The problem is that the needs of a system tend to change over time. Unanticipated growth could create the urgent need to apply another of REST's constraints. The disciplined approach is to create additional mappings to the architectural model in the implementation, which has hopefully allowed for this by the developers' recognition at some point in the past that growth requires change. Informed decisions to ignore constraints out-of-the-box but allow for their addition in a future release, can only be made for Web systems in terms of the benefits and tradeoffs of applying REST's constraints (as we lack any other vocabulary). My entire proposed model-centric approach to REST is to provide the basis for these informed decisions through the visualization and analysis of a REST architectural model and its implementation, as the system evolves over time. 
The current conversation on this list is the epitome of current-release design: starting by identifying the end result as REST and winging it from there, as reflected in the state of real-world REST APIs at this time, rather than starting by modeling resources and following an informed approach. Argument may be made that no new API needs full-on REST conformity out-of-the-box, therefore REST is more desirable as the goal of a release cycle. Since REST is just an architectural style, an abstract, REST must be modeled somehow in order to be used to guide that release cycle. > > Even in the case > of the web, where things seem to work in an autonomous fashion, we > need a "human" user to guess the semantics and drive the hypermedia > engine for every application. > No we don't. Advances like the GoodRelations ontology show us that it's possible to build a machine-readable interface for any number of specific shopping cart implementations. I go through socks pretty fast. I like 'em fresh, besides, I haven't seen anybody darn a sock since before my Grandmom died (although in these times, we may see a resurgence in sock darning). So I need a new six-pack of Sock-brand mid-calf white athletic socks in size XL delivered to my shack in the boonies every six weeks. I ought to be able to create an agent allowing me to enter (or search for) URIs of stores that sell Socks-brand socks online, at my convenience as configuration. Once every six weeks, my agent places an order at the store with the lowest combination of price and delivery charge, and once every six weeks the local UPS lady delivers new socks to my door. I don't care where or how the order was placed, where it was out-of-stock, or what the price variation was -- so long as my Socks arrive at regular intervals and I'm not overpaying for them. As Web technology continues to evolve, this becomes possible -- best-price cross-supplier automated resupply. 
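The agent's ordering decision is trivial once the data is machine-readable; the hard part is the shared ontology, which is exactly the point. A toy sketch of the selection step, with invented store data and field names (a real agent would harvest prices and delivery charges from GoodRelations markup in each store's representations):

```python
# Toy sketch of the resupply agent described above.  The store records
# and their field names are hypothetical; in practice they would be
# extracted from GoodRelations data in each merchant's pages.

def pick_store(stores):
    """Return the in-stock store with the lowest price + delivery charge."""
    in_stock = [s for s in stores if s["in_stock"]]
    if not in_stock:
        return None  # nowhere to order from this cycle
    return min(in_stock, key=lambda s: s["price"] + s["delivery"])
```

Run every six weeks against the configured store URIs, this picks whichever merchant is cheapest overall, skipping any that are out-of-stock.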
Despite the fact that each merchant likely has its own human-driven interface, the fact that the merchants have collectively accepted an ontology for use within standard media types is what makes it possible for agents to get what they need from the same representations intended for human manipulation. I'm a bachelor guy, so I'll be an early adopter of any technology that does my shopping for me. I hate not being able to find my specific brand and flavor/scent of toothpaste/deodorant, despite the Wal-Mart 20 miles away. So please don't tell me I'm doomed to forever having to actually *shop* for basic necessities, online or off... :-( > > Let's say, we start with your assertion that (a) everything must be > communicated in-band, and (b) the media type must be standard. Let's > apply this to Flickr as an example... > I'd rather stick with Dare Obasanjo's example I excerpted and linked to above in this thread, as it stayed on-topic to Roy's blog post. There clearly exist several takes on the problem of a Contacts API, none of which are RESTful, and all of which are vendor-specific. But there's nothing about a Contacts API that can't be implemented in the REST style. So there's no reason a REST architectural model of a Contacts API can't be derived from these existing efforts, for the purpose of guiding a standardization effort. I don't see how this leads to zillions of standards, but I do see how it leads away from the existing fragmentation to a Web where clients can easily interact with any vendor Contacts List through the same generic REST API instead of being hard-coded to support each vendor-specific non-REST Contacts API. This derived REST architectural model guides use and/or development of media types and link relations. A new media type would have to describe how methods are used. Any new link relation or application of an existing link relation would need to be described by the new media type. 
Or, the REST architectural model guides the development towards standard media types fleshed out with a Contact-API-specific ontology within, to allow more choice in media types and link relations within implementations. If vendors can agree on methods and media types, while each providing vendor-specific ontology within, interoperability between implementations is much easier for a net positive effect. Implementations derived from this model are the visualizations which are then analyzed for interoperability. The real-world analog would be the development of Atom Protocol, where analysis of the interoperability of evolving implementations led to revisions of the protocol. The evolving standard for a RESTful Contacts API is guided by the REST architectural model. A RESTful Contacts API built using experimental media types or link relations is considered REST* because the standardization effort exists, but since it's subject to change based on REST how can it be REST until it's finalized? Paradoxical, I know. Since the development of the new standard is guided by and evaluated against REST, the standard isn't complete until this is accomplished. So how can an implementation of an evolving standard be considered REST when the standards process itself hasn't yet come to that conclusion? Only when the experimental bits become standard, may the asterisk be removed, because only then is out-of-band information encompassed within a well-known media type and decoupling of clients and servers achieved. Experimental implementations being used to guide development of a new media type can't be REST because the self-descriptive messaging constraint is broken until the standard is finalized. Until then, clients are coupled to servers by the use of the nonstandard media type. There, I've officially run out of ways to re-state that... 
> > A more reasonable thing to do is start off with some standard media > types and formats, and mix them up with some conventions to make > representations specific to each application domain. For example, > using an atom:link in XML is a convention. Using RFC-3339 for dates > and times is another convention. Mixing RDFa and microformats with > HTML and XHTML is yet another convention. Such conventions don't aim > to eliminate decoupling, but they reduce the amount of specificity > but leave room for evolution and promote interoperability. > I don't understand how someone who wrote that paragraph can be disagreeing with me on the importance of standards to the self-descriptive messaging constraint (or the critical importance of that constraint to the REST style). You obviously understand the importance of self-documenting hypertext-driven APIs. I obviously understand it too, as written above and because I chose these quotes from Roy to post: " It has value because it is far easier to standardize representation and relation types than it is to standardize objects and object-specific interfaces. In other words, there are fewer things to learn and they can be recombined in unanticipated ways while remaining understandable to the client. ... Exposing that vocabulary in the representations makes it easy to learn and be adopted by others. Some of it will be standardized, some of it will be domain-specific, but ultimately the agents will have to be adaptable to new vocabulary. " So it seems to me that we're disagreeing on where to start. It is my belief that the logical starting place is the modeling of resources. 
Some minimal REST architectural model must exist before it can inform decisions about which media types to use, how to recombine them in unanticipated ways (like my use of application/atomcat+xml as a delta to PATCH categories within application/atom+xml), as well as where this approach falls short and calls for the extension or creation of a media type, and where to draw the line between standardized and domain-specific ontologies within a media type. There are too many possible implementations of any REST system to start with implementation details. -Eric
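On the wire, the atomcat-as-PATCH-delta recombination Eric mentions might look roughly like this. The pairing of PATCH with application/atomcat+xml is his own convention (RFC 5023 defines the category-document format but not this use of it), and the host, path, and category terms below are invented:

```http
PATCH /feed HTTP/1.1
Host: example.org
Content-Type: application/atomcat+xml

<app:categories xmlns:app="http://www.w3.org/2007/app"
                xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:category term="rest" scheme="http://example.org/tags"/>
</app:categories>
```

The delta semantics (whether the listed categories are added, replaced, or removed) would have to be defined by the server's documentation of the convention, which is precisely the kind of thing Eric argues should graduate into a standard before the system is called REST rather than REST*.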
"Eric J. Bowman" wrote: > > The end result of achieving a REST system may take an amount of time > best measured in years. Which is no big thing, because REST is meant > to guide the development of systems whose lifespan may well encompass > decades. But until all REST constraints in an architectural model are > reflected in mappings to the implementation, the system cannot be > considered REST, only a derivative of the style. Which doesn't > necessarily matter. > What I meant to say was, applied REST architecture is ideal for guiding the development, maintenance and upgrading of systems whose lifespan may well encompass decades. And of course, for a system to be a derivative of the REST style doesn't necessarily matter because it may well fit the needs of the system. There is no "best" architecture, there is only the architecture that is best for your system (at any given stage in its lifecycle). I don't mean to imply that a system's architectural model is fixed before it is ever implemented, it too is free to evolve over time. All we know at this point about HTTP 2.0 and Waka, is that following REST should allow us to cleanly upgrade a system when the time comes, while achieving graceful degradation for HTTP 1.x clients. My interest lies in learning applied software architecture in the REST idiom, driven by this uncertainty over the time horizon. Surely new protocols come with new benefits, but at what cost? Will I need to derive a new implementation from the same model, and maintain the old as legacy? I believe that this issue is best addressed through the disciplines of both REST and applied software architecture, to provide a formalism to inform system-design decisions over the long term. (The new "Software Architecture" reference is quite timely, since I've been thinking this way for months before it was published, as evidenced by the nature of my posts here over the course of '09.) 
So my questions are now like, how do I account for protocol evolution in a REST system, using the modeling-driven approach of applied REST architecture? I've already figured that URI allocation scheme is implementation-specific. But perhaps URI scheme belongs in the Model? Transitioning to one or more new protocols (in the URI-scheme sense) over time may be made to degrade gracefully, by running two separate implementations side-by-side, adding, say, waka:// to an http:// system (1 or 2). Or can I upgrade my implementation to cleanly account for the new protocol? The URI allocation scheme likely is the same for all, just as it can be the same for ftp://. So URI scheme is a resource property, and likely does belong in the Model. This question, of side-by-side deployment or not, motivates my pursuit of applied software architecture in the REST idiom, whether my system is RESTful or not... The REST-derived (ROA) system I'm developing doesn't need all the REST constraints out-of-the-box. But if it succeeds on the Web, its nature as a distributed hypermedia system will make REST a design imperative (I plan for success in everything I do) that must be achieved over time, while at the same time the system is evolving to include new features... at some point a new protocol comes along, and I won't want to shut the system down for upgrades any more than Google does... I don't see any way I can juggle all those balls over the long term, without having some kind of reference to guide me. What reference exists to solving these problems? REST and...??? Since REST is just an abstract, an architectural style not an implementation guide, there must be some method of following it. This method is called applied software architecture, and the best textbook I've seen on the subject provides a formal approach to Modeling, Visualization, Analysis, Implementation, and Deployment of a system derived from a known architectural style, like REST. 
It indeed utilizes REST to illustrate applied software architecture (as a guide to developing architectural styles, though, not system architectures). The textbook may be used as source material to develop not a "What Is" for REST, but a "How To" for REST using common concepts and vocabulary. Since the benefits of REST aren't worth the tradeoffs in my system implementation at this point in time, the immediate goal of my project isn't REST itself, but a REST architectural model to guide me. The point of starting with this Reference Model is to ensure the longevity of my system, by taking a proactive approach to system evolution and maintenance, instead of a reactive approach -- utterly pragmatic. The only formal guide I've found for transposing an architectural style into an implementation starts with modeling, so that's now my logical starting point for a REST or REST-oriented development project. Still waiting up for Santa, Eric
Three key points: a. Media types provide protocol level visibility. This is fundamental. I don't think I argued against this. b. The Goodrelations ontology example below actually illustrates the conventions I am referring to (thanks for the example). For instance, the protocol sees the representation in Example 1 of http://www.ebusiness-unibw.org/wiki/Rdfa4google as a text/html representation. In other words, Goodrelations is invisible at the protocol level, and applications that understand that ontology can decipher extra meaning. There is a weakening of media types here, but that's okay since the protocol does not need to know about it. Web servers, caches, proxies, user agents et al. work fine treating the representations as text/html. c. Taking one step further, in some cases, such shared understanding may be informal and local. Most custom applications belong here. These applications can layer some shared understanding into representations of standard media types. I would not argue for standardization of such shared understanding in terms of new media types or some other ontology as long as protocol level visibility is maintained for correct execution and interpretation of the protocol. Informal is fine. I see this as a sliding scale. Subbu On Dec 24, 2009, at 11:50 PM, Eric J. Bowman wrote: > Subbu Allamaraju wrote: >> >> On Dec 23, 2009, at 7:13 PM, Eric J. Bowman wrote: >> >>> Until such time as a standard is accepted, I will refer to my >>> system as REST* because the system only becomes RESTful at that >>> point, as if by magic... If there's no standardization effort >>> involved, then the proprietary result fragments the Web and does >>> not achieve the goals of REST, and as a consequence, cannot be >>> called REST or even REST* because of the lack of intent to use >>> standards, a key element of the style. >> >> Sorry, but I must say that this is a fallacious approach for building >> networked applications. 
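Subbu's point (b) is easy to see in markup. The following is a hedged sketch in the spirit of the Rdfa4google example he cites, not a copy of it: a text/html page carrying GoodRelations data via RDFa. The property names are given as best recalled from the GoodRelations vocabulary and the price is invented, so treat the details as illustrative:

```html
<!-- Illustrative only: RDFa-annotated offer data layered into plain
     text/html.  Caches and proxies see ordinary HTML; a GR-aware
     agent sees a machine-readable offer. -->
<div xmlns:gr="http://purl.org/goodrelations/v1#" typeof="gr:Offering">
  <span property="gr:name">Socks-brand athletic socks, 6-pack</span>
  <div rel="gr:hasPriceSpecification">
    <span typeof="gr:UnitPriceSpecification">
      <span property="gr:hasCurrencyValue">9.99</span>
      <span property="gr:hasCurrency">USD</span>
    </span>
  </div>
</div>
```

At the protocol level this is just text/html, which is Subbu's point: the ontology is invisible to intermediaries, and only agents that share the vocabulary decipher the extra meaning.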
The goal of building *completely* decoupled >> applications (with *no* out-of-band knowledge) is as unsound as the >> "local-is-remote" approach that RPC tried. >> > > Uh, look back at those quotes of Roy's I excerpted, as he makes the > argument better than I can. The key to the REST style is that out-of-band > information be encompassed within standard methods, media types, > and link relations. This is what the self-descriptive messaging > constraint is all about, isn't it? My shared understanding of well-known > media types I see in HTTP headers tells me an awful lot about > processing the payload without my knowing any specifics of the system; > application/vnd.hypothetical tells me nothing unless I _do_ know the > specifics of the system... > > " > Do you see the difference? Encoding knowledge within clients and > servers of the other side's implementation mechanism is what we are > trying to avoid. > " > > How does any non-standardizable media type that's tied to a specific > implementation where each side has encoded knowledge of the other > side's implementation mechanism qualify as "decoupled"? > >> >> The set of problems that >> demand that degree of decoupling is small, and extending that notion >> to every application (to be branded RESTful to satisfy a particular >> interpretation of REST) is prohibitively expensive. >> > > The entire problem of the Web itself demands the decoupling illustrated > by browsers evolving the capability of displaying inline images whose > media types evolved one-at-a-time from image/gif through image/jpeg to > image/png, despite years of constant radical change within the > text/html media type's definition. Along came text/css and text/javascript > (initially and enduringly as application/x-javascript) and > syndication media types like RSS and Atom, then application/json. 
The > whole fact that we can build cross-browser Rich Internet Applications > today is due to a shared understanding of an evolving set of standard > media types. Why wouldn't this phenomenon be key to the REST style? > > The REST style not only calls for standards-based evolution, it has been > defined by what happened on the real-world Web -- the fact that the > decoupling provided by standard media types is responsible for the > spectacular growth and success of the Web. I'm going to have to throw > another Roy quote at you: > > " > REST is software design on the scale of decades: every detail is > intended to promote software longevity and independent evolution. Many > of the constraints are directly opposed to short-term efficiency... > Most don't think they need to design past the current release. > " > > A shared understanding of well-known media types is what allows > decoupling, decoupling is what allows independent evolution (as we've > seen with browsers and what have come to be known as Rich Internet > Applications), and every detail of REST (like standardization) is > intended to promote this. > > Nowhere can anything I have said be construed as suggesting that REST > be extended to every system. I am in fact saying the opposite. The > purpose of starting a project by modeling resources and deriving a REST > architectural model is to guide the development of the implementation. > There may be no need for the initial release to contain a full mapping > of the implementation to the model. Subsequent releases are guided by > defining what additional mapping(s) to the model will be included in > the revised implementation. > > The end result of achieving a REST system may take an amount of time > best measured in years. Which is no big thing, because REST is meant > to guide the development of systems whose lifespan may well encompass > decades. 
But until all REST constraints in an architectural model are > reflected in mappings to the implementation, the system cannot be > considered REST, only a derivative of the style. Which doesn't > necessarily matter. > > To say that an implementation lacks mappings to a REST architectural > model is not to pass a value judgment against the system. It is meant > to provide a measure by which to judge an implementation against the > Platonic Ideal for distributed hypermedia systems. If the needs of the > system at any point in time are being met by its implementation, then > there is no need to map it to additional REST constraints, is there? > > The problem is that the needs of a system tend to change over time. > Unanticipated growth could create the urgent need to apply another of > REST's constraints. The disciplined approach is to create additional > mappings to the architectural model in the implementation, which has > hopefully allowed for this by the developers' recognition at some point > in the past that growth requires change. > > Informed decisions to ignore constraints out-of-the-box but allow for > their addition in a future release, can only be made for Web systems in > terms of the benefits and tradeoffs of applying REST's constraints (as > we lack any other vocabulary). My entire proposed model-centric > approach to REST is to provide the basis for these informed decisions > through the visualization and analysis of a REST architectural model and > its implementation, as the system evolves over time. > > The current conversation on this list is the epitome of current- > release design: starting by identifying the end result as REST and > winging it from there, as reflected in the state of real-world REST > APIs at this time, rather than starting by modeling resources and > following an informed approach. 
Argument may be made that no new API > needs full-on REST conformity out-of-the-box, therefore REST is more > desirable as the goal of a release cycle. Since REST is just an > architectural style, an abstract, REST must be modeled somehow in order > to be used to guide that release cycle. > >> >> Even in the case >> of the web, where things seem to work in an autonomous fashion, we >> need a "human" user to guess the semantics and drive the hypermedia >> engine for every application. >> > > No we don't. Advances like the GoodRelations ontology show us that > it's possible to build a machine-readable interface for any number of > specific shopping cart implementations. I go through socks pretty > fast. I like 'em fresh, besides, I haven't seen anybody darn a sock > since before my Grandmom died (although in these times, we may see a > resurgence in sock darning). So I need a new six-pack of Sock-brand > mid-calf white athletic socks in size XL delivered to my shack in the > boonies every six weeks. > > I ought to be able to create an agent allowing me to enter (or search > for) store URIs who sell Socks-brand socks online, at my convenience as > configuration. Once every six weeks, my agent places an order at the > store with the lowest combination of price and delivery charge, and > once every six weeks the local UPS lady delivers new socks to my door. > I don't care where or how the order was placed, where it was out-of- > stock, or what the price variation was -- so long as my Socks arrive > at regular intervals and I'm not overpaying for them. > > As Web technology continues to evolve, this becomes possible -- best- > price cross-supplier automated resupply. 
Despite the fact that each > merchant likely has its own human-driven interface, the fact that the > merchants have collectively accepted an ontology for use within > standard media types is what makes it possible for agents to get what > they need from the same representations intended for human manipulation. > > I'm a bachelor guy, so I'll be an early adopter of any technology that > does my shopping for me. I hate not being able to find my specific > brand and flavor/scent of toothpaste/deodorant, despite the Wal-Mart > 20 miles away. So please don't tell me I'm doomed to forever having to > actually *shop* for basic necessities, online or off... :-( > >> >> Let's say, we start with your assertion that (a) everything must be >> communicated in-band, and (b) the media type must be standard. Let's >> apply this to Flickr as an example... >> > > I'd rather stick with Dare Obasanjo's example I excerpted and linked to > above in this thread, as it stayed on-topic to Roy's blog post. There > clearly exist several takes on the problem of a Contacts API, none of > which are RESTful, and all of which are vendor-specific. But there's > nothing about a Contacts API that can't be implemented in the REST > style. So there's no reason a REST architectural model of a Contacts > API can't be derived from these existing efforts, for the purpose of > guiding a standardization effort. I don't see how this leads to > zillions of standards, but I do see how it leads away from the existing > fragmentation to a Web where clients can easily interact with any vendor > Contacts List through the same generic REST API instead of being hard- > coded to support each vendor-specific non-REST Contacts API. > > This derived REST architectural model guides use and/or development of > media types and link relations. A new media type would have to > describe how methods are used. Any new link relation or application of > an existing link relation would need to be described by the new media > type. 
Or, the REST architectural model guides the development towards > standard media types fleshed out with a Contact-API-specific ontology > within, to allow more choice in media types and link relations within > implementations. If vendors can agree on methods and media types, > while each providing vendor-specific ontology within, interoperability > between implementations is much easier for a net positive effect. > > Implementations derived from this model are the visualizations which are > then analyzed for interoperability. The real-world analog would be the > development of Atom Protocol, where analysis of the interoperability of > evolving implementations led to revisions of the protocol. The evolving > standard for a RESTful Contacts API is guided by the REST architectural > model. A RESTful Contacts API built using experimental media types > or link relations is considered REST* because the standardization effort > exists, but since it's subject to change based on REST how can it be > REST until it's finalized? Paradoxical, I know. > > Since the development of the new standard is guided by and evaluated > against REST, the standard isn't complete until this is accomplished. > So how can an implementation of an evolving standard be considered REST > when the standards process itself hasn't yet come to that conclusion? > Only when the experimental bits become standard, may the asterisk be > removed, because only then is out-of-band information encompassed > within a well-known media type and decoupling of clients and servers > achieved. > > Experimental implementations being used to guide development of a new > media type can't be REST because the self-descriptive messaging > constraint is broken until the standard is finalized. Until then, > clients are coupled to servers by the use of the nonstandard media type. > There, I've officially run out of ways to re-state that... 
> >> >> A more reasonable thing to do is start off with some standard media >> types and formats, and mix them up with some conventions to make >> representations specific to each application domain. For example, >> using an atom:link in XML is a convention. Using RFC-3339 for dates >> and times is another convention. Mixing RDFa and microformats with >> HTML and XHTML is yet another convention. Such conventions don't aim >> to eliminate decoupling, but they reduce the amount of specificity >> while leaving room for evolution and promoting interoperability. >> > > I don't understand how someone who wrote that paragraph can be > disagreeing with me on the importance of standards to the self- > descriptive messaging constraint (or the critical importance of that > constraint to the REST style). You obviously understand the importance > of self-documenting hypertext-driven APIs. I obviously understand it > too, as written above and because I chose these quotes from Roy to post: > > " > It has value because it is far easier to standardize representation and > relation types than it is to standardize objects and object-specific > interfaces. In other words, there are fewer things to learn and they > can be recombined in unanticipated ways while remaining understandable > to the client. > > ... > > Exposing that vocabulary in the representations makes it easy to learn > and be adopted by others. Some of it will be standardized, some of it > will be domain-specific, but ultimately the agents will have to be > adaptable to new vocabulary. > " > > So it seems to me that we're disagreeing on where to start. It is my > belief that the logical starting place is the modeling of resources. 
> Some minimal REST architectural model must exist, before it can inform > decisions about which media types to use, how to recombine them in > unanticipated ways (like my use of application/atomcat+xml as a delta > to PATCH categories within application/atom+xml), as well as where this > approach falls short and calls for the extension or creation of a media > type, and where to draw the line between standardized and domain- > specific ontologies within a media type. There are too many possible > implementations of any REST system to start with implementation details. > > -Eric
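Eric's self-descriptive messaging argument above — that a shared understanding of well-known media types tells a client how to process a payload, while application/vnd.hypothetical tells it nothing — can be sketched roughly like this (Python; the handler names and return values are made up for illustration, not any real library's API):

```python
# Sketch: a client that processes representations purely by media type.
# Handlers exist only for *standard* media types the client already
# understands; everything else requires out-of-band knowledge.

def handle_atom(body):
    return "parsed as Atom feed"

def handle_html(body):
    return "parsed as hypertext"

STANDARD_HANDLERS = {
    "application/atom+xml": handle_atom,
    "text/html": handle_html,
}

def process(content_type, body):
    handler = STANDARD_HANDLERS.get(content_type)
    if handler is None:
        # application/vnd.hypothetical conveys nothing here -- the client
        # would be coupled to one server's implementation to decode it.
        raise ValueError("no shared understanding of %s" % content_type)
    return handler(body)
```

Any server emitting text/html or application/atom+xml interoperates with this client unchanged; a vendor-specific type forces a client-side change per vendor, which is the coupling being argued against.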
"Eric J. Bowman" wrote: > > The reason you can't DELETE willy-nilly despite DELETE being part of > the generic (cross-protocol) interface (a key distinction from your > description of it being automatically part of a uniform REST > interface) is because a REST API must be hypertext-driven. The > client MUST be instructed what to DELETE *in-band* within the > hypertext representation. > Unless, of course, you're applying REST's optional Code on Demand constraint. Careful -- I'm not changing positions or invalidating anything I've said; I see this conversation as an excellent teaching moment for CoD. Roy's dissertation defines this constraint: " REST allows client functionality to be extended by downloading and executing code in the form of applets or scripts. This simplifies clients by reducing the number of features required to be pre- implemented. Allowing features to be downloaded after deployment improves system extensibility. However, it also reduces visibility, and thus is only an optional constraint within REST. " Thus does REST allow for pragmatism, over rigid adherence to theory. Roy's example, though, is a relic of its times, when Java applets were all the rage. So let's use CoD to solve the problem of browser-based RESTful DELETE in a backwards-compatible fashion that doesn't rely on Xforms or HTML 5, neither of which is very compatible with the current browser state of affairs, let alone backwards-compatible to older clients. We create an HTML 4.01 representation, using a standard selectbox and a DELETE button. Since HTML 4.01 doesn't define DELETE, the button can't be directly linked to a browser's DELETE facility -- which it does have, via XHR -- which results in a hypertext-driven API where the DELETE action must be described within Javascript. So the DELETE button is just triggering a script. The loss of visibility is obvious. To decipher the API, the Javascript must be deciphered against a DOM view of the HTML 4.01 document (like with firebug). 
This is made somewhat easier by using standard JS libraries like jQuery or MooTools, which both allow CSS selectors to be used in the scripts, abstracting away the DOM-direct selection inherent to JS. Whereas anyone familiar with hypertext can refer to human- readable markup to decipher an Xforms interface, even without prior knowledge of Xforms, by simply driving the app and viewing source. This leads to a machine-readability problem. In order to determine the nature of the possible state transitions, an agent would need to have some sort of parser which could interpret the API defined in the JS. Such a parser would not only have to be able to decipher XHR requests abstracted away differently by every JS library out there, but also be able to decipher custom XHR JS in order to function. This is essentially the same problem which has led to the enthusiasm for RDFa. Each microformat requires its own custom parser to introspect and read metadata. RDFa provided a generic framework for expressing embedded microformat/microdata metadata. An RDFa parser works the same for every vocabulary, making it easy to write clients which take action based upon the metadata output of the parser. Xforms provides the same sort of benefit for machine-readable APIs, allowing a generic parser to accurately interpret the interface regardless of protocol and without any knowledge of system specifics. The difference between the Xforms/HTML 5 and HTML 4.01 + CoD approaches is visibility: in this case, the Xforms/HTML 5 approach results in a self-documenting API, while the HTML 4.01 + CoD approach does not. The consequence is one of serendipitous re-use, gained by using a media type that encompasses the desired methods' use instead of hiding the application behavior behind a scripting language. " An optional constraint allows us to design an architecture that supports the desired behavior in the general case, but with the understanding that it may be disabled within some contexts. 
" The general case, in this case, is browsers with Javascript activated that aren't behind firewalls that block Javascript. Where neither exists, the context is "no Javascript" which disables the desired application behavior. A self-documenting hypertext API does not suffer from this issue, although for the DELETE problem it's more theoretical than pragmatic at this time, since it's less likely to work in the real world than an XHR CoD solution. Just understand that the CoD constraint is used to extend clients to allow for nonstandard methods and media types. In this case, using HTML 4.01 to DELETE constitutes a nonstandard method. In a recent thread, I discussed using Java applets to deal with proprietary media types RESTfully through CoD. Properly used and understood, CoD's a valuable tool, but it doesn't begin to make most AJAX-y sites RESTful. -Eric
Subbu Allamaraju wrote: > > a. Media types provide protocol level visibility. This is > fundamental. I don't think I argued against this. > I thought you were arguing against my assertion that *standard* media types provide protocol-level visibility, not just any old media type? > > b. The Goodrelations ontology example below actually illustrates the > conventions I am referring to... > While also illustrating the danger posed. The example markup represents a most hideous abuse of the HTML host language. I've never seen a <span> wrapping other, indented <span>s referred to as "structured markup" before. The native semantics of HTML are used to create structured markup, i.e. a list of items is a <ul> element wrapping <li> elements -- providing block-element structure that doesn't rely on indenting. The <span> element is an inline element, no matter if there is a line break or indentation or, forfend, <br/> (a mostly-irrelevant element in light of CSS) separating them. I haven't studied GoodRelations enough yet, but I certainly hope they haven't specified this crazy approach to semantic-free HTML markup. They also present a horrific example of bad table markup, inaccessible in any way to assistive devices that can't interpret GoodRelations ontology, instead of gracefully degrading or providing enhancement to existing accessible, semantic markup. My fear is that things move in a direction where standard element semantics <ul> and <li> are utterly ignored in favor of metadata <span typeof='ListOfThings'><span typeof= 'ListedThing' content='foo'></span></span>. Start with a semantically marked-up table that uses accessibility attributes, _then_ enhance it to include GoodRelations metadata, instead of throwing out semantic, accessible markup in favor of cryptic metadata presented in tags like <span> which have no semantics or structure, to show a <table> of open/close times by day-of-week. 
GoodRelations also fails to put content inside of elements, instead tucking it away inside of attributes. Metadata attributes should be used to describe element content, not attribute content. IMNSHO. Which is why I stated that GoodRelations only shows us what is possible, not what is desirable. </rant> -Eric
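The "enhance, don't replace" point above might look like this in practice (a sketch; the vocabulary terms ex:ListOfThings and ex:name are hypothetical placeholders, not actual GoodRelations properties): start from semantic <ul>/<li> markup and layer the metadata on as attributes, instead of encoding structure in semantics-free <span>s.

```python
# Build the enhanced list with ElementTree so the markup stays well-formed:
# native HTML semantics first, ontology metadata layered on as attributes.
import xml.etree.ElementTree as ET

def enhanced_list(items, typeof="ex:ListOfThings", prop="ex:name"):
    """Semantic <ul>/<li> structure, with RDFa-style attributes on top."""
    ul = ET.Element("ul", {"typeof": typeof})
    for item in items:
        li = ET.SubElement(ul, "li", {"property": prop})
        li.text = item  # content lives in the element, not an attribute
    return ET.tostring(ul, encoding="unicode")
```

A metadata-unaware browser or assistive device still sees an ordinary list; an ontology-aware agent additionally sees the typed items — graceful degradation in both directions.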
Erik Wilde, one of the chairs of WS-REST, was kind enough to accept a suggestion from me to include a mention of Multi-Protocol REST on the list of topics in the Call for Papers of the workshop. My thanks to him. http://www.ws-rest.org/CfP I wish I were able to write a paper on the subject myself. If someone is tempted to write about it and wishes to discuss the limited experience I have with it, please let me know. Cheers. ______________________________________________________ Melhores cumprimentos / Beir beannacht / Best regards António Manuel dos Santos Mota ______________________________________________________
Hi Antonio, I'm a bit late in this thread, but wondered what you meant by "expensive" exactly regarding the Restlet framework. In many cases, you can rely on a single "org.restlet.jar" file for:
- Restlet API
- Restlet Engine (basic HTTP client and server included)
- no other dependency besides Java SE needed

Then, all the rest are optional extensions (including one for JAX-RS). The Restlet API itself is quite compact and easy to learn. It isn't intrusive and can be used as a library where you pick a few classes, or as a framework. Is it that you want to provide your own connectors for new URI schemes? This is quite easy to do, as we proved with SMTP, POP3, JDBC or more recently SIP. Best regards, Jerome Louvel -- Restlet ~ Founder and Lead developer ~ http://www.restlet.org Noelios Technologies ~ Co-founder ~ http://www.noelios.com On 26/11/2009 13:06, António Mota wrote: > I myself am not a big fan of frameworks; I even wrote elsewhere about > what I consider an anti-pattern that I called Framework Oriented Design > Architecture, or FODA for short (Portuguese speakers will appreciate the > irony...). > > Basically, what I call FODA is a more or less current practice of > choosing a Framework (or two or three) and then designing the architecture > around the framework, rather than doing the opposite, and then having to > "fit" the architecture to what the framework(s) can or cannot do, instead > of the business model that it was supposed to fit. > > The best example of this is the myriad of applications that start by > choosing "Spring + Hibernate" without taking into account the limitations > of those two frameworks, and then conform to those limitations, which in > turn limit the business value of the solution. > > That being said, it is undoubtedly true that frameworks are very useful > in avoiding writing "plumbing" code and speeding up development. Like > Spring Core when correctly used (I'm not so sure about Hibernate...) 
> > So of course, IMO, frameworks have their place in REST as in any > development style, as long as they do not dictate the overall architecture. > So I would say: design your architecture first (simply putting the ideas > in your head in a consistent order, defining clearly the ends to which > it aims, even sketching some fancy squares and circles and lines on a > napkin, not necessarily a "formal" design - but remember you'll need > that formality later in the process) and from there look not only at the > frameworks that give you what you need, but also at *how* they do it, > because you will probably need *some* of the functionality of the > framework but will want to avoid committing yourself to the whole > stack it provides - or you risk falling into a FODA. > > For instance, since the beginning we knew that for business reasons we > had to support not only HTTP but a few other methods of communicating > with our clients/business partners, and we built the design with that in > mind. Had we chosen a framework in the first place, we would have had to deal > with big problems down the road - like choosing Jersey, which gives us what > we need but only for HTTP, or Restlet, which supports some other > protocols but is way too "expensive" (not in money, but technologically > speaking) for one of the goals of our design - to be simple and "light". > And extensible. > > So we ended up using Spring (core, beans, context), Spring Batch, Spring > Web/MVC (only for the HTTP connector), big chunks of Jersey, > Spring-Integration (on hold now), Hibernate (against my will) and a few > others like JackRabbit, Funambol and jBPM for very specific things. > > I hope this helps you in analysing the frameworks. > > > berend@... wrote: >> >>>>>>> "dhillon" == dhillon sjsu<narpal.dhillon@... >> <mailto:narpal.dhillon%40ymail.com>> writes: >> >> dhillon> Hello, I am new to REST development. 
I have a general >> dhillon> question: which is the most popular and most used >> dhillon> framework for RESTful web services? e.g. Jersey, Restlet >> dhillon> or Rails. I am not asking for a specific language, but in >> dhillon> general. >> >> REST and framework don't belong in the same sentence. That's the short >> answer. >> >> The longer answer is that you don't need one, nor want one. If your >> framework cannot tell you the HTTP method, or doesn't allow (or makes it >> hard for) you to query or specify headers, it's probably not useful for >> REST either. >> >> -- >> Cheers, >> >> Berend de Boer >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Hi Jan, I've quickly skimmed through the previous answers, but thought you might find our Restlet extension for RDF interesting: http://wiki.restlet.org/docs_2.0/13-restlet/28-restlet/270-restlet.html In particular, it contains an RdfClientResource class that approaches the goal of a generic REST client, leveraging the Web of Data (Linked Data): http://www.restlet.org/documentation/snapshot/jse/ext/org/restlet/ext/rdf/RdfClientResource.html See for example the attached code, which illustrates a basic Linked Data crawler. As has been said here, it doesn't remove the need to understand ontologies and doesn't support actions other than browsing. Best regards, Jerome Louvel -- Restlet ~ Founder and Lead developer ~ http://www.restlet.org Noelios Technologies ~ Co-founder ~ http://www.noelios.com On 10/10/2009 01:21, Jan Vincent wrote: > > > Given the guidelines REST proposes, is there a generic client for > RESTful services? I know one may simply use some HTTP client and work > from there. However, I tend to see this practice as being quite > tedious. In the SOAP/WSDL world, for instance, there's code > generation. Though many of you would hate that (and it's > understandable why), perhaps in the REST world there is one that > automatically reads the proper hyperlinks we give them on some parts > of the resource representations provided at some URL. Of course, all > this is done dynamically. Given one thing to do, for instance, this > client would go from some URL (perhaps '/' of the site), then follow > some link from there and so on. Of course, should the server advise > the client to cache the response, it would do so accordingly. Again, > doing all these by hand may seem tedious. > > FYI though, from the server perspective, I see webmachine > (http://bitbucket.org/justin/webmachine/ > <http://bitbucket.org/justin/webmachine/> > ) as a pretty good example as to what I'm looking for. > > Jan Vincent Liwanag > jvliwanag@... <mailto:jvliwanag%40gmail.com> > >
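The generic-client idea above — start from a single entry URI and navigate only by link relations, never by URI structure — can be sketched against an in-memory "server" (Python; all URIs, link relations and data are invented for illustration):

```python
# Sketch of a hypermedia-driven client. It knows one entry URI and some
# link relations; it never constructs or inspects URI paths itself.

REPRESENTATIONS = {
    "/":      {"links": {"people": "/p"}},
    "/p":     {"links": {"first": "/p/101"}},
    "/p/101": {"firstName": "TONINHO", "links": {}},
}

def follow(start, *rels):
    """Start at one URI and move only by following named links."""
    uri = start
    for rel in rels:
        uri = REPRESENTATIONS[uri]["links"][rel]
    return REPRESENTATIONS[uri]
```

The server is free to rename "/p/101" to anything at all; as long as the link relations stay stable, `follow("/", "people", "first")` keeps working, which is the decoupling HATEOAS buys.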
Is there any standard RESTful way of doing claims-based authorization à la SAML and CardSpace? The authorization schemes I have seen so far usually encode a user reference and nothing more - there's no secure way to assert claims like email=xxx@... or employeenumber=12345 or age-below-twenty. I guess you can use the SAML "HTTP Redirect (GET) Binding", but that generates such a huge URL that it seems impractical to use (it's a base-64 encoding of a zip-encoding of a SAML XML document). As I understand it, a RESTful authorization scheme must be stateless, so you cannot rely on any kind of session use. This means you have to transfer all the claims on each and every request, which again means a potentially big overhead. What is needed is a standard way of encoding multiple claims in a compact, secure, trusted way such that they can be transferred on each request without too much overhead (including whatever crypto stuff is needed). Maybe you could create a temporary resource somewhere with the claims; then at least you only had to transfer the claims URL, not all the claims, and the server could then cache these claims. Any ideas or references? It even occurs to me that claims could be more RESTful than username/password since they don't require any out-of-band setup of user accounts. All that is needed is a standard for claims, and then everything should work if the claims are issued by an authority that the web service trusts. No need for any human interaction - the server just sends a challenge "show me your claims (and I accept them from authorities X, Y and Z)" whereafter the client sends the claims. These claims can even be obtained without human interaction if the client and the claims server trust each other. Comments? Thanks, Jörn
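One rough sketch of the "compact, secure, trusted" claims idea above (Python; this is a homemade illustration, not a standard format — the shared key and the claim names are assumptions): an issuer signs a base64-encoded claims document with an HMAC, the client carries the token on every request, and the stateless server verifies it without any session state.

```python
# Sketch: compact, tamper-evident claims token. NOT a standard format;
# key management and expiry are deliberately omitted.
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"issuer-and-service-shared-secret"  # hypothetical trust setup

def issue(claims):
    """Issuer side: serialize claims and sign them."""
    payload = base64.urlsafe_b64encode(
        json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify(token):
    """Service side: reject any token whose signature doesn't match."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, payload.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("claims have been tampered with")
    return json.loads(base64.urlsafe_b64decode(payload))
```

The token is small enough to send in a header on each request, so the interaction stays stateless; whether the overhead is acceptable depends on how many claims are asserted.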
2009/12/24 Eric J. Bowman <eric@...> > " > You don't get to decide what POST means -- that is decided by the > resource. Its purpose is supposed to be described in the same context > in which you found the URI that you are posting to. Presumably, that > context (a hypertext representation in some media type understood by > your client) tells you or your agent what to expect from the POST using > some combination of standard elements/relations and human-readable > text. The HTTP response will tell you what happened as a result... > " The confusion, specifically in regards to this statement and contrasting it with DELETE, is that POST is a much more "wide open" verb than DELETE. POST has an expectation of a media type argument, and has different behaviors. DELETE has none of these. There is no payload other than the URI, and there's no result other than success or failure. Arguably, the availability of a DELETE link in the hypertext informs the client that a DELETE is possible, but that is all. The Standard Interface defines what the individual verbs do, but does not define when they are or are not available. Just because everyone "knows" what DELETE does, does not mean that any and all resources can, or should, be deleted. You could see an interesting argument between a server and a client when a client tells the server to delete a resource, and then complains that the server actually did delete the resource rather than return an error. The server can say "Why did you delete the resource? I've never sent you anything that told you that you could." because the server never sent a media type with a DELETE link in it. Thus the distinction between the Standard Interface and actual behavior. But this is still confusing regarding non-hypertext media types, notably common things like JPGs etc., which typically do not have a representation that tells the client what it can and cannot do. I guess the only way for this information to be conveyed would be on, say, an index page. 
For example: http://example.com/images can return a result listing all of the images, a link to POST to add an image, and individual links to PUT and DELETE all of the existing images. But, clearly, you can not get this information by getting the resource itself, http://example.com/images/pic1.jpg, as that media type is not hypertext capable. I guess at this point you'd need to rely on Link headers, but I don't know the standardization status of those yet. Regards, Will Hartung (willh@...)
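The Link-header idea mentioned above can be sketched as follows (Python; the header value and rel names are hypothetical, loosely following the then-draft Link header syntax that became RFC 5988): hypertext controls for a JPEG travel in a response header, since the JPEG body itself can't carry them.

```python
import re

# Hypothetical Link header a response for /images/pic1.jpg might carry:
HEADER = '</images/pic1.jpg>; rel="edit", </images>; rel="collection"'

def parse_link_header(value):
    """Return {rel: target} from a simple Link header value.

    Handles only the <uri>; rel="..." shape used above; a real parser
    would cover quoting, extra parameters, and multi-valued rels.
    """
    links = {}
    for target, rel in re.findall(r'<([^>]+)>\s*;\s*rel="([^"]+)"', value):
        links[rel] = target
    return links
```

With this, even a non-hypertext representation can advertise where its collection lives and which control targets it supports, keeping the interaction link-driven.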
On Dec 29, 2009, at 6:16 PM, Will Hartung wrote: > 2009/12/24 Eric J. Bowman <eric@...> >> " >> You don't get to decide what POST means -- that is decided by the >> resource. Its purpose is supposed to be described in the same context >> in which you found the URI that you are posting to. Presumably, that >> context (a hypertext representation in some media type understood by >> your client) tells you or your agent what to expect from the POST >> using >> some combination of standard elements/relations and human-readable >> text. The HTTP response will tell you what happened as a result... >> " > > The confusion, specifically in regards to this statement and > contrasting it with DELETE, is that POST is a much more "wide open" > verb than DELETE. POST has an expectation of a media type argument, > and has different behaviors. DELETE has none of these. There is no > payload other than the URI, and there's no result other than success > or failure. Yes, POST is a generic 'process this' and DELETE is not. However, note that there can be a semantic attached to DELETE based on the hypermedia context of the target resource. If you know from some hypermedia that resource R represents an order, you might well know from the associated hypermedia specification that calling DELETE on the order resource results in order cancellation (and not just the technical removal of all representations). You could do the same with POST by defining in the spec that POSTing a certain payload to the order (e.g. a cancellation message) results in order cancellation. The benefit of using DELETE is the added visibility. DELETE has some semantics that are independent from any side effects on the server, for example that it is idempotent, that caches can erase any copy of that resource they might hold on to, etc. When designing hypermedia semantics, it is a benefit to make use of the other HTTP verbs if they fit the domain semantics of the interaction. 
Canceling an order can be done with a DELETE because everything that is true for DELETE is also true for 'cancel order'. By using DELETE instead of POST you gain the additional visibility. In fact, by using DELETE you gain visibility - POST (the 'process this' style) has no visibility at all. Jan > > Arguably, the availability of a DELETE link in the hypertext informs > the client that a DELETE is possible, but that is all. > > The Standard Interface defines what the individual verbs do, but does > not define when they are or are not available. Just because everyone > "knows" what DELETE does, does not mean that any and all resources > can, or should be deleted. > > You could see an interesting argument between a server and a client > when a client tells the server to delete a resource, and then > complains that the server actually did delete the resource rather than > return an error. The server can say "Why did you delete the resource, > I've never sent you anything that told you that you could." because > the server never sent a media type with a DELETE link in it. > > Thus the distinction between Standard Interface and actual behavior. > > But this is still confusing regarding non-hypertext media types, > notably common things like JPGs etc., which typically do not have a > representation that tells the client what it can and can not do. I > guess the only way for this information to be conveyed would be on, > say, an index page. > > For example: http://example.com/images can return a result listing all > of the images, a link to POST to add an image, and individual links to > PUT and DELETE all of the existing images. > > But, clearly, you can not get this information by getting the resource > itself, http://example.com/images/pic1.jpg, as that media type is not > hypertext capable. > > I guess at this point you'd need to rely on Link headers, but I don't > know the standardization status of those yet. > > Regards, > > Will Hartung > (willh@...) 
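Jan's visibility point can be made concrete with a toy intermediary. This is a hypothetical Python sketch (the function and its return shape are invented for illustration, not any real cache API): given only the request line, what may a generic component safely conclude?

```python
def visible_effect(method, uri):
    """What an intermediary can conclude from the request line alone."""
    if method == "GET":
        # Safe and idempotent; nothing needs invalidating.
        return {"evict": [], "idempotent": True}
    if method == "DELETE":
        # Generic semantics: the target is gone, so cached copies may be
        # evicted, and the request may be replayed (idempotent). All of
        # this holds whether DELETE means "remove" or "cancel order".
        return {"evict": [uri], "idempotent": True}
    # POST ('process this'): side effects are defined by the resource and
    # are invisible here. The intermediary can only invalidate the target
    # URI itself, and must never replay the request.
    return {"evict": [uri], "idempotent": False}

assert visible_effect("DELETE", "/orders/42")["idempotent"] is True
assert visible_effect("POST", "/orders/42")["idempotent"] is False
```

The point of the sketch is that the DELETE branch needs no domain knowledge at all, while the POST branch can say nothing useful beyond "something happened at this URI".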
-------------------------------------- Jan Algermissen Mail: algermissen@... Blog: http://algermissen.blogspot.com/ Home: http://www.jalgermissen.com --------------------------------------
In [1] Roy says: "That is why it is more efficient in a true REST-based architecture for there to be a hundred different methods with distinct (non-duplicating), universal semantics, than it is to include method semantics within the body of a POST." Re-reading the post I understand Roy to be saying that a 'good' method allows an intermediary to anticipate the state of the resource after the request. GET, PUT, DELETE have this property and PATCH might (not sure). Neither POST(a) nor POST(p) has it. Does anyone have an idea what other universal methods one might want to have (for any REST arch/for HTTP specifically)? Since Roy says 'a hundred' and I can't see any I thought I'd better ask... Is HTTP itself limiting the possible set for some reason? For example, because it only allows a request to be targeted towards a single identifier (which makes COPY and MOVE non-natural) Jan [1] http://tech.groups.yahoo.com/group/rest-discuss/message/4732
Will Hartung wrote: > > Roy wrote: > > > > You don't get to decide what POST means -- that is decided by the > > resource. Its purpose is supposed to be described in the same > > context in which you found the URI that you are posting to. > > Presumably, that context (a hypertext representation in some media > > type understood by your client) tells you or your agent what to > > expect from the POST using some combination of standard > > elements/relations and human-readable text. The HTTP response will > > tell you what happened as a result... > > > > The confusion, specifically in regards to this statement and > contrasting it with DELETE, is that POST is a much more "wide open" > verb than DELETE. POST has an expectation of a media type argument, > and has different behaviors. DELETE has none of these. There is no > payload other than the URI, and there's no result other than success > or failure. > Roy's statement applies to any request method. Your client doesn't even get to decide what GET means. Remember the brouhaha with Google Web Accelerator deleting resources? Some Web systems are designed with non- idempotent GETs, for whatever reason. While this may be documented using "standard elements/relations and human-readable text... described in the same context in which you found the URI that you are" GETting, it breaks the generic interface (by violating the HTTP spec). As for DELETE, I've already described its different behaviors pertaining to collections. DELETE requests may contain more than just target URI and method. Other headers may also come into play... > > Arguably, the availability of a DELETE link in the hypertext informs > the client that a DELETE is possible, but that is all. > No, the presence of DELETE in an Allow: header informs the client that a DELETE is possible, but that is all. 
A self-documenting, hypertext-driven REST API may instruct the client to do a HEAD request on each URL appearing in a <form> listing deletable resources, and further instruct the client that it must perform a conditional DELETE (to avoid deleting a resource that someone else just altered, always consider time and multi-user). If the Allow: header is implemented, the hypertext may instruct the client to exclude any resource from the deletable collection that didn't explicitly Allow: DELETE when the HEAD request was made. Yes, DELETE results in success or failure, however it's up to DELETE's implementation for a given resource to determine the failure mode... perhaps 401 to initiate challenge-response. Informing the user as to why the DELETE failed differentiates the uniform REST interface from the generic HTTP interface. Calling the DELETE method of a resource out-of-band of the hypertext application may even have caused the failure, as we shall see... (I haven't checked RFC 2616bis lately, but AFAIK the Allow: header may be sent with GET and HEAD requests, not just as part of a 405 response.) > > The Standard Interface defines what the individual verbs do, but does > not define when they are or are not available. Just because everyone > "knows" what DELETE does, does not mean that any and all resources > can, or should be deleted. > The generic interface defines the possibilities for individual methods, but does not define what they do within the context of an application. A client script coded against libcurl can make a HEAD request against a resource, and infer from the Allow: header that DELETE has actually been implemented for the resource. The script may then make a standard DELETE request against the resource, which may fail for any variety of reasons (the user isn't privileged enough, or the request wasn't conditional, etc.) which all come down to the failure of the client to be instructed in the _use_ of the interface in-band by hypertext. 
Sure, in theory, curl can DELETE any resource out there, because DELETE is part of the generic interface. But in practice, the Web isn't that simple, which you stated as, "Thus the distinction between Standard Interface and actual behavior." > > You could see an interesting argument between a server and a client > when a client tells the server to delete a resource, and then > complains that the server actually did delete the resource rather than > return an error. The server can say "Why did you delete the resource, > I've never sent you anything that told you that you could." because > the server never sent a media type with a DELETE link in it. > Typical REST systems have multiple clients -- a simple weblog has its native (X)HTML interface as one client, and any number of feed readers/aggregators. (I don't mean client in the "user" sense, here.) A REST developer understands that the visible nature of REST APIs leads to serendipitous re-use, and doesn't really care if external developers are following the hypertext constraint... What matters is the native hypertext client, as that's what self-documents all the self-descriptive messaging that's going on to drive application state, so hypothetically: Any resource which implements HTTP DELETE can require such a request to originate from a specific URL using REFERER. So you can restrict DELETE to originate from the native hypertext client. Graceful degradation is provided by a 403 response linking to the hypertext client, and explaining that it must be used in order to DELETE resources on the system. Granted, a savvy curl user can bypass this requirement, but REFERER is HTTP's answer to your hypothetical. A standard Atom Protocol client may know how to DELETE Atom resources on a system, but lacking a REFERER will receive a 403 response, if the server wants to enforce the hypertext constraint implemented in its native hypertext client. 
> > But this is still confusing regarding non-hypertext media types, > notably common things like JPGs etc., which typically do not have a > representation that tells the client what it can and can not do. I > guess the only way for this information to be conveyed would be on, > say, an index page. > Or using Atom media entries, designed to solve this very problem... > > For example: http://example.com/images can return a result listing all > of the images, a link to POST to add an image, and individual links to > PUT and DELETE all of the existing images. > Careful, when you say "link" you're implying GET. Do you mean to say that this index page has a link that I can follow which causes an image to be deleted? Or do you mean that this index page is a <form> which the human or machine user can drive to the next application state? > > But, clearly, you can not get this information by getting the resource > itself, http://example.com/images/pic1.jpg, as that media type is not > hypertext capable. > Sure you can. An Atom media entry (pic1.atom) uses hypertext to link itself to pic1.jpg, while pic1.jpg can use a Link header to attach itself to pic1.atom. Atom Protocol specifies that deletion of pic1.atom will cause pic1.jpg to be removed as well. A HEAD request to pic1.jpg doesn't Allow: DELETE, but reveals Link: alternate=pic1.atom. If a HEAD request to pic1.atom reveals Allow: DELETE then we have a self-describing Atom Protocol interface. It isn't self-documenting -- all we know is that a DELETE on pic1.atom is the mechanism for removing pic1.jpg, we don't know *how* to DELETE pic1.atom (unless we rely on out-of-band knowledge and just DELETE it willy-nilly). -Eric
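Eric's pic1.jpg / pic1.atom discovery can be sketched from the client's side. This is a hypothetical Python sketch: the header values are invented to match the example above (a real client would obtain them via HEAD requests), and the Link-header parsing is deliberately minimal, handling only this single-link case.

```python
import re

# Hypothetical HEAD responses for the two resources in Eric's example.
head_jpg = {"Allow": "GET, HEAD", "Link": '<pic1.atom>; rel="alternate"'}
head_atom = {"Allow": "GET, HEAD, PUT, DELETE"}

def allows(headers, method):
    """Does the Allow header advertise the given method?"""
    return method in [m.strip() for m in headers.get("Allow", "").split(",")]

def alternate(headers):
    """Extract the target of a (single-valued) rel="alternate" Link header."""
    m = re.match(r'<([^>]+)>;\s*rel="alternate"', headers.get("Link", ""))
    return m.group(1) if m else None

# pic1.jpg is not directly deletable, but points at its Atom media entry...
assert not allows(head_jpg, "DELETE")
assert alternate(head_jpg) == "pic1.atom"
# ...and the media entry advertises DELETE, which (per AtomPub) removes both.
assert allows(head_atom, "DELETE")
```

As Eric notes, this makes the interface self-describing but not self-documenting: the client learns *that* DELETE on pic1.atom exists, not *how* it must be invoked.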
On Dec 29, 2009, at 1:56 PM, Eric J. Bowman wrote: > A self-documenting, hypertext-driven REST API may instruct the client > to do a HEAD request on each URL appearing in a <form> listing > deletable resources, and further instruct the client that it must > perform a conditional DELETE (to avoid deleting a resource that someone > else just altered, always consider time and multi-user). If the Allow: > header is implemented, the hypertext may instruct the client to exclude > any resource from the deletable collection that didn't explicitly > Allow: DELETE when the HEAD request was made. Couple of points - Neither presence of a method in the Allow header nor the presence of some hypertext guarantees that any method will succeed. For instance, Authorization may change the outcome. - The above point about conditional requests is not true. Conditional requests are part of the uniform interface, and servers never have to say explicitly via hypertext that client must make a conditional request. Subbu
On Dec 29, 2009, at 1:56 PM, Eric J. Bowman wrote: > No, the presence of DELETE in an Allow: header informs the client that > a DELETE is possible, but that is all. > > A self-documenting, hypertext-driven REST API may instruct the client > to do a HEAD request on each URL appearing in a <form> listing > deletable resources, and further instruct the client that it must > perform a conditional DELETE (to avoid deleting a resource that someone > else just altered, always consider time and multi-user). If the Allow: > header is implemented, the hypertext may instruct the client to exclude > any resource from the deletable collection that didn't explicitly > Allow: DELETE when the HEAD request was made. > > Yes, DELETE results in success or failure, however it's up to DELETE's > implementation for a given resource to determine the failure mode... > perhaps 401 to initiate challenge-response. Informing the user as to > why the DELETE failed differentiates the uniform REST interface from > the generic HTTP interface. Calling the DELETE method of a resource > out-of-band of the hypertext application may even have caused the > failure, as we shall see... > > (I haven't checked RFC 2616bis lately, but AFAIK the Allow: header may > be sent with GET and HEAD requests, not just as part of a 405 response.) Yes. BTW, "Allow: DELETE" in HTTP is a form of hypertext, as are all of the resource metadata fields (data-embedded control information). That is why they were considered part of the representation, though I think we are making Allow just a response-header in httpbis. ....Roy
"Roy T. Fielding" wrote: > > > > > (I haven't checked RFC 2616bis lately, but AFAIK the Allow: header > > may be sent with GET and HEAD requests, not just as part of a 405 > > response.) > > Yes. > > BTW, "Allow: DELETE" in HTTP is a form of hypertext, as are all of > the resource metadata fields (data-embedded control information). > That is why they were considered part of the representation, though > I think we are making Allow just a response-header in httpbis. > Whoops, definitely poor word choice on my part, "sent with" shoulda been "sent in response to" -- I never meant to imply Allow was a request header. Thanks for clarifying my understanding of the term "hypertext" to include resource metadata. -Eric
Subbu Allamaraju wrote: > > - Neither presence of a method in the Allow header nor the presence > of some hypertext guarantees that any method will succeed. For > instance, Authorization may change the outcome. > I don't see where I've implied otherwise. I would go even farther, and say that the presence or absence of a method in hypertext (including the Allow header, as it's resource metadata) guarantees you nothing. The only authoritative answer is the response to a request method itself. However, determining if a resource is deletable by actually deleting it, leaves something to be desired. Which is why I'm strongly in favor of implementing the Allow: header accurately. While authorization may change the outcome of a request, I don't base Allow: contents on authorization. I always use Allow: as a property of the resource, not as a resource state, but that's just me. > > On Dec 29, 2009, at 1:56 PM, Eric J. Bowman wrote: > > > A self-documenting, hypertext-driven REST API may instruct the > > client to do a HEAD request on each URL appearing in a <form> > > listing deletable resources, and further instruct the client that > > it must perform a conditional DELETE (to avoid deleting a resource > > that someone else just altered, always consider time and > > multi-user). If the Allow: header is implemented, the hypertext > > may instruct the client to exclude any resource from the deletable > > collection that didn't explicitly Allow: DELETE when the HEAD > > request was made. > > - The above point about conditional requests is not true. Conditional > requests are part of the uniform interface, and servers never have to > say explicitly via hypertext that client must make a conditional > request. > Conditional requests are part of HTTP's generic interface. There are two ways to make conditional requests part of a uniform REST interface. First, is if the media type specifies conditional requests (Atom). 
Second, is if the conditional request is API-specific, and that API is self-documenting in hypertext. A conditional DELETE isn't described by any media type, i.e. there's no common out-of-band knowledge detailing this application behavior. So in order for a conditional-DELETE API to be RESTful, hypertext must exist which instructs clients of this requirement in-band. XForms makes it possible (not saying it's easy) to specify that the round-trip of an ETag in an If-Match header be sent with DELETE, in hypertext. Without the server explicitly describing this interface in hypertext, how is a client to determine that DELETE must be a conditional request on the system? Besides out-of-band knowledge hard-coded into the client... -Eric
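The conditional DELETE Eric describes can be sketched from the server's side. This is a hypothetical Python sketch: the store layout, ETag values, and status-code choices are invented for illustration, and a real API would advertise the If-Match requirement in its hypertext (e.g. via XForms) rather than leave clients to discover it.

```python
def conditional_delete(store, uri, if_match):
    """Delete uri only when the client's If-Match equals the current ETag."""
    if uri not in store:
        return 404
    _representation, etag = store[uri]
    if if_match is None:
        # The hypertext said requests must be conditional; refuse bare DELETEs.
        return 403
    if if_match != etag:
        # The resource changed since the client last retrieved it.
        return 412
    del store[uri]
    return 204

store = {"/orders/42": ("<order/>", '"v1"')}
assert conditional_delete(store, "/orders/42", None) == 403      # unconditional
assert conditional_delete(store, "/orders/42", '"v0"') == 412    # stale ETag
assert conditional_delete(store, "/orders/42", '"v1"') == 204    # success
assert "/orders/42" not in store
```

The 412 branch is exactly the "someone else just altered it" case from earlier in the thread; the 403 branch is one (invented) way a server might refuse clients that bypass the hypertext.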
On Dec 30, 2009, at 5:49 AM, Eric J. Bowman wrote: > Conditional requests are part of HTTP's generic interface. There are > two ways to make conditional requests part of a uniform REST interface. > First, is if the media type specifies conditional requests (Atom). > Second, is if the conditional request is API-specific, and that API is > self-documenting in hypertext. > > A conditional DELETE isn't described by any media type, i.e. there's no > common out-of-band knowledge detailing this application behavior. So in > order for a conditional-DELETE API to be RESTful, hypertext must exist > which instructs clients of this requirement in-band. XForms makes it > possible (not saying it's easy) to specify that the round-trip of an ETag in > an If-Match header be sent with DELETE, in hypertext. > > Without the server explicitly describing this interface in hypertext, > how is a client to determine that DELETE must be a conditional request > on the system? Besides out-of-band knowledge hard-coded into the > client... Headers that drive conditional requests are indeed part of a representation, and media types such as application/atom+xml and text/html do not have to specify their behavior. Neither Atom nor AtomPub specify conditional requests (they just use them in examples). No per-media-type specification or out-of-band knowledge is necessary. In fact, per-media-type specification of such behavior would break caches. Subbu
Hello Jörn, You could in principle define your own headers (or try to standardise some headers) to propagate SAML assertions (or similar tokens) in a RESTful way. Unfortunately, that's unlikely to work in browsers. Even SAML's HTTP Redirect (GET) Binding is often only a one-off thing that can only be used to log in (and thus get a cookie), otherwise you'd have to repeat this query for all URIs you want to use (and thus change the URI, since the query is part of the URI, strictly speaking). We've been doing some work on FOAF+SSL whereby you avoid the non-RESTful authentication issue by using a TLS/SSL client-certificate for the authentication (which is under the HTTP level), but for servers that don't support SSL (or even the settings required for FOAF+SSL), we've also had to use some SSO-like login mechanism via cookies. This being said, discovering the identity in FOAF+SSL is really where this system makes use of REST: your ID is a URI (a WebID) that can be dereferenced and about which things can be said using RDF/semantic web. The issue of using cookies for authentication/authorisation comes from the lack of browser support (and standardisation) for other headers. I sometimes wish there were 'WWW-Authenticate: transport' (or something similar, to make handling tokens out of HTTP like SSL client-certificates cleaner, and thus avoid some problems related to the TLS renegotiation issue) or 'WWW-Authenticate: token' (to have clear authentication-dedicated tokens, rather than cookies that are also used for sessions), but they just don't exist in browsers. Would it be worth suggesting this approach to the HTTP WG? Perhaps, but there's little point doing so if the major browser vendors are not on board. I presume most people consider that cookies are an acceptable practical solution, even if it breaks the REST principles. Best wishes, Bruno. Jörn Wildt wrote: > > > Is there any standard RESTful way of doing claims based authorization à la > SAML and CardSpace? 
The authorization schemes I have seen so far usually > encode a user reference and nothing more - there's no secure way to assert > claims like email=xxx@... <mailto:email%3Dxxx%40yyy.zz> or > employeenumber=12345 or age-below-twenty. > > I guess you can use SAML "HTTP Redirect (GET) Binding", but that generates > such a huge URL that it seems impractical to use (it's a base-64 > encoding of > a zip-encoding of a SAML XML document). > > As I understand it a RESTful authorization scheme must be stateless, so you > cannot rely on any kind of session use. This means you have to transfer all > the claims on each and every request which again means a potentially big > overhead. > > What is needed is a standard way of encoding multiple claims in a compact, > secure, trusted way such that they can be transferred on each request > without too much overhead (including whatever crypto stuff is needed). > > Maybe you could create a temporary resource somewhere with the claims, > then > at least you only had to transfer the claims URL, not all the claims, and > the server could then cache these claims. > > Any ideas or references? > > It even occurs to me that claims could be more RESTful than > username/password since they don't require any out-of-band setup of user > accounts. All that is needed is a standard for claims and then everything > should work if the claims are issued by an authority that the web service > trusts. No need for any human interaction - the server just sends a > challenge "show me your claims (and I accept them from authority X, Y and > Z)" whereafter the client sends the claims. These claims can even be > obtained without human interaction if the client and the claims server > trust > each other. > > Comments? > > Thanks, Jörn